[jira] [Commented] (HDFS-17630) Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884187#comment-17884187
 ] 

ASF GitHub Bot commented on HDFS-17630:
---

cxzl25 commented on PR #7063:
URL: https://github.com/apache/hadoop/pull/7063#issuecomment-2370670841

   Spark uses FsUrlStreamHandlerFactory to support jars on HDFS, but in some 
scenarios PacketReceiver is invoked re-entrantly, causing Spark to fail to start.
   
   cc @sunchao 
   
   
https://github.com/apache/spark/blob/982028ea7fc61d7aa84756aa46860ebb49bfe9d1/sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala#L201
   
   
   
 PacketReceiver Exception
 
   ```java
   java.lang.Exception
       at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:166)
       at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:112)
       at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:187)
       at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:146)
       at org.apache.hadoop.hdfs.ByteArrayStrategy.readFromBlock(ReaderStrategy.java:118)
       at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:789)
       at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:855)
       at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:919)
       at java.base/java.io.DataInputStream.read(DataInputStream.java:158)
       at java.base/java.io.InputStream.transferTo(InputStream.java:796)
       at java.base/java.nio.file.Files.copy(Files.java:3151)
       at java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:216)
       at java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:212)
       at java.base/java.security.AccessController.doPrivileged(AccessController.java:571)
       at java.base/sun.net.www.protocol.jar.URLJarFile.retrieve(URLJarFile.java:211)
       at java.base/sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:71)
       at java.base/sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:153)
       at java.base/sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:109)
       at java.base/sun.net.www.protocol.jar.JarURLConnection.getJarFile(JarURLConnection.java:70)
       at java.base/jdk.internal.loader.URLClassPath$JarLoader.getJarFile(URLClassPath.java:814)
       at java.base/jdk.internal.loader.URLClassPath$JarLoader$1.run(URLClassPath.java:774)
       at java.base/jdk.internal.loader.URLClassPath$JarLoader$1.run(URLClassPath.java:768)
       at java.base/java.security.AccessController.doPrivileged(AccessController.java:714)
       at java.base/jdk.internal.loader.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:767)
       at java.base/jdk.internal.loader.URLClassPath$JarLoader.<init>(URLClassPath.java:734)
       at java.base/jdk.internal.loader.URLClassPath$3.run(URLClassPath.java:497)
       at java.base/jdk.internal.loader.URLClassPath$3.run(URLClassPath.java:479)
       at java.base/java.security.AccessController.doPrivileged(AccessController.java:714)
       at java.base/jdk.internal.loader.URLClassPath.getLoader(URLClassPath.java:478)
       at java.base/jdk.internal.loader.URLClassPath.getLoader(URLClassPath.java:446)
       at java.base/jdk.internal.loader.URLClassPath.findResource(URLClassPath.java:292)
       at java.base/java.net.URLClassLoader$2.run(URLClassLoader.java:629)
       at java.base/java.net.URLClassLoader$2.run(URLClassLoader.java:627)
       at java.base/java.security.AccessController.doPrivileged(AccessController.java:400)
       at java.base/java.net.URLClassLoader.findResource(URLClassLoader.java:626)
       at java.base/java.lang.ClassLoader.getResource(ClassLoader.java:1418)
       at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2861)
       at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3135)
       at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3094)
       at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:3067)
       at org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2945)
       at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2927)
       at org.apache.hadoop.conf.Configuration.get(Configuration.java:1265)
       at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1319)
       at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1545)
       at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.<init>(PacketReceiver.java:82)
       at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.<init>(BlockReaderRemote.java:101)
   ```

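The trace above explains how MAX_PACKET_SIZE can be observed as 0: PacketReceiver's static initializer reads the limit via Configuration.getInt, the resource load goes through a jar: URL backed by HDFS, and that read constructs another PacketReceiver while the class is still mid-initialization. Per the JLS, a thread re-entering a class it is already initializing does not block, so it sees static fields at their default values. A minimal, self-contained sketch of that hazard (hypothetical class and field names, not the Hadoop code):

```java
public class StaticInitReentrancy {
    static class Holder {
        // Runs first: calls back into Holder while its static
        // initialization is still in progress on this thread.
        static final int OBSERVED_DURING_INIT = readMaxReentrantly();

        // Integer.parseInt prevents compile-time constant folding, so the
        // re-entrant read above really hits the field, not an inlined literal.
        static final int MAX = Integer.parseInt("67108864");

        static int readMaxReentrantly() {
            // Same-thread re-entry proceeds without waiting, so MAX is
            // still at its default value (0) here.
            return MAX;
        }
    }

    public static void main(String[] args) {
        System.out.println(Holder.OBSERVED_DURING_INIT); // prints 0
        System.out.println(Holder.MAX);                  // prints 67108864
    }
}
```

Here OBSERVED_DURING_INIT plays the role of the value read re-entrantly: it sees MAX before assignment, which mirrors how a nested HDFS read can see MAX_PACKET_SIZE as 0.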
[jira] [Commented] (HDFS-17631) RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884142#comment-17884142
 ] 

ASF GitHub Bot commented on HDFS-17631:
---

LiuGuH opened a new pull request, #7066:
URL: https://github.com/apache/hadoop/pull/7066

   …_FAILED when EditLogInputStream.skipUntil() throw IOException
   
   
   
   ### Description of PR
   As described in 
[HDFS-17631](https://issues.apache.org/jira/browse/HDFS-17631)
   
   ### How was this patch tested?
   Add a test case for this.




> RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when 
> EditLogInputStream.skipUntil() throw IOException
> --
>
> Key: HDFS-17631
> URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Now when EditLogInputStream.skipUntil() throws an 
> IOException in RedundantEditLogInputStream.nextOp(), the stream still goes into 
> State.OK rather than State.STREAM_FAILED. 
> The proper state sequence would be as below:
> State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>
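The transition described above can be modeled as a small state machine. This is an illustrative simplification with hypothetical names, not the actual RedundantEditLogInputStream code:

```java
public class RedundantStreamStates {
    // Simplified model of the state handling HDFS-17631 proposes; the enum
    // and method names are illustrative, not the real Hadoop API.
    enum State { SKIP_UNTIL, OK, STREAM_FAILED }

    // After skipUntil(): an IOException should yield STREAM_FAILED (so the
    // reader falls through to the next redundant stream), not OK.
    static State afterSkipUntil(boolean threwIOException) {
        return threwIOException ? State.STREAM_FAILED : State.OK;
    }

    // From STREAM_FAILED the reader retries with the next stream.
    static State afterStreamFailed() {
        return State.SKIP_UNTIL;
    }

    public static void main(String[] args) {
        System.out.println(afterSkipUntil(true));  // prints STREAM_FAILED
        System.out.println(afterStreamFailed());   // prints SKIP_UNTIL
    }
}
```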




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17631) RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException

2024-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17631:
--
Labels: pull-request-available  (was: )








[jira] [Assigned] (HDFS-17631) RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException

2024-09-24 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua reassigned HDFS-17631:
--

Assignee: liuguanghua








[jira] [Created] (HDFS-17631) RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException

2024-09-24 Thread liuguanghua (Jira)
liuguanghua created HDFS-17631:
--

 Summary: RedundantEditLogInputStream.nextOp() will be 
State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException
 Key: HDFS-17631
 URL: https://issues.apache.org/jira/browse/HDFS-17631
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: Now when EditLogInputStream.skipUntil() throws an IOException 
in RedundantEditLogInputStream.nextOp(), the stream still goes into State.OK rather than 
State.STREAM_FAILED. 

The proper state sequence would be as below:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL
Reporter: liuguanghua









[jira] [Commented] (HDFS-17630) Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884034#comment-17884034
 ] 

ASF GitHub Bot commented on HDFS-17630:
---

hadoop-yetus commented on PR #7063:
URL: https://github.com/apache/hadoop/pull/7063#issuecomment-2369281227

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   5m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   5m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   5m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   5m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  hadoop-hdfs-project: The 
patch generated 0 new + 32 unchanged - 2 fixed = 32 total (was 34)  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   5m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 224m  3s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 400m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7063/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 40d2434a6c71 5.15.0-119-generic #129-Ubuntu SMP Fri Aug 2 
19:25:20 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 577163b14b384629890ce476120132d94c04e2ef |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7063/2/testReport/ |
   | Max. process+thread count | 3290 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-pro

[jira] [Resolved] (HDFS-17040) Namenode web UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze resolved HDFS-17040.
---
Fix Version/s: 3.5.0
 Assignee: Attila Magyar
   Resolution: Fixed

The pull request is now merged. Thanks, [~amagyar]!

> Namenode web UI should set content type to application/octet-stream when 
> uploading a file
> -
>
> Key: HDFS-17040
> URL: https://issues.apache.org/jira/browse/HDFS-17040
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When uploading a file using -WebHDFS- the Namenode web UI, the content type 
> is set to application/x-www-form-urlencoded, as this is the default 
> used by jQuery:
> https://github.com/apache/hadoop/blob/160b9fc3c9255024c00d487b7fcdf5ea59a42781/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js#L516
> This causes Knox to urlencode the request body, so that uploading a CSV file 
> containing 1,2,3 will result in 1%2C2%2C3.
> Instead of application/x-www-form-urlencoded, I think the content type should be 
> set to application/octet-stream.
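The corruption is reproducible with the JDK's own form encoder: a hop that treats the body as application/x-www-form-urlencoded percent-encodes the commas. A standalone illustration (this is not the Knox code path, just the encoding it applies):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class FormUrlEncodingDemo {
    // What a form-urlencoding hop does to a raw CSV body: commas become %2C,
    // corrupting the uploaded file's contents.
    public static String encodeBody(String body) {
        return URLEncoder.encode(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(encodeBody("1,2,3")); // prints 1%2C2%2C3
    }
}
```

Sending the body as application/octet-stream tells intermediaries to pass the bytes through untouched, which is why the patch switches the content type.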






[jira] [Commented] (HDFS-17040) Namenode web UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884010#comment-17884010
 ] 

ASF GitHub Bot commented on HDFS-17040:
---

szetszwo merged PR #5721:
URL: https://github.com/apache/hadoop/pull/5721










[jira] [Commented] (HDFS-17040) Namenode web UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884009#comment-17884009
 ] 

ASF GitHub Bot commented on HDFS-17040:
---

szetszwo commented on PR #5721:
URL: https://github.com/apache/hadoop/pull/5721#issuecomment-2369168774

   Since this pull request already passed all the checks earlier, let's just 
merge it.










[jira] [Commented] (HDFS-17040) Namenode web UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884006#comment-17884006
 ] 

ASF GitHub Bot commented on HDFS-17040:
---

Galsza commented on PR #5721:
URL: https://github.com/apache/hadoop/pull/5721#issuecomment-2369155419

   @szetszwo could you please take another look at this PR?










[jira] [Commented] (HDFS-17040) Namenode web UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884005#comment-17884005
 ] 

ASF GitHub Bot commented on HDFS-17040:
---

Galsza commented on PR #5721:
URL: https://github.com/apache/hadoop/pull/5721#issuecomment-2369152762

   Thanks for the patch, looks good to me. +1
   
   I've tested this change in a cluster by trying to upload a file containing 
the text "hello %" via the Namenode UI. The change works as intended. I haven't 
found a CORS-related problem; at least Chrome's security levels don't 
seem to affect it.










[jira] [Commented] (HDFS-17040) Namenode web UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884003#comment-17884003
 ] 

ASF GitHub Bot commented on HDFS-17040:
---

Galsza commented on code in PR #5721:
URL: https://github.com/apache/hadoop/pull/5721#discussion_r1771950619


##
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js:
##
@@ -518,7 +518,8 @@
 url: url,
 data: file.file,
 processData: false,
-crossDomain: true
+crossDomain: true,
+contentType: 'application/octet-stream'

Review Comment:
   @szetszwo I'm not sure the CORS problem still exists. I've tested this using 
the Chrome browser, and with maximum security enabled the change still worked.











[jira] [Updated] (HDFS-17040) Namenode web UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDFS-17040:
--
Summary: Namenode web UI should set content type to 
application/octet-stream when uploading a file  (was: Namenode UI should set 
content type to application/octet-stream when uploading a file)







[jira] [Updated] (HDFS-17040) Namenode UI should set content type to application/octet-stream when uploading a file

2024-09-23 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDFS-17040:
--
Component/s: ui
Description: 
When uploading a file using -WebHDFS- the Namenode web UI, the content type is 
set to application/x-www-form-urlencoded, as this is the default used 
by jQuery:

https://github.com/apache/hadoop/blob/160b9fc3c9255024c00d487b7fcdf5ea59a42781/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js#L516

This causes Knox to urlencode the request body, so that uploading a CSV file 
containing 1,2,3 will result in 1%2C2%2C3.

Instead of application/x-www-form-urlencoded, I think the content type should be set 
to application/octet-stream.

  was:
When uploading a file WebHDFS will set the content type to 
application/x-www-form-urlencoded, as this is the default used by jQuery

https://github.com/apache/hadoop/blob/160b9fc3c9255024c00d487b7fcdf5ea59a42781/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js#L516

This causes knox to urlencode the request body so that uploading a CVS file 
1,2,3 will result 1%2C2%2C3.

Instead of application/x-www-form-urlencoded I think the encoding should be set 
to application/octet-stream.

Summary: Namenode UI should set content type to 
application/octet-stream when uploading a file  (was: WebHDFS UI should set 
content type to application/octet-stream when uploading a file)

WebHDFS is a [REST 
API|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html]
 which does not have a UI. The 
[explorer.js|https://github.com/apache/hadoop/blob/160b9fc3c9255024c00d487b7fcdf5ea59a42781/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js#L516]
 file is from the Namenode web UI, which was designed many years ago to be used 
by a human with a browser. The Namenode web UI has never been a public API, and it 
probably does not follow the HTTP standard very strictly. Of course, we 
should fix this problem.

[~amagyar], thanks a lot for filing this JIRA!

(Updating the Summary and Description ...)







[jira] [Commented] (HDFS-17630) Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883874#comment-17883874
 ] 

ASF GitHub Bot commented on HDFS-17630:
---

hadoop-yetus commented on PR #7063:
URL: https://github.com/apache/hadoop/pull/7063#issuecomment-2368151787

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7063/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 3 new + 1 
unchanged - 0 fixed = 4 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | -1 :x: |  spotbugs  |   2m 37s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7063/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  36m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 147m 16s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
   |  |  
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.MAX_PACKET_SIZE 
isn't final but should be refactored to be so  At PacketReceiver.java:be 
refactored to be so  At PacketReceiver.java:[line 51] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7063/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dcb310fda747 5.15.0-119-generic #129-Ubuntu SMP Fri Aug 2 
19:25:20 U

[jira] [Updated] (HDFS-17630) Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0

2024-09-23 Thread dzcxzl (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dzcxzl updated HDFS-17630:
--
Affects Version/s: 3.4.0

> Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0
> -
>
> Key: HDFS-17630
> URL: https://issues.apache.org/jira/browse/HDFS-17630
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: dzcxzl
>Priority: Major
>
> Nested calls during class initialization cause PacketReceiver#MAX_PACKET_SIZE to be initialized to 0.
>  
> {code:java}
> java.io.IOException: Incorrect value for packet payload size: 1014776
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:167)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:112)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:187)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:146)
>     at 
> org.apache.hadoop.hdfs.ByteArrayStrategy.readFromBlock(ReaderStrategy.java:118)
>     at 
> org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:789)
>     at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:855)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:919)
>     at java.base/java.io.DataInputStream.read(DataInputStream.java:158)
>     at java.base/java.io.InputStream.transferTo(InputStream.java:796)
>     at java.base/java.nio.file.Files.copy(Files.java:3151)
>     at 
> java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:216)
>     at 
> java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:212)
>     at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:571)
>     at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1319)
>     at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1545)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.<init>(PacketReceiver.java:82)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.<init>(BlockReaderRemote.java:101)
>  {code}
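The failure mode above can be illustrated with a minimal, self-contained sketch (not Hadoop code, all names here are stand-ins): when a class's static initializer indirectly triggers code that reads one of that same class's static fields, the JVM permits the re-entrant read and the field is observed at its default value 0.

```java
// Minimal sketch (not Hadoop code) of re-entrant static initialization:
// while the class is still being initialized, a nested call reads
// maxPacketSize and observes the default value 0, not the configured one.
public class StaticInitReentry {
    static final int CONFIGURED_MAX = 16 * 1024 * 1024;

    // Stands in for PacketReceiver#MAX_PACKET_SIZE, assigned during <clinit>.
    static int maxPacketSize = readConfig();

    // Captures what a nested caller saw while <clinit> was still running.
    static int seenDuringInit;

    static int readConfig() {
        // Stands in for Configuration.getInt() pulling a resource through an
        // HDFS-backed URL, which re-enters the packet-reading code path.
        seenDuringInit = maxPacketSize; // still 0 here: assignment not done yet
        return CONFIGURED_MAX;
    }

    public static void main(String[] args) {
        System.out.println(seenDuringInit);  // prints 0
        System.out.println(maxPacketSize);   // prints 16777216
    }
}
```

Any packet read issued from inside the nested call then validates payload sizes against a maximum of 0, which matches the "Incorrect value for packet payload size" error in the trace above.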



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17630) Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883809#comment-17883809
 ] 

ASF GitHub Bot commented on HDFS-17630:
---

cxzl25 opened a new pull request, #7063:
URL: https://github.com/apache/hadoop/pull/7063

   ### Description of PR
   
   Nested calls during class initialization cause PacketReceiver#MAX_PACKET_SIZE 
to be initialized to 0.
   
   ```java
   java.io.IOException: Incorrect value for packet payload size: 1014776
       at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:167)
       at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:112)
       at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:187)
       at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:146)
       at 
org.apache.hadoop.hdfs.ByteArrayStrategy.readFromBlock(ReaderStrategy.java:118)
       at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:789)
       at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:855)
       at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:919)
       at java.base/java.io.DataInputStream.read(DataInputStream.java:158)
       at java.base/java.io.InputStream.transferTo(InputStream.java:796)
       at java.base/java.nio.file.Files.copy(Files.java:3151)
       at 
java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:216)
       at 
java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:212)
       at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:571)
   
   
       at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1319)
       at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1545)
       at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.<init>(PacketReceiver.java:82)
       at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.<init>(BlockReaderRemote.java:101)
 
   ```
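One conventional remedy for this class of problem, sketched here under hypothetical names (this is not necessarily the fix in this PR), is the lazy-holder idiom: the configuration read is deferred to first use, after any nested class initialization has finished, and the value can then live in a final field as the SpotBugs warning suggests.

```java
// Sketch of the lazy-holder idiom (hypothetical names, not the actual fix):
// Holder's <clinit> runs only on the first call to maxPacketSize(), so the
// value is computed after outer initialization has completed and can be final.
public final class PacketLimits {
    private PacketLimits() {}

    private static final class Holder {
        static final int MAX_PACKET_SIZE = readConfig();
    }

    static int readConfig() {
        return 16 * 1024 * 1024; // stands in for Configuration.getInt(...)
    }

    public static int maxPacketSize() {
        return Holder.MAX_PACKET_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(maxPacketSize()); // prints 16777216
    }
}
```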
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0
> -
>
> Key: HDFS-17630
> URL: https://issues.apache.org/jira/browse/HDFS-17630
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: dzcxzl
>Priority: Major
>
> Nested calls during class initialization cause PacketReceiver#MAX_PACKET_SIZE to be initialized to 0.
>  
> {code:java}
> java.io.IOException: Incorrect value for packet payload size: 1014776
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:167)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:112)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:187)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:146)
>     at 
> org.apache.hadoop.hdfs.ByteArrayStrategy.readFromBlock(ReaderStrategy.java:118)
>     at 
> org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:789)
>     at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:855)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:919)
>     at java.base/java.io.DataInputStream.read(DataInputStream.java:158)
>     at java.base/java.io.InputStream.transferTo(InputStream.java:796)
>     at java.base/java.nio.file.Files.copy(Files.java:3151)
>     at 
> java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:216)
>     at 
> java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:212)
>     at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:571)
>     at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1319)
>     at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1545)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.<init>(PacketReceiver.java:82)
>

[jira] [Updated] (HDFS-17630) Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0

2024-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17630:
--
Labels: pull-request-available  (was: )

> Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0
> -
>
> Key: HDFS-17630
> URL: https://issues.apache.org/jira/browse/HDFS-17630
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: dzcxzl
>Priority: Major
>  Labels: pull-request-available
>
> Nested calls during class initialization cause PacketReceiver#MAX_PACKET_SIZE to be initialized to 0.
>  
> {code:java}
> java.io.IOException: Incorrect value for packet payload size: 1014776
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:167)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:112)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:187)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:146)
>     at 
> org.apache.hadoop.hdfs.ByteArrayStrategy.readFromBlock(ReaderStrategy.java:118)
>     at 
> org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:789)
>     at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:855)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:919)
>     at java.base/java.io.DataInputStream.read(DataInputStream.java:158)
>     at java.base/java.io.InputStream.transferTo(InputStream.java:796)
>     at java.base/java.nio.file.Files.copy(Files.java:3151)
>     at 
> java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:216)
>     at 
> java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:212)
>     at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:571)
>     at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1319)
>     at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1545)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.<init>(PacketReceiver.java:82)
>     at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.<init>(BlockReaderRemote.java:101)
>  {code}






[jira] [Created] (HDFS-17630) Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0

2024-09-23 Thread dzcxzl (Jira)
dzcxzl created HDFS-17630:
-

 Summary: Avoid PacketReceiver#MAX_PACKET_SIZE Initialized to 0
 Key: HDFS-17630
 URL: https://issues.apache.org/jira/browse/HDFS-17630
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: dzcxzl


Nested calls during class initialization cause PacketReceiver#MAX_PACKET_SIZE to be initialized to 0.

 
{code:java}
java.io.IOException: Incorrect value for packet payload size: 1014776
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:167)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:112)
    at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:187)
    at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:146)
    at 
org.apache.hadoop.hdfs.ByteArrayStrategy.readFromBlock(ReaderStrategy.java:118)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:789)
    at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:855)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:919)
    at java.base/java.io.DataInputStream.read(DataInputStream.java:158)
    at java.base/java.io.InputStream.transferTo(InputStream.java:796)
    at java.base/java.nio.file.Files.copy(Files.java:3151)
    at java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:216)
    at java.base/sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:212)
    at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:571)


    at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1319)
    at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1545)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.<init>(PacketReceiver.java:82)
    at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.<init>(BlockReaderRemote.java:101)
 {code}






[jira] [Updated] (HDFS-16300) Use libcrypto in Windows for libhdfspp

2024-09-23 Thread hu xiaodong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong updated HDFS-16300:
---
Affects Version/s: (was: 3.4.0)

> Use libcrypto in Windows for libhdfspp
> --
>
> Key: HDFS-16300
> URL: https://issues.apache.org/jira/browse/HDFS-16300
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
> Environment: Windows
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Blocker
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
> Attachments: build-log-hdfs-nacl-windows-10.log
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Currently, eay32 is the library used by libhdfspp on Windows, whereas we use 
> libcrypto on all other platforms. As per the following mail thread, the 
> OpenSSL library was renamed from eay32 to libcrypto as of OpenSSL 1.1.0 - 
> https://mta.openssl.org/pipermail/openssl-dev/2016-August/008351.html.
> Thus, we should use libcrypto on Windows as well to standardize the OpenSSL 
> library version across platforms.






[jira] [Updated] (HDFS-16300) Use libcrypto in Windows for libhdfspp

2024-09-23 Thread hu xiaodong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong updated HDFS-16300:
---
Affects Version/s: 3.4.0

> Use libcrypto in Windows for libhdfspp
> --
>
> Key: HDFS-16300
> URL: https://issues.apache.org/jira/browse/HDFS-16300
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Blocker
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
> Attachments: build-log-hdfs-nacl-windows-10.log
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Currently, eay32 is the library used by libhdfspp on Windows, whereas we use 
> libcrypto on all other platforms. As per the following mail thread, the 
> OpenSSL library was renamed from eay32 to libcrypto as of OpenSSL 1.1.0 - 
> https://mta.openssl.org/pipermail/openssl-dev/2016-August/008351.html.
> Thus, we should use libcrypto on Windows as well to standardize the OpenSSL 
> library version across platforms.






[jira] [Resolved] (HDFS-17621) Make PathIsNotEmptyDirectoryException terse

2024-09-23 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-17621.
-
Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Make PathIsNotEmptyDirectoryException terse
> ---
>
> Key: HDFS-17621
> URL: https://issues.apache.org/jira/browse/HDFS-17621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: dzcxzl
>Assignee: dzcxzl
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>







[jira] [Commented] (HDFS-17621) Make PathIsNotEmptyDirectoryException terse

2024-09-23 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883739#comment-17883739
 ] 

Ayush Saxena commented on HDFS-17621:
-

Committed to trunk.
Thanx [~dzcxzl] for the contribution!!!

> Make PathIsNotEmptyDirectoryException terse
> ---
>
> Key: HDFS-17621
> URL: https://issues.apache.org/jira/browse/HDFS-17621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: dzcxzl
>Assignee: dzcxzl
>Priority: Minor
>  Labels: pull-request-available
>







[jira] [Commented] (HDFS-17621) Make PathIsNotEmptyDirectoryException terse

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883738#comment-17883738
 ] 

ASF GitHub Bot commented on HDFS-17621:
---

ayushtkn merged PR #7036:
URL: https://github.com/apache/hadoop/pull/7036




> Make PathIsNotEmptyDirectoryException terse
> ---
>
> Key: HDFS-17621
> URL: https://issues.apache.org/jira/browse/HDFS-17621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: dzcxzl
>Assignee: dzcxzl
>Priority: Minor
>  Labels: pull-request-available
>







[jira] [Assigned] (HDFS-17621) Make PathIsNotEmptyDirectoryException terse

2024-09-23 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-17621:
---

Assignee: dzcxzl

> Make PathIsNotEmptyDirectoryException terse
> ---
>
> Key: HDFS-17621
> URL: https://issues.apache.org/jira/browse/HDFS-17621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: dzcxzl
>Assignee: dzcxzl
>Priority: Minor
>  Labels: pull-request-available
>







[jira] [Resolved] (HDFS-17526) getMetadataInputStream should use getShareDeleteFileInputStream for windows

2024-09-22 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-17526.
-
Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> getMetadataInputStream should use getShareDeleteFileInputStream for windows
> ---
>
> Key: HDFS-17526
> URL: https://issues.apache.org/jira/browse/HDFS-17526
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> In HDFS-10636, the getDataInputStream method was changed to use 
> getShareDeleteFileInputStream on Windows, but getMetaDataInputStream was not. 
> As a result, the following error can occur when a DataNode tries to update 
> the genstamp on a block on Windows.
> DataNode Logs:
> {{Caused by: java.io.IOException: Failed to rename 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_1.meta
>  to 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_2.meta
>  due to failure in native rename. 32: The process cannot access the file 
> because it is being used by another process.}}






[jira] [Commented] (HDFS-17526) getMetadataInputStream should use getShareDeleteFileInputStream for windows

2024-09-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883709#comment-17883709
 ] 

Ayush Saxena commented on HDFS-17526:
-

Committed to trunk.
Thanx [~dannytbecker] for the contribution & [~elgoiri] for the review!!!

> getMetadataInputStream should use getShareDeleteFileInputStream for windows
> ---
>
> Key: HDFS-17526
> URL: https://issues.apache.org/jira/browse/HDFS-17526
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-10636, the getDataInputStream method was changed to use 
> getShareDeleteFileInputStream on Windows, but getMetaDataInputStream was not. 
> As a result, the following error can occur when a DataNode tries to update 
> the genstamp on a block on Windows.
> DataNode Logs:
> {{Caused by: java.io.IOException: Failed to rename 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_1.meta
>  to 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_2.meta
>  due to failure in native rename. 32: The process cannot access the file 
> because it is being used by another process.}}






[jira] [Commented] (HDFS-17526) getMetadataInputStream should use getShareDeleteFileInputStream for windows

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883708#comment-17883708
 ] 

ASF GitHub Bot commented on HDFS-17526:
---

ayushtkn merged PR #6826:
URL: https://github.com/apache/hadoop/pull/6826




> getMetadataInputStream should use getShareDeleteFileInputStream for windows
> ---
>
> Key: HDFS-17526
> URL: https://issues.apache.org/jira/browse/HDFS-17526
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: Danny Becker
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-10636, the getDataInputStream method was changed to use 
> getShareDeleteFileInputStream on Windows, but getMetaDataInputStream was not. 
> As a result, the following error can occur when a DataNode tries to update 
> the genstamp on a block on Windows.
> DataNode Logs:
> {{Caused by: java.io.IOException: Failed to rename 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_1.meta
>  to 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_2.meta
>  due to failure in native rename. 32: The process cannot access the file 
> because it is being used by another process.}}






[jira] [Assigned] (HDFS-17526) getMetadataInputStream should use getShareDeleteFileInputStream for windows

2024-09-22 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-17526:
---

Assignee: Danny Becker

> getMetadataInputStream should use getShareDeleteFileInputStream for windows
> ---
>
> Key: HDFS-17526
> URL: https://issues.apache.org/jira/browse/HDFS-17526
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-10636, the getDataInputStream method uses the 
> getShareDeleteFileInputStream for windows, but the getMetaDataInputStream 
> does not use this. The following error can happen when a DataNode is trying 
> to update the genstamp on a block in Windows.
> DataNode Logs:
> {{Caused by: java.io.IOException: Failed to rename 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_1.meta
>  to 
> G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_2.meta
>  due to failure in native rename. 32: The process cannot access the file 
> because it is being used by another process.}}






[jira] [Commented] (HDFS-17611) Move all DistCp execution logic to execute()

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883664#comment-17883664
 ] 

ASF GitHub Bot commented on HDFS-17611:
---

haiyang1987 commented on PR #7025:
URL: https://github.com/apache/hadoop/pull/7025#issuecomment-2367143262

   Hi @kokon191 this PR should be moved to hadoop-common. thanks~




> Move all DistCp execution logic to execute()
> 
>
> Key: HDFS-17611
> URL: https://issues.apache.org/jira/browse/HDFS-17611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Minor
>  Labels: pull-request-available
>
> Many code flows create a DistCp instance and call the public method execute() 
> to get the Job object for better control over the distcp job, but some logic 
> runs only from the run() method. That logic should be moved into execute().






[jira] [Commented] (HDFS-17611) Move all DistCp execution logic to execute()

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883662#comment-17883662
 ] 

ASF GitHub Bot commented on HDFS-17611:
---

hadoop-yetus commented on PR #7059:
URL: https://github.com/apache/hadoop/pull/7059#issuecomment-2367142725

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  
https://github.com/apache/hadoop/pull/7059 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/7059 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7059/1/console |
   | versions | git=2.34.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Move all DistCp execution logic to execute()
> 
>
> Key: HDFS-17611
> URL: https://issues.apache.org/jira/browse/HDFS-17611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Minor
>  Labels: pull-request-available
>
> Many code flows create a DistCp instance and call the public method execute() 
> to get the Job object for better control over the distcp job, but some logic 
> runs only from the run() method. That logic should be moved into execute().






[jira] [Commented] (HDFS-17611) Move all DistCp execution logic to execute()

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883659#comment-17883659
 ] 

ASF GitHub Bot commented on HDFS-17611:
---

haiyang1987 merged PR #7059:
URL: https://github.com/apache/hadoop/pull/7059




> Move all DistCp execution logic to execute()
> 
>
> Key: HDFS-17611
> URL: https://issues.apache.org/jira/browse/HDFS-17611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Minor
>  Labels: pull-request-available
>
> Many code flows create a DistCp instance and call the public method execute() 
> to get the Job object for better control over the distcp job, but some logic 
> runs only from the run() method. That logic should be moved into execute().






[jira] [Commented] (HDFS-17611) Move all DistCp execution logic to execute()

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883658#comment-17883658
 ] 

ASF GitHub Bot commented on HDFS-17611:
---

haiyang1987 opened a new pull request, #7059:
URL: https://github.com/apache/hadoop/pull/7059

   Reverts apache/hadoop#7025




> Move all DistCp execution logic to execute()
> 
>
> Key: HDFS-17611
> URL: https://issues.apache.org/jira/browse/HDFS-17611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Minor
>  Labels: pull-request-available
>
> Many code flows create a DistCp instance and call the public method execute() 
> to get the Job object for better control over the distcp job, but some logic 
> is only executed by the run() method. These lines should be moved over to 
> execute().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17611) Move all DistCp execution logic to execute()

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883656#comment-17883656
 ] 

ASF GitHub Bot commented on HDFS-17611:
---

haiyang1987 commented on PR #7025:
URL: https://github.com/apache/hadoop/pull/7025#issuecomment-2367137542

   Committed to trunk. Thanks @kokon191 for your work, and @steveloughran 
@ferhui for your reviews.




> Move all DistCp execution logic to execute()
> 
>
> Key: HDFS-17611
> URL: https://issues.apache.org/jira/browse/HDFS-17611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Minor
>  Labels: pull-request-available
>
> Many code flows create a DistCp instance and call the public method execute() 
> to get the Job object for better control over the distcp job, but some logic 
> is only executed by the run() method. These lines should be moved over to 
> execute().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17611) Move all DistCp execution logic to execute()

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883655#comment-17883655
 ] 

ASF GitHub Bot commented on HDFS-17611:
---

haiyang1987 merged PR #7025:
URL: https://github.com/apache/hadoop/pull/7025




> Move all DistCp execution logic to execute()
> 
>
> Key: HDFS-17611
> URL: https://issues.apache.org/jira/browse/HDFS-17611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Minor
>  Labels: pull-request-available
>
> Many code flows create a DistCp instance and call the public method execute() 
> to get the Job object for better control over the distcp job, but some logic 
> is only executed by the run() method. These lines should be moved over to 
> execute().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883654#comment-17883654
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

tomscut commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1770720963


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   > HI, IMO, "if (LOG.isDebugEnabled()) {...}" is better.
   
   Thanks for your comment. I agree with @ayushtkn and @virajjasani here. 
LOG.debug already does isDebugEnabled() internally.
   https://github.com/user-attachments/assets/ed4f9b82-8dbd-4a89-80b5-1263f982c782
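The point being made, that SLF4J's parameterized LOG.debug checks the level internally so the arguments are only rendered when debug is enabled, can be sketched with a tiny stand-in logger (hypothetical classes, not SLF4J itself):

```java
// Tiny stand-in logger (hypothetical, not SLF4J) showing why the
// parameterized form is already guarded: arguments are only stringified
// after the internal enabled check passes.
class TinyLogger {
  boolean debugEnabled;
  int renderedArgs; // how many arguments were actually stringified

  void debug(String format, Object... args) {
    if (!debugEnabled) {
      return; // internal guard: toString() of args is never invoked
    }
    for (Object a : args) {
      renderedArgs++;
      format = format.replaceFirst("\\{\\}", String.valueOf(a));
    }
    System.out.println(format);
  }
}
```

The call-site arguments are still evaluated either way; what the {} form avoids is the eager String concatenation (and hence toString() of heavy objects) that the original `this + " ..."` code paid even when debug was off.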





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log without a 
> LOG.isDebugEnabled() guard, so the read lock is obtained even when debug 
> logging is not enabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883653#comment-17883653
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

tomscut commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1770720963


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   > HI, IMO, "if (LOG.isDebugEnabled()) {...}" is better.
   
   Thanks for your comment. I agree with @ayushtkn and @virajjasani here. 
LOG.debug already does isDebugEnabled() internally.





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log without a 
> LOG.isDebugEnabled() guard, so the read lock is obtained even when debug 
> logging is not enabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17629) The IP address is incorrectly displayed in the IPv6 environment.

2024-09-22 Thread zeekling (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zeekling updated HDFS-17629:

Description: 
 

!image-2024-09-23-09-22-28-495.png!

 

The root cause is in function open_hostip_list in histogram-hostip.js:
{code:java}
if (index > x0 && index <= x1) {
      ips.push(dn.infoAddr.split(":")[0]);
} {code}
It needs to be changed to:
{code:java}
if (index > x0 && index <= x1) {
   var idx = dn.infoAddr.lastIndexOf(":");
   var dnIp = dn.infoAddr.substring(0, idx);
   ips.push(dnIp);
}{code}
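For illustration in Java (a hedged, self-contained sketch of the same host:port pitfall, not HDFS code): splitting on the first colon truncates IPv6 literals, while taking everything before the last colon preserves them.

```java
// Self-contained sketch of the parsing bug: for "host:port" strings,
// split(":")[0] only works for IPv4 addresses and hostnames; an IPv6
// literal such as "2001:db8::1:9864" is cut at its first colon. Taking
// the substring before the LAST colon keeps the full address.
class InfoAddrParser {
  static String hostBySplit(String infoAddr) {
    return infoAddr.split(":")[0];       // buggy for IPv6
  }

  static String hostByLastColon(String infoAddr) {
    int idx = infoAddr.lastIndexOf(':'); // the fix proposed above
    return infoAddr.substring(0, idx);
  }
}
```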
 

 

  was:
 

!image-2024-09-23-09-22-28-495.png!

 

function open_hostip_list in histogram-hostip.js

 

 

 


> The IP address is incorrectly displayed in the IPv6 environment.
> 
>
> Key: HDFS-17629
> URL: https://issues.apache.org/jira/browse/HDFS-17629
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zeekling
>Priority: Major
> Attachments: image-2024-09-23-09-22-28-495.png
>
>
>  
> !image-2024-09-23-09-22-28-495.png!
>  
> The root cause is in function open_hostip_list in histogram-hostip.js:
> {code:java}
> if (index > x0 && index <= x1) {
>       ips.push(dn.infoAddr.split(":")[0]);
> } {code}
> It needs to be changed to:
> {code:java}
> if (index > x0 && index <= x1) {
>    var idx = dn.infoAddr.lastIndexOf(":");
>    var dnIp = dn.infoAddr.substring(0, idx);
>    ips.push(dnIp);
> }{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17629) The IP address is incorrectly displayed in the IPv6 environment.

2024-09-22 Thread zeekling (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zeekling updated HDFS-17629:

Description: 
 

!image-2024-09-23-09-22-28-495.png!

 

function open_hostip_list in histogram-hostip.js

 

 

 

  was:
 

!image-2024-09-23-09-22-28-495.png!


> The IP address is incorrectly displayed in the IPv6 environment.
> 
>
> Key: HDFS-17629
> URL: https://issues.apache.org/jira/browse/HDFS-17629
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zeekling
>Priority: Major
> Attachments: image-2024-09-23-09-22-28-495.png
>
>
>  
> !image-2024-09-23-09-22-28-495.png!
>  
> function open_hostip_list in histogram-hostip.js
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17629) The IP address is incorrectly displayed in the IPv6 environment.

2024-09-22 Thread zeekling (Jira)
zeekling created HDFS-17629:
---

 Summary: The IP address is incorrectly displayed in the IPv6 
environment.
 Key: HDFS-17629
 URL: https://issues.apache.org/jira/browse/HDFS-17629
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: zeekling
 Attachments: image-2024-09-23-09-22-28-495.png

 

!image-2024-09-23-09-22-28-495.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17526) getMetadataInputStream should use getShareDeleteFileInputStream for windows

2024-09-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883476#comment-17883476
 ] 

ASF GitHub Bot commented on HDFS-17526:
---

hadoop-yetus commented on PR #6826:
URL: https://github.com/apache/hadoop/pull/6826#issuecomment-2365178894

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  trunk passed  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 231m 37s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 401m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6826/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f62e5f56f761 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5f042ded4b7bb9ce826cd9e8b65c08a16e79aea9 |
   | Default Java | Red Hat, Inc.-1.8.0_412-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6826/2/testReport/ |
   | Max. process+thread count | 3701 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6826/2/console |
   | versions | git=2.9.5 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> getMetadataInputStream should use getShareDeleteFileInputStream for windows
> ---
>
> Key: HDFS-17526
> URL: https://issues.apache.org/jira/browse/HDFS-17526
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: Danny Becker
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-10636, the getDataInputStream method uses the 
> getShareDeleteFileInputStream for windows, but the getMetaDataInputStream 
> does not use this. The following error can happen when a DataNode is trying 
> to update the genstamp on a block in Windows.
> DataNode Logs:
> {{Caused by: java.io.IOException: Failed to rename 
> G:\data\hdfs\data\current\

[jira] [Commented] (HDFS-17254) DataNode httpServer has too many worker threads

2024-09-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883462#comment-17883462
 ] 

ASF GitHub Bot commented on HDFS-17254:
---

ayushtkn commented on code in PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#discussion_r1769487525


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java:
##
@@ -144,7 +147,16 @@ public DatanodeHttpServer(final Configuration conf,
 confForCreate.set(FsPermission.UMASK_LABEL, "000");
 
 this.bossGroup = new NioEventLoopGroup();
-this.workerGroup = new NioEventLoopGroup();
+int workerCount = conf.getInt(DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY,
+DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT);
+if (workerCount < 0) {
+  LOG.warn("The value of " +
+  DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY + " is less than 0, will 
use default value: " +
+  DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT);
+  workerCount = DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT;

Review Comment:
   Use logger format {} instead of concat





> DataNode httpServer has too many worker threads
> ---
>
> Key: HDFS-17254
> URL: https://issues.apache.org/jira/browse/HDFS-17254
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Liangjun He
>Assignee: Liangjun He
>Priority: Minor
>  Labels: pull-request-available
>
> When optimizing the thread count of high-density storage DNs, we found that 
> the number of worker threads for the DataNode httpServer is twice the number 
> of available cores on the node, resulting in too many threads. We can change 
> this to be configurable.
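A minimal sketch of the configurable-worker-count pattern the patch above adds; the config key and default here are hypothetical stand-ins, not the real HDFS constants. It reads the setting and falls back to the default, with a warning, when the value is negative:

```java
import java.util.Map;

// Hedged sketch of the validation pattern in the patch above; the key name
// and default are stand-ins, not the real HDFS config constants.
// A value of 0 lets Netty pick its own default (typically 2 * cores).
class WorkerConfig {
  static final int DEFAULT_WORKERS = 0;

  static int workerCount(Map<String, String> conf) {
    int count = Integer.parseInt(
        conf.getOrDefault("netty.worker.threads", String.valueOf(DEFAULT_WORKERS)));
    if (count < 0) {
      System.err.println("The value of netty.worker.threads is less than 0, "
          + "will use default value: " + DEFAULT_WORKERS);
      count = DEFAULT_WORKERS;
    }
    return count;
  }
}
```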



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17565) EC: dfs.datanode.ec.reconstruction.threads should support dynamic reconfigured.

2024-09-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883450#comment-17883450
 ] 

ASF GitHub Bot commented on HDFS-17565:
---

ayushtkn commented on code in PR #6928:
URL: https://github.com/apache/hadoop/pull/6928#discussion_r1769491905


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java:
##
@@ -172,4 +175,19 @@ public void shutDown() {
   public float getXmitWeight() {
 return xmitWeight;
   }
+
+  public void setStripedReconstructionPoolSize(int size) {
+Preconditions.checkArgument(size > 0,
+DFS_DN_EC_RECONSTRUCTION_THREADS_KEY + " should be greater than 0");
+this.stripedReconstructionPool.setCorePoolSize(size);
+this.stripedReconstructionPool.setMaximumPoolSize(size);
+  }
+
+  @VisibleForTesting
+  public int getStripedReconstructionPoolSize() {
+int poolSize = this.stripedReconstructionPool.getCorePoolSize();
+Preconditions.checkArgument(poolSize == 
this.stripedReconstructionPool.getMaximumPoolSize(),
+"The maximum pool size should be equal to core pool size");

Review Comment:
   I think this isn't required, this looks like validating the 
``ThreadPoolExecutor``, that ain't a hadoop thing & isn't required here in the 
scope either
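For context, a hedged, self-contained sketch of dynamically resizing a fixed-size ThreadPoolExecutor, as setStripedReconstructionPoolSize above does (the class name here is a stand-in). The ordering keeps the core size from ever exceeding the maximum:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hedged sketch (not the HDFS patch itself) of resizing a fixed-size pool
// at runtime. When growing, raise the maximum before the core; when
// shrinking, lower the core first, so core <= max holds throughout.
class ResizableFixedPool {
  final ThreadPoolExecutor pool =
      new ThreadPoolExecutor(2, 2, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

  void resize(int size) {
    if (size <= 0) {
      throw new IllegalArgumentException("pool size should be greater than 0");
    }
    if (size > pool.getMaximumPoolSize()) {
      pool.setMaximumPoolSize(size);
      pool.setCorePoolSize(size);
    } else {
      pool.setCorePoolSize(size);
      pool.setMaximumPoolSize(size);
    }
  }
}
```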





> EC: dfs.datanode.ec.reconstruction.threads should support dynamic 
> reconfigured.
> ---
>
> Key: HDFS-17565
> URL: https://issues.apache.org/jira/browse/HDFS-17565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> dfs.datanode.ec.reconstruction.threads should support dynamic reconfigured, 
> then we can adjust the speed of ec block copy. Especially HDFS-17550 wanna 
> decommissioning DataNode by EC block reconstruction.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17254) DataNode httpServer has too many worker threads

2024-09-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883449#comment-17883449
 ] 

ASF GitHub Bot commented on HDFS-17254:
---

ayushtkn commented on code in PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#discussion_r1769487525


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java:
##
@@ -144,7 +147,16 @@ public DatanodeHttpServer(final Configuration conf,
 confForCreate.set(FsPermission.UMASK_LABEL, "000");
 
 this.bossGroup = new NioEventLoopGroup();
-this.workerGroup = new NioEventLoopGroup();
+int workerCount = conf.getInt(DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY,
+DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT);
+if (workerCount < 0) {
+  LOG.warn("The value of " +
+  DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY + " is less than 0, will 
use default value: " +
+  DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT);
+  workerCount = DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT;

Review Comment:
   Use logger format {} instead of concat





> DataNode httpServer has too many worker threads
> ---
>
> Key: HDFS-17254
> URL: https://issues.apache.org/jira/browse/HDFS-17254
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Liangjun He
>Assignee: Liangjun He
>Priority: Minor
>  Labels: pull-request-available
>
> When optimizing the thread count of high-density storage DNs, we found that 
> the number of worker threads for the DataNode httpServer is twice the number 
> of available cores on the node, resulting in too many threads. We can change 
> this to be configurable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883384#comment-17883384
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

virajjasani commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1769078801


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   LOG.debug already does isDebugEnabled() internally; the original issue is 
String concatenation with a heavy object.
   
   This change is sufficient IMO.





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log without a 
> LOG.isDebugEnabled() guard, so the read lock is obtained even when debug 
> logging is not enabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17569) Setup Effective Work Number when Generating Block Reconstruction Work

2024-09-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883372#comment-17883372
 ] 

ASF GitHub Bot commented on HDFS-17569:
---

hadoop-yetus commented on PR #6924:
URL: https://github.com/apache/hadoop/pull/6924#issuecomment-2364311859

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 31s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6924/16/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 18 new + 274 unchanged 
- 0 fixed = 292 total (was 274)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 197m 13s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  asflicense  |   0m 32s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6924/16/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 290m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6924/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6924 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0ee5e8423f4c 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 749a810dcd7c3f48cd02342656bc7fbc4b47dfa7 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6924/16/testReport/ |
   | Max. process+thread count | 4900 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/ha

[jira] [Commented] (HDFS-17610) Upgrade Bootstrap version used in HDFS UI to fix CVE

2024-09-20 Thread PJ Fanning (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883198#comment-17883198
 ] 

PJ Fanning commented on HDFS-17610:
---

Apache Hadoop is a volunteer project. [~palsai] would you have time to upgrade 
this yourself and submit a PR on GitHub?

> Upgrade Bootstrap version used in HDFS UI to fix CVE
> 
>
> Key: HDFS-17610
> URL: https://issues.apache.org/jira/browse/HDFS-17610
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Palakur Eshwitha Sai
>Priority: Major
>
> The current versions of bootstrap has multiple medium severity CVEs reported 
> till date and needs to be updated to the latest versions with no reported 
> CVEs.
> [CVE-2024-6484|https://nvd.nist.gov/vuln/detail/CVE-2024-6484]
> [CVE-2024-6531|https://nvd.nist.gov/vuln/detail/CVE-2024-6531]
> [CVE-2024-6485|https://nvd.nist.gov/vuln/detail/CVE-2024-6485]
> For [CVE-2024-6484|https://nvd.nist.gov/vuln/detail/CVE-2024-6484], our tool 
> states that:
> This vulnerability affects all 4.x and 5.x versions and not only those 
> leading up to and including {{{}3.4.1{}}}.
> For [CVE-2024-6531|https://nvd.nist.gov/vuln/detail/CVE-2024-6531], our tool 
> states that:
> This vulnerability was introduced in version {{{}2.0.0{}}}, not {{4.0.0}} as 
> the advisory states. Additionally, this vulnerability affects all 5.x 
> versions and not only the versions leading up to and including {{{}4.6.2{}}}.
> Therefore, alternative upgrade path needs to be found for the same as there 
> is no non-vulnerable upgrade version for this component/package at the moment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] (HDFS-15759) EC: Verify EC reconstruction correctness on DataNode

2024-09-19 Thread ruiliang (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15759 ]


ruiliang deleted comment on HDFS-15759:
-

was (Author: ruilaing):
[~weichiu]

Hello, our current production data also has this kind of EC storage data 
corruption problem; the problem is described at
[https://github.com/apache/orc/issues/1939]
I was wondering, if I cherry-pick your current code (GitHub pull request #2869),
can I skip the patches related to HDFS-14768, HDFS-15186, and HDFS-15240?

The current version of HDFS is 3.1.0.
Thank you!

> EC: Verify EC reconstruction correctness on DataNode
> 
>
> Key: HDFS-15759
> URL: https://issues.apache.org/jira/browse/HDFS-15759
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, ec, erasure-coding
>Affects Versions: 3.4.0
>Reporter: Toshihiko Uchida
>Assignee: Toshihiko Uchida
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> EC reconstruction on DataNode has caused data corruption: HDFS-14768, 
> HDFS-15186 and HDFS-15240. Those issues occur under specific conditions and 
> the corruption is neither detected nor auto-healed by HDFS. It is obviously 
> hard for users to monitor data integrity by themselves, and even if they find 
> corrupted data, it is difficult or sometimes impossible to recover them.
> To prevent further data corruption issues, this feature proposes a simple and 
> effective way to verify EC reconstruction correctness on DataNode at each 
> reconstruction process.
> It verifies correctness of outputs decoded from inputs as follows:
> 1. Decode one input from the outputs;
> 2. Compare the decoded input with the original input.
> For instance, in RS-6-3, assume that outputs [d1, p1] are decoded from inputs 
> [d0, d2, d3, d4, d5, p0]. Then the verification is done by decoding d0 from 
> [d1, d2, d3, d4, d5, p1], and comparing the original and decoded data of d0.
> When an EC reconstruction task goes wrong, the comparison will fail with high 
> probability.
> Then the task will also fail and be retried by NameNode.
> The next reconstruction will succeed if the condition that triggered the 
> failure is gone.
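The decode-and-compare verification described above can be sketched with a toy single-parity (XOR) code in place of RS-6-3; the class and method names below are illustrative only, not the actual DataNode reconstruction code, but the verification step (re-decode a different input using the freshly rebuilt output and compare it with the original) is the same idea:

```java
import java.util.Arrays;

/**
 * Toy illustration of verifying an EC reconstruction by decoding one
 * original input back from the outputs and comparing it. Uses a single
 * XOR parity instead of RS-6-3; for XOR, decode and encode are the
 * same operation. All names here are illustrative.
 */
public class EcVerifySketch {

  /** XOR all given blocks together. */
  static byte[] xor(byte[]... blocks) {
    byte[] out = new byte[blocks[0].length];
    for (byte[] b : blocks) {
      for (int i = 0; i < out.length; i++) {
        out[i] ^= b[i];
      }
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] d0 = {1, 2, 3}, d1 = {4, 5, 6}, d2 = {7, 8, 9};
    byte[] p0 = xor(d0, d1, d2);           // parity over all data blocks

    // Reconstruction: d1 was lost; decode it from the survivors.
    byte[] d1Rebuilt = xor(d0, d2, p0);

    // Verification: decode a *different* input (d0) using the freshly
    // rebuilt output, then compare with the original d0.
    byte[] d0Check = xor(d1Rebuilt, d2, p0);
    if (!Arrays.equals(d0, d0Check)) {
      throw new IllegalStateException("EC reconstruction verification failed");
    }
    System.out.println("verified");
  }
}
```

If the reconstruction produced a wrong d1, the re-decoded d0 would almost certainly differ from the original, so the task fails and is retried by the NameNode, as described above.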






[jira] [Commented] (HDFS-17610) Upgrade Bootstrap version used in HDFS UI to fix CVE

2024-09-19 Thread Palakur Eshwitha Sai (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883150#comment-17883150
 ] 

Palakur Eshwitha Sai commented on HDFS-17610:
-

cc: [~brahmareddy], [~hemanthboyina] 







[jira] [Commented] (HDFS-17610) Upgrade Bootstrap version used in HDFS UI to fix CVE

2024-09-19 Thread Palakur Eshwitha Sai (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883149#comment-17883149
 ] 

Palakur Eshwitha Sai commented on HDFS-17610:
-

[~aajisaka], [~ayushsaxena], [~fanningpj], [~slfan1989]

Do you have any thoughts on this? 







[jira] [Created] (HDFS-17628) hdfs ec datanode Decommissioning Stuck and some blk Decommissioning to many nodes indefinitely

2024-09-19 Thread ruiliang (Jira)
ruiliang created HDFS-17628:
---

 Summary: hdfs ec datanode Decommissioning Stuck and some blk 
Decommissioning to many nodes indefinitely
 Key: HDFS-17628
 URL: https://issues.apache.org/jira/browse/HDFS-17628
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ec, hdfs
Affects Versions: 3.1.1
Reporter: ruiliang
 Attachments: image-2024-09-20-10-47-10-051.png

When datanode decommissioning reaches the last few blocks, it gets stuck, and 
the log shows the same block being transferred to node after node 
indefinitely. The md5 values of the physical block on each node are 
consistent, indicating that block replication is indeed being performed. Has 
this issue been fixed? Is there a patch available to fix it? Thank you

!image-2024-09-20-10-47-10-051.png!

 

log

 
{code:java}
xx-dn-12-67-49.hiido.host.xx.xx.com is DECOMMISSIONING 
grep  9223372036628464382_15347979  xxx-hdfs-datanode.log
2024-09-20 10:13:32,097 INFO  datanode.DataNode 
(DataNode.java:transferBlock(2328)) - DatanodeRegistration(10.12.67.49:1019, 
datanodeUuid=e73eb2ed-634b-40bd-a110-21ce485b329c, infoPort=1022, 
infoSecurePort=0, ipcPort=38010, 
storageInfo=lv=-57;cid=CID-1becf536-8c05-40cb-a1ff-106923139c5c;nsid=848315649;c=1660893388633)
 Starting thread to transfer 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 to 
10.12.65.86:1019 
2024-09-20 10:13:32,264 INFO  datanode.DataNode (DataNode.java:run(2541)) - 
DataTransfer, at xx-dn-12-67-49.hiido.host.xx.xx.com:1019: Transmitted 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 
(numBytes=83886080) to /10.12.65.86:1019
2024-09-20 10:13:35,096 INFO  datanode.DataNode 
(DataNode.java:transferBlock(2328)) - DatanodeRegistration(10.12.67.49:1019, 
datanodeUuid=e73eb2ed-634b-40bd-a110-21ce485b329c, infoPort=1022, 
infoSecurePort=0, ipcPort=38010, 
storageInfo=lv=-57;cid=CID-1becf536-8c05-40cb-a1ff-106923139c5c;nsid=848315649;c=1660893388633)
 Starting thread to transfer 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 to 
10.12.66.30:1019 
2024-09-20 10:13:35,519 INFO  datanode.DataNode (DataNode.java:run(2541)) - 
DataTransfer, at xx-dn-12-67-49.hiido.host.xx.xx.com:1019: Transmitted 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 
(numBytes=83886080) to /10.12.66.30:1019
2024-09-20 10:13:38,096 INFO  datanode.DataNode 
(DataNode.java:transferBlock(2328)) - DatanodeRegistration(10.12.67.49:1019, 
datanodeUuid=e73eb2ed-634b-40bd-a110-21ce485b329c, infoPort=1022, 
infoSecurePort=0, ipcPort=38010, 
storageInfo=lv=-57;cid=CID-1becf536-8c05-40cb-a1ff-106923139c5c;nsid=848315649;c=1660893388633)
 Starting thread to transfer 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 to 
10.12.78.39:1019 
2024-09-20 10:13:38,510 INFO  datanode.DataNode (DataNode.java:run(2541)) - 
DataTransfer, at xx-dn-12-67-49.hiido.host.xx.xx.com:1019: Transmitted 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 
(numBytes=83886080) to /10.12.78.39:1019
2024-09-20 10:13:44,095 INFO  datanode.DataNode 
(DataNode.java:transferBlock(2328)) - DatanodeRegistration(10.12.67.49:1019, 
datanodeUuid=e73eb2ed-634b-40bd-a110-21ce485b329c, infoPort=1022, 
infoSecurePort=0, ipcPort=38010, 
storageInfo=lv=-57;cid=CID-1becf536-8c05-40cb-a1ff-106923139c5c;nsid=848315649;c=1660893388633)
 Starting thread to transfer 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 to 
10.12.66.85:1019 
2024-09-20 10:13:44,599 INFO  datanode.DataNode (DataNode.java:run(2541)) - 
DataTransfer, at xx-dn-12-67-49.hiido.host.xx.xx.com:1019: Transmitted 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 
(numBytes=83886080) to /10.12.66.85:1019
2024-09-20 10:13:50,097 INFO  datanode.DataNode 
(DataNode.java:transferBlock(2328)) - DatanodeRegistration(10.12.67.49:1019, 
datanodeUuid=e73eb2ed-634b-40bd-a110-21ce485b329c, infoPort=1022, 
infoSecurePort=0, ipcPort=38010, 
storageInfo=lv=-57;cid=CID-1becf536-8c05-40cb-a1ff-106923139c5c;nsid=848315649;c=1660893388633)
 Starting thread to transfer 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 to 
10.12.67.42:1019 
2024-09-20 10:13:50,514 INFO  datanode.DataNode (DataNode.java:run(2541)) - 
DataTransfer, at xx-dn-12-67-49.hiido.host.xx.xx.com:1019: Transmitted 
BP-1822992414-10.12.65.48-1660893388633:blk_-9223372036628464382_15347979 
(numBytes=83886080) to /10.12.67.42:1019
2024-09-20 10:13:53,095 INFO  datanode.DataNode 
(DataNode.java:transferBlock(2328)) - DatanodeRegistration(10.12.67.49:1019, 
datanodeUuid=e73eb2ed-634b-40bd-a110-21ce485b329c, infoPort=1022, 
infoSecurePort=0, ipcPort=38010, 
storageInfo=lv=-57;cid=CID-1becf536-8c05-40cb-a1ff-106923139c5c;nsid=848315649;c=1660893388633)
 Starting thread to transfer 
BP-1822992414-10.12.65.48-1660893388633:blk_

[jira] [Commented] (HDFS-17603) [FGL] Abstract a LockManager to manage locks

2024-09-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882957#comment-17882957
 ] 

ASF GitHub Bot commented on HDFS-17603:
---

hadoop-yetus commented on PR #7054:
URL: https://github.com/apache/hadoop/pull/7054#issuecomment-2360539733

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  
https://github.com/apache/hadoop/pull/7054 does not apply to HDFS-17384. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/7054 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/1/console |
   | versions | git=2.34.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> [FGL] Abstract a LockManager to manage locks 
> -
>
> Key: HDFS-17603
> URL: https://issues.apache.org/jira/browse/HDFS-17603
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: Felix N
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-08-14-17-17-28-792.png
>
>
> Abstract a LockManager to manage locks. 
> Some requirements for this LockManager:
>  * Cache a fixed number of lock instances, e.g. 1000
>  * Assign a lock instance to a key and keep this mapping until the key 
> releases this instance
>  * This LockManager needs high performance, e.g. 10 million (1000w) QPS
>  
> Some implementations that we can refer to:
>  * alluxio.collections.LockPool in Alluxio
>  * Implementation in MEITUAN 
> !image-2024-08-14-17-17-28-792.png|width=205,height=196!






[jira] [Commented] (HDFS-17603) [FGL] Abstract a LockManager to manage locks

2024-09-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882966#comment-17882966
 ] 

ASF GitHub Bot commented on HDFS-17603:
---

hadoop-yetus commented on PR #7054:
URL: https://github.com/apache/hadoop/pull/7054#issuecomment-2360628592

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HDFS-17384 Compile Tests _ |
   | -1 :x: |  mvninstall  |   5m 48s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/branch-mvninstall-root.txt)
 |  root in HDFS-17384 failed.  |
   | -1 :x: |  compile  |   1m 37s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  compile  |   0m 43s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK Private 
Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  HDFS-17384 passed  |
   | -1 :x: |  mvnsite  |   0m 49s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in HDFS-17384 failed.  |
   | -1 :x: |  javadoc  |   0m 44s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  javadoc  |   0m 35s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK Private 
Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | -1 :x: |  spotbugs  |   0m 45s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in HDFS-17384 failed.  |
   | -1 :x: |  shadedclient  |   5m  0s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 24s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  javac  |   0m 24s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7054/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  compile  |   0m 24s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org

[jira] [Commented] (HDFS-17603) [FGL] Abstract a LockManager to manage locks

2024-09-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882958#comment-17882958
 ] 

ASF GitHub Bot commented on HDFS-17603:
---

kokon191 commented on PR #7054:
URL: https://github.com/apache/hadoop/pull/7054#issuecomment-2360542315

   @ZanderXu Can you take a look when you're free? Thanks!










[jira] [Updated] (HDFS-17603) [FGL] Abstract a LockManager to manage locks

2024-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17603:
--
Labels: pull-request-available  (was: )







[jira] [Commented] (HDFS-17603) [FGL] Abstract a LockManager to manage locks

2024-09-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882955#comment-17882955
 ] 

ASF GitHub Bot commented on HDFS-17603:
---

kokon191 opened a new pull request, #7054:
URL: https://github.com/apache/hadoop/pull/7054

   A `LockPoolManager` that stores locks with a ref count. The ref count 
increments upon lock acquisition and decrements upon lock release; a lock is 
ejected when its ref count hits 0. Cached locks get an extra ref count and 
won't be ejected by lock releases. The set of cached locks is updated 
periodically by a `PromotionService`, driven by the `PromotionMonitor`, which 
tracks which locks are the most active.
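The acquire/release ref-counting scheme described above can be sketched as follows; `LockPoolSketch` and its method names are illustrative and not the actual PR #7054 code, and the `PromotionService`/`PromotionMonitor` caching machinery is omitted. The sketch relies on `ConcurrentHashMap.compute`, which runs atomically per key, to keep the count consistent:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Minimal sketch of a ref-counted lock pool: the ref count is
 * incremented on acquisition and decremented on release, and the
 * entry is ejected from the pool when the count hits zero.
 * Cached-lock promotion is intentionally omitted.
 */
public class LockPoolSketch<K> {

  private static final class Entry {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    int refCount; // mutated only inside compute(), i.e. under the map's per-key lock
  }

  private final ConcurrentHashMap<K, Entry> pool = new ConcurrentHashMap<>();

  /** Get (or create) the lock for a key and bump its ref count. */
  public ReentrantReadWriteLock acquire(K key) {
    Entry e = pool.compute(key, (k, cur) -> {
      Entry entry = (cur == null) ? new Entry() : cur;
      entry.refCount++;
      return entry;
    });
    return e.lock;
  }

  /** Drop one reference; eject the entry when no holder remains. */
  public void release(K key) {
    pool.compute(key, (k, cur) -> {
      if (cur == null) {
        throw new IllegalStateException("release without acquire: " + k);
      }
      return (--cur.refCount == 0) ? null : cur; // returning null removes the entry
    });
  }

  /** Number of keys currently holding a pooled lock. */
  public int size() {
    return pool.size();
  }
}
```

A caller would pair the two operations: `acquire(key)`, lock and unlock the returned lock, then `release(key)` in a `finally` block, so the ref count always returns to its pre-acquisition value.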










[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882934#comment-17882934
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

hadoop-yetus commented on PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#issuecomment-2360299403

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 265m 38s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 407m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7053/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7053 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0f87bfa7b622 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fc9d069d22cf1d0ef94e0f6f70841e84ab20c4de |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7053/2/testReport/ |
   | Max. process+thread count | 3146 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7053/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.a

[jira] [Commented] (HDFS-17627) Performance optimization on BlockUnderConstructionFeature

2024-09-19 Thread Jian Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882941#comment-17882941
 ] 

Jian Zhang commented on HDFS-17627:
---

[~hnzhu] Can you describe the scenario or environment and the number of 
replicas under which the {{getStaleReplicas}} operation encounters performance 
bottlenecks?

> Performance optimization on BlockUnderConstructionFeature
> -
>
> Key: HDFS-17627
> URL: https://issues.apache.org/jira/browse/HDFS-17627
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: server
>Affects Versions: 3.3.0
>Reporter: Hao-Nan Zhu
>Priority: Minor
>
> Hi, I’ve encountered performance bottlenecks in 
> _blockmanagement.BlockUnderConstructionFeature_ and I wonder if there's a 
> chance for optimization.
>  
> [_getStaleReplica()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L219]
>  may cause performance degradation when the list of replicas is large. The 
> method uses an *ArrayList* to collect stale replicas, which could cause 
> memory re-allocations and potential OOM errors when the number of stale 
> replicas increases. Furthermore, 
> [_getStaleReplica()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L219]
>  could also cause lock contention at some code paths like:  
> [_updatePipelineInternal()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6054]
>  (holding global lock) -> 
> [_updateLastBlock()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L4965]
>  -> 
> [_setGenerationStampAndVerifyReplicas_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java#L426]{_}(){_}
>  -> 
> [_getStaleReplica()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L219].
>  
>  
> The optimization could be pre-sizing the ArrayList based on the actual number 
> of replicas (i.e. _List staleReplicas = new 
> ArrayList<>(replicas.length)_ ), which would minimize the number of 
> resizings and reallocations. Another way to do the optimization is to 
> maintain a persistent list of {_}staleReplicas{_}, so there is no need to 
> iterate over the replicas.
>  
> Same issue could also happen with 
> [_appendUCPartsConcise()_|https://github.com/apache/hadoop/blob/6be04633b55bbd67c2875e39977cd9d2308dc1d1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L349].
>  It takes in a StringBuilder with a [default size of 
> 150|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java#L792]
>  characters, which leads to risks of resizing when the number of replicas is 
> large. Within {_}BlockUnderConstructionFeature{_}, there are other similar 
> issues, including 
> [_addReplicaIfNotPresent()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L294]
>  or 
> [_setExpectedLocations()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L74].
>  
> Please let me know if there is anything wrong with the analysis above, or if 
> you have any comments on the optimization. Thanks!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882923#comment-17882923
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

KeeProMise commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1766306651


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   ```java
   if (LOG.isDebugEnabled()) {
     LOG.debug("{} received versionRequest response: {}", this, nsInfo);
   }
   ```





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
>     URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log statement without a 
> LOG.isDebugEnabled() guard, so the read lock is acquired even when debug 
> logging is disabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882922#comment-17882922
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

KeeProMise commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1766301368


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   Hi, IMO, ``if (LOG.isDebugEnabled()) {...}`` is better.





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
>     URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log statement without a 
> LOG.isDebugEnabled() guard, so the read lock is acquired even when debug 
> logging is disabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882841#comment-17882841
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

tomscut commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1765980808


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,9 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+if (LOG.isDebugEnabled()) {
+  LOG.debug(this + " received versionRequest response: " + nsInfo);
+}

Review Comment:
   Thank you for your advice; this approach achieves the same effect as well.





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log statement without a 
> LOG.isDebugEnabled() guard, so the read lock is acquired even when debug 
> logging is disabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17627) Performance optimization on BlockUnderConstructionFeature

2024-09-18 Thread Hao-Nan Zhu (Jira)
Hao-Nan Zhu created HDFS-17627:
--

 Summary: Performance optimization on BlockUnderConstructionFeature
 Key: HDFS-17627
 URL: https://issues.apache.org/jira/browse/HDFS-17627
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: server
Affects Versions: 3.3.0
Reporter: Hao-Nan Zhu


Hi, I’ve encountered performance bottlenecks in 
_blockmanagement.BlockUnderConstructionFeature_ and I wonder if there's a 
chance for optimization.

 

[_getStaleReplica()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L219]
 may cause performance degradation when the list of replicas is large. The 
method uses an *ArrayList* to collect stale replicas, which could cause memory 
re-allocations and potential OOM errors when the number of stale replicas 
increases. Furthermore, 
[_getStaleReplica()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L219]
 could also cause lock contention on code paths such as: 
[_updatePipelineInternal()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6054]
 (holding global lock) -> 
[_updateLastBlock()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L4965]
 -> 
[_setGenerationStampAndVerifyReplicas_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java#L426]{_}(){_}
 -> 
[_getStaleReplica()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L219].
 

 

One optimization could be to pre-size the ArrayList based on the actual number 
of replicas (i.e. _List staleReplicas = new 
ArrayList<>(replicas.length)_ ), which would minimize the number of 
resizes and reallocations. Another option would be to maintain a 
persisted list of {_}staleReplicas{_}, so there is no need to iterate over the 
replicas.

 

The same issue could also occur in 
[_appendUCPartsConcise()_|https://github.com/apache/hadoop/blob/6be04633b55bbd67c2875e39977cd9d2308dc1d1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L349].
 It takes in a StringBuilder with a [default size of 
150|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java#L792]
 characters, which risks resizing when the number of replicas is 
large. Within {_}BlockUnderConstructionFeature{_}, similar 
issues exist, including 
[_addReplicaIfNotPresent()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L294]
 or 
[_setExpectedLocations()_|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java#L74].

 

Please let me know if there is anything wrong with the analysis above, or if 
you have any comments on the optimization. Thanks!
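As a rough illustration of the pre-sizing suggestion, here is a minimal sketch. All names (PreSizingSketch, staleReplicas, the generation-stamp arrays) are hypothetical stand-ins that only mirror the issue text; this is not the actual BlockUnderConstructionFeature code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the pre-sizing idea: allocate the backing array
// once at the known upper bound instead of growing it incrementally.
public class PreSizingSketch {
    // Collect "stale" entries, pre-sizing the list to replicas.length.
    static List<Long> staleReplicas(long[] replicaGenStamps, long blockGenStamp) {
        List<Long> stale = new ArrayList<>(replicaGenStamps.length);
        for (long gs : replicaGenStamps) {
            if (gs != blockGenStamp) {
                stale.add(gs);
            }
        }
        return stale;
    }

    public static void main(String[] args) {
        long[] genStamps = {1001L, 1002L, 1002L, 1003L};
        // Replicas whose generation stamp differs from the block's (1002):
        System.out.println(staleReplicas(genStamps, 1002L)); // [1001, 1003]

        // The same idea applies to the StringBuilder case: size it from the
        // replica count instead of relying on a fixed default such as 150.
        StringBuilder sb = new StringBuilder(Math.max(150, genStamps.length * 16));
        sb.append("replicas=").append(genStamps.length);
        System.out.println(sb);
    }
}
```

With the capacity fixed up front, the list never reallocates during the loop; whether this matters in practice depends on typical replica counts.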



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882793#comment-17882793
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

hadoop-yetus commented on PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#issuecomment-2359267690

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 267m  8s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 408m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7053/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7053 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 75a8e6b6150d 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 11aa442354fef3ea355d2bd33aff5fb221060ea8 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7053/1/testReport/ |
   | Max. process+thread count | 3144 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7053/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.a

[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882717#comment-17882717
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

ayushtkn commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1765089605


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,9 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+if (LOG.isDebugEnabled()) {
+  LOG.debug(this + " received versionRequest response: " + nsInfo);
+}

Review Comment:
   I think if we change this to
   ```
   LOG.debug("{} received versionRequest response: {}", this, nsInfo);
   ```
   
   in this case as well ``toString()`` won't be invoked, and we can get away 
without having this ``isDebugEnabled`` guard
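For context, the deferral contract can be sketched outside Hadoop with a stand-in logger. This is only an illustration of the SLF4J parameterized-logging behavior, not SLF4J or Hadoop code; every name below is made up:

```java
// Sketch of why parameterized logging defers the expensive toString():
// the argument is passed by reference, and formatting only happens when
// the level is enabled. All names here are illustrative.
public class DeferredLoggingDemo {
    static boolean toStringCalled;
    static boolean debugEnabled = false; // debug logging is off

    // Stand-in for an object whose toString() is costly (e.g. takes a lock).
    static final Object EXPENSIVE = new Object() {
        @Override public String toString() {
            toStringCalled = true;
            return "nsInfo";
        }
    };

    // Mimics LOG.debug("...{}", arg): format only when enabled.
    static void debug(String fmt, Object arg) {
        if (debugEnabled) {
            System.out.println(fmt.replace("{}", String.valueOf(arg)));
        }
    }

    public static void main(String[] args) {
        toStringCalled = false;

        // Parameterized style: toString() is never reached while debug is off.
        debug("received versionRequest response: {}", EXPENSIVE);
        if (toStringCalled) throw new AssertionError("toString ran unexpectedly");

        // Concatenation style: toString() always runs, enabled or not.
        String msg = "received versionRequest response: " + EXPENSIVE;
        if (!toStringCalled) throw new AssertionError("toString should have run");
        System.out.println("ok: concatenation forced toString, placeholders did not");
    }
}
```

The guard is still useful when computing an argument itself is expensive (not just its toString()), which is the distinction being discussed in this thread.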





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log statement without a 
> LOG.isDebugEnabled() guard, so the read lock is acquired even when debug 
> logging is disabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17625) libhdfspp: Failed to read expected SASL data transfer protection handshake

2024-09-18 Thread mukvin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882704#comment-17882704
 ] 

mukvin commented on HDFS-17625:
---

Hi [~wheat9],
I looked at the Kerberos-related issues you developed.

I have also gone through the code, but this issue has confused me for a long time.

Could you give me a tip?

Looking forward to your reply, thank you very much.

> libhdfspp: Failed to read expected SASL data transfer protection handshake
> --
>
> Key: HDFS-17625
> URL: https://issues.apache.org/jira/browse/HDFS-17625
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.6
>Reporter: mukvin
>Priority: Blocker
>
> I am using libhdfspp to connect to a secure (Kerberos) HDFS.
> The cluster was set up on a single machine with only one namenode and one 
> datanode.
> I found that I can read the data correctly with the command `hdfs dfs -cat 
> /user/data.csv`.
> But if I use libhdfspp/examples/cat to cat /user/data.csv, the 
> error is the following:
> ```
> $./cat /user/test_tbl1.csv
> Error reading the file: Connection reset by peer
> [WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
> 139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
> disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
> connected
> ```
> ```
> 2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Failed to read expected SASL data transfer protection handshake from client 
> at /127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
> which does not support SASL data transfer protection
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException:
>  Received 1c51a1 instead of deadbeef from client.
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
>     at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
>     at java.lang.Thread.run(Thread.java:750)
> ```
> and hdfs-site.xml
> ```
> $ cat hdfs-site.xml
> <configuration>
>   <property><name>dfs.namenode.rpc-address</name><value>0.0.0.0:8020</value></property>
>   <property><name>dfs.replication</name><value>1</value></property>
>   <property><name>dfs.block.access.token.enable</name><value>true</value></property>
>   <property><name>dfs.namenode.keytab.file</name><value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value></property>
>   <property><name>dfs.namenode.kerberos.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.namenode.kerberos.https.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.secondary.namenode.keytab.file</name><value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value></property>
>   <property><name>dfs.secondary.namenode.kerberos.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.secondary.namenode.kerberos.https.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.datanode.data.dir.perm</name><value>700</value></property>
>   <property><name>dfs.datanode.keytab.file</name><value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value></property>
>   <property><name>dfs.datanode.kerberos.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.datanode.kerberos.https.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.encrypt.data.transfer</name><value>false</value></property>
>   <property><name>dfs.data.transfer.protection</name><value>integrity</value></property>
>   <property><name>dfs.http.policy</name><value>HTTPS_ONLY</value></property>
>   <property><name>dfs.datanode.address</name><value>0.0.0.0:61004</value></property>
>   <property><name>dfs.datanode.http.address</name><value>0.0.0.0:61006</value></property>
>   <property><name>dfs.datanode.https.address</name><value>0.0.0.0:61010</value></property>
>   <property><name>dfs.client.https.need-auth</name><value>false</value></property>
> </configuration>
> and core-site.xml
> ```
> $ cat core-site.xml
> <configuration>
>   <property><name>fs.default.name</name><value>hdfs://0.0.0.0</value></property>
>   <property><name>fs.defaultFS</name><value>hdfs://0.0.0.0</value></property>
>   <property><name>hadoop.tmp.dir</name><value>/data/1/hadoop-kerberos/temp_data/336</value></property>
>   <property><name>hadoop.security.authentication</name><value>kerberos</value></property>
>   <property><name>hadoop.security.authorization</name><value>true</value></property>
> </configuration>
> ```
> Can anyone help, pls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17626:
--
Labels: pull-request-available  (was: )

> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log statement without a 
> LOG.isDebugEnabled() guard, so the read lock is acquired even when debug 
> logging is disabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882703#comment-17882703
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

tomscut opened a new pull request, #7053:
URL: https://github.com/apache/hadoop/pull/7053

   
   JIRA: HDFS-17626
   
   ### Description of PR
   During the datanode startup process there is a debug log statement without a 
LOG.isDebugEnabled() guard, so the read lock is acquired even when debug logging 
is disabled. The guard should be added here to reduce lock contention.
   
   ### How was this patch tested?
   Not required
   
   




> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process there is a debug log statement without a 
> LOG.isDebugEnabled() guard, so the read lock is acquired even when debug 
> logging is disabled. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-18 Thread Tao Li (Jira)
Tao Li created HDFS-17626:
-

 Summary: Reduce lock contention at datanode startup
 Key: HDFS-17626
 URL: https://issues.apache.org/jira/browse/HDFS-17626
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tao Li
Assignee: Tao Li
 Attachments: image-2024-09-18-20-45-56-999.png

During the datanode startup process there is a debug log statement without a 
LOG.isDebugEnabled() guard, so the read lock is acquired even when debug logging 
is disabled. The guard should be added here to reduce lock contention.

!image-2024-09-18-20-45-56-999.png|width=333,height=263!

!https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17625) libhdfspp: Failed to read expected SASL data transfer protection handshake

2024-09-18 Thread mukvin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mukvin updated HDFS-17625:
--
Summary: libhdfspp: Failed to read expected SASL data transfer protection 
handshake  (was: Failed to read expected SASL data transfer protection 
handshake)

> libhdfspp: Failed to read expected SASL data transfer protection handshake
> --
>
> Key: HDFS-17625
> URL: https://issues.apache.org/jira/browse/HDFS-17625
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.6
>Reporter: mukvin
>Priority: Blocker
>
> I am using libhdfspp to connect to a secure (Kerberos) HDFS.
> The cluster was set up on a single machine with only one namenode and one 
> datanode.
> I found that I can read the data correctly with the command `hdfs dfs -cat 
> /user/data.csv`.
> But if I use libhdfspp/examples/cat to cat /user/data.csv, the 
> error is the following:
> ```
> $./cat /user/test_tbl1.csv
> Error reading the file: Connection reset by peer
> [WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
> 139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
> disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
> connected
> ```
> ```
> 2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Failed to read expected SASL data transfer protection handshake from client 
> at /127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
> which does not support SASL data transfer protection
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException:
>  Received 1c51a1 instead of deadbeef from client.
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
>     at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
>     at java.lang.Thread.run(Thread.java:750)
> ```
> and hdfs-site.xml
> ```
> $ cat hdfs-site.xml
> <configuration>
>   <property><name>dfs.namenode.rpc-address</name><value>0.0.0.0:8020</value></property>
>   <property><name>dfs.replication</name><value>1</value></property>
>   <property><name>dfs.block.access.token.enable</name><value>true</value></property>
>   <property><name>dfs.namenode.keytab.file</name><value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value></property>
>   <property><name>dfs.namenode.kerberos.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.namenode.kerberos.https.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.secondary.namenode.keytab.file</name><value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value></property>
>   <property><name>dfs.secondary.namenode.kerberos.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.secondary.namenode.kerberos.https.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.datanode.data.dir.perm</name><value>700</value></property>
>   <property><name>dfs.datanode.keytab.file</name><value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value></property>
>   <property><name>dfs.datanode.kerberos.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.datanode.kerberos.https.principal</name><value>hdfs/had...@xxx.com</value></property>
>   <property><name>dfs.encrypt.data.transfer</name><value>false</value></property>
>   <property><name>dfs.data.transfer.protection</name><value>integrity</value></property>
>   <property><name>dfs.http.policy</name><value>HTTPS_ONLY</value></property>
>   <property><name>dfs.datanode.address</name><value>0.0.0.0:61004</value></property>
>   <property><name>dfs.datanode.http.address</name><value>0.0.0.0:61006</value></property>
>   <property><name>dfs.datanode.https.address</name><value>0.0.0.0:61010</value></property>
>   <property><name>dfs.client.https.need-auth</name><value>false</value></property>
> </configuration>
> ```
> and core-site.xml
> ```
> $ cat core-site.xml
> <configuration>
>   <property><name>fs.default.name</name><value>hdfs://0.0.0.0</value></property>
>   <property><name>fs.defaultFS</name><value>hdfs://0.0.0.0</value></property>
>   <property><name>hadoop.tmp.dir</name><value>/data/1/hadoop-kerberos/temp_data/336</value></property>
>   <property><name>hadoop.security.authentication</name><value>kerberos</value></property>
>   <property><name>hadoop.security.authorization</name><value>true</value></property>
> </configuration>
> ```
> Can anyone help, pls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17219) Inconsistent count results when upgrading hdfs cluster from 2.10.2 to 3.3.6

2024-09-14 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated HDFS-17219:
--
Description: 
When upgrading an HDFS cluster from 2.10.2 to 3.3.6, the results returned by the *dfs count* command are inconsistent.
h1. Reproduce

Start up a 2.10.2 HDFS cluster (1 NN, 2 DN, 1 SNN) and execute the following commands:
{code:java}
dfs -mkdir /GscWZRxS
dfs -put -f  -d /tmp/hpLjvJVW/cl /GscWZRxS/
dfs -put -f  -d /tmp/hpLjvJVW/Zjpk /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR
dfsadmin -clrQuota /GscWZRxS/cl
dfsadmin -refreshSuperUserGroupsConfiguration
dfs -mkdir /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf/mGpVA
dfsadmin -refreshCallQueue
dfsadmin -clrQuota /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd
dfsadmin -setSpaceQuota 2 -storageType DISK 
/GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf
dfsadmin -refreshNodes
dfsadmin -setSpaceQuota 2 -storageType DISK /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd
dfsadmin -clrSpaceQuota -storageType ARCHIVE /GscWZRxS/cl
dfsadmin -restoreFailedStorage true{code}
Before the upgrade, check the quota results:
{code:java}
bin/hdfs dfs -count -q -h -u /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf 
none             inf            none             inf 
/GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf {code}
Then prepare the upgrade: enter safemode, {*}create the image{*}, shut down the cluster, and start up the new cluster:
{code:java}
bin/hdfs dfs -count -q -h -u /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf 
8.0 E           8.0 E            none             inf 
/GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf {code}
The values of the first two columns are inconsistent with the quota I set 
before.

I have attached the file used by the commands. I am digging into the root cause and will try to submit a patch once I fix it. Any help is appreciated!
h1. Root Cause

The issue occurs when persisting data to the FSImage.

The quota values stored in the edit logs are correct. However, once HDFS creates an FSImage, the edit logs are discarded, so the quota information is lost.
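A hedged observation on the "8.0 E" values above: 8.0 E is exactly how Long.MAX_VALUE (2^63 - 1 bytes) renders in Hadoop-style human-readable output, which suggests the "quota not set" sentinel is being read back as a huge positive value after the FSImage round trip. A minimal sketch of the arithmetic (a simplified stand-in for Hadoop's formatting, not its actual code):

```python
def human_readable(n: int) -> str:
    """Simplified stand-in for Hadoop's human-readable byte formatting."""
    units = ["", "K", "M", "G", "T", "P", "E"]
    size = float(n)
    for unit in units:
        if size < 1024 or unit == "E":
            return f"{size:.1f} {unit}".strip()
        size /= 1024

LONG_MAX = 2**63 - 1  # Java Long.MAX_VALUE
print(human_readable(LONG_MAX))  # prints "8.0 E"
```

If this reading is right, the bug would be in how the unset-quota sentinel is serialized or deserialized, not in the quota commands themselves.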

  was:
When upgrading an HDFS cluster from 2.10.2 to 3.3.6, the results returned by the *dfs count* command are inconsistent.
h1. Reproduce

Start up a 2.10.2 HDFS cluster (1 NN, 2 DN, 1 SNN) and execute the following commands:
{code:java}
dfs -mkdir /GscWZRxS
dfs -put -f  -d /tmp/hpLjvJVW/cl /GscWZRxS/
dfs -put -f  -d /tmp/hpLjvJVW/Zjpk /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR
dfsadmin -clrQuota /GscWZRxS/cl
dfsadmin -refreshSuperUserGroupsConfiguration
dfs -mkdir /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf/mGpVA
dfsadmin -refreshCallQueue
dfsadmin -clrQuota /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd
dfsadmin -setSpaceQuota 2 -storageType DISK 
/GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf
dfsadmin -refreshNodes
dfsadmin -setSpaceQuota 2 -storageType DISK /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd
dfsadmin -clrSpaceQuota -storageType ARCHIVE /GscWZRxS/cl
dfsadmin -restoreFailedStorage true{code}
Before the upgrade, check the quota results:
{code:java}
bin/hdfs dfs -count -q -h -u /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf 
none             inf            none             inf 
/GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf {code}
Then prepare the upgrade: enter safemode, create the image, shut down the cluster, and start up the new cluster:
{code:java}
bin/hdfs dfs -count -q -h -u /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf 
8.0 E           8.0 E            none             inf 
/GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf {code}
The values of the first two columns are inconsistent with the quota I set 
before.

I have attached the file used by the commands. I am digging into the root cause and will try to submit a patch once I fix it. Any help is appreciated!


> Inconsistent count results when upgrading hdfs cluster from 2.10.2 to 3.3.6
> ---
>
> Key: HDFS-17219
> URL: https://issues.apache.org/jira/browse/HDFS-17219
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.2, 3.3.6
>Reporter: Ke Han
>Priority: Major
> Attachments: hpLjvJVW.tar.gz
>
>
> When upgrading an HDFS cluster from 2.10.2 to 3.3.6, the results returned by
> the *dfs count* command are inconsistent.
> h1. Reproduce
> Start up a 2.10.2 HDFS cluster (1 NN, 2 DN, 1 SNN) and execute the following
> commands:
> {code:java}
> dfs -mkdir /GscWZRxS
> dfs -put -f  -d /tmp/hpLjvJVW/cl /GscWZRxS/
> dfs -put -f  -d /tmp/hpLjvJVW/Zjpk /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR
> dfsadmin -clrQuota /GscWZRxS/cl
> dfsadmin -refreshSuperUserGroupsConfiguration
> dfs -mkdir /GscWZRxS/cl/lBsmFBlyBd/pozIeNFjzd/PsLbgpR/Zjpk/Cf/mGpVA
> dfsadmin -refreshCallQueue
> dfsadmin -clrQuota /G

[jira] [Updated] (HDFS-17625) Failed to read expected SASL data transfer protection handshake

2024-09-14 Thread mukvin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mukvin updated HDFS-17625:
--
Priority: Blocker  (was: Major)

> Failed to read expected SASL data transfer protection handshake
> ---
>
> Key: HDFS-17625
> URL: https://issues.apache.org/jira/browse/HDFS-17625
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.6
>Reporter: mukvin
>Priority: Blocker
>
> I am using libhdfspp to connect to a secure (Kerberos) HDFS.
> This HDFS runs on a single machine with only one NameNode and one DataNode.
> I found that I can read the data correctly with `hdfs dfs -cat /user/data.csv`.
> But if I use libhdfspp/examples/cat to cat /user/data.csv, I get the following
> error:
> ```
> $./cat /user/test_tbl1.csv
> Error reading the file: Connection reset by peer
> [WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
> 139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
> disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
> connected
> ```
> ```
> 2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Failed to read expected SASL data transfer protection handshake from client 
> at /127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
> which does not support SASL data transfer protection
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException:
>  Received 1c51a1 instead of deadbeef from client.
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
>     at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
>     at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
>     at java.lang.Thread.run(Thread.java:750)
> ```
> and hdfs-site.xml
> ```
> $ cat hdfs-site.xml
> 
>     
>         dfs.namenode.rpc-address
>         0.0.0.0:8020
>     
>     
>         dfs.replication
>         1
>     
>     
>         dfs.block.access.token.enable
>         true
>     
>     
>         dfs.namenode.keytab.file
>         /data/1/hadoop-kerberos/keytabs/hdfs.keytab
>     
>     
>         dfs.namenode.kerberos.principal
>         hdfs/had...@xxx.com
>     
>     
>         dfs.namenode.kerberos.https.principal
>         hdfs/had...@xxx.com
>     
>     
>         dfs.secondary.namenode.keytab.file
>         /data/1/hadoop-kerberos/keytabs/hdfs.keytab
>     
>     
>         dfs.secondary.namenode.kerberos.principal
>         hdfs/had...@xxx.com
>     
>     
>         dfs.secondary.namenode.kerberos.https.principal
>         hdfs/had...@xxx.com
>     
>     
>         dfs.datanode.data.dir.perm
>         700
>     
>     
>         dfs.datanode.keytab.file
>         /data/1/hadoop-kerberos/keytabs/hdfs.keytab
>     
>     
>         dfs.datanode.kerberos.principal
>         hdfs/had...@xxx.com
>     
>     
>         dfs.datanode.kerberos.https.principal
>         hdfs/had...@xxx.com
>     
>     
>         dfs.encrypt.data.transfer
>         false
>     
>     
> dfs.data.transfer.protection
> integrity
>     
>     
>         dfs.http.policy
>         HTTPS_ONLY
>     
> 
>   dfs.datanode.address
>   0.0.0.0:61004
> 
> 
>   dfs.datanode.http.address
>   0.0.0.0:61006
> 
> 
>   dfs.datanode.https.address
>   0.0.0.0:61010
> 
>  
> 
>      
>          dfs.client.https.need-auth
>          false
>      
> 
> ```
> and core-site.xml
> ```
> $ cat core-site.xml
> 
> <property><name>fs.default.name</name><value>hdfs://0.0.0.0</value></property>
> <property><name>fs.defaultFS</name><value>hdfs://0.0.0.0</value></property>
> <property><name>hadoop.tmp.dir</name><value>/data/1/hadoop-kerberos/temp_data/336</value></property>
> <property><name>hadoop.security.authentication</name><value>kerberos</value></property>
> <property><name>hadoop.security.authorization</name><value>true</value></property>
> 
> ```
> Can anyone help, pls.
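A hedged note on the InvalidMagicNumberException above: when dfs.data.transfer.protection is set, the DataNode expects every incoming data-transfer connection to open with the 4-byte SASL magic 0xDEADBEEF; a client that skips the SASL handshake (for example, one that is not reading the same hdfs-site.xml as the cluster) sends a plain data-transfer frame instead, whose first bytes (here 1c51a1) fail the check. A minimal sketch of that server-side check (illustrative only, not Hadoop's actual code):

```python
import struct

SASL_TRANSFER_MAGIC_NUMBER = 0xDEADBEEF  # magic prefix of a SASL handshake

def check_sasl_magic(first_four_bytes: bytes) -> None:
    """Raise if the connection does not begin with the SASL magic number."""
    (magic,) = struct.unpack(">I", first_four_bytes)
    if magic != SASL_TRANSFER_MAGIC_NUMBER:
        raise ValueError(f"Received {magic:x} instead of deadbeef from client.")

check_sasl_magic(struct.pack(">I", 0xDEADBEEF))  # a SASL handshake passes
```

If this is the failure mode, a first thing to try would be pointing the libhdfspp client at a configuration directory containing the same dfs.data.transfer.protection setting as the cluster; whether the cat example picks that configuration up is an assumption worth verifying.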






[jira] [Updated] (HDFS-17625) Failed to read expected SASL data transfer protection handshake

2024-09-13 Thread mukvin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mukvin updated HDFS-17625:
--
Description: 
I am using libhdfspp to connect to a secure (Kerberos) HDFS.

This HDFS runs on a single machine with only one NameNode and one DataNode.

I found that I can read the data correctly with `hdfs dfs -cat /user/data.csv`.

But if I use libhdfspp/examples/cat to cat /user/data.csv, I get the following error:
```

$./cat /user/test_tbl1.csv
Error reading the file: Connection reset by peer
[WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
connected

```

```

2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Failed to read expected SASL data transfer protection handshake from client at 
/127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
which does not support SASL data transfer protection
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException: 
Received 1c51a1 instead of deadbeef from client.
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
    at java.lang.Thread.run(Thread.java:750)

```

and hdfs-site.xml
```

$ cat hdfs-site.xml

    
        dfs.namenode.rpc-address
        0.0.0.0:8020
    
    
        dfs.replication
        1
    
    
        dfs.block.access.token.enable
        true
    
    
        dfs.namenode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.namenode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.namenode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.secondary.namenode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.secondary.namenode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.secondary.namenode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.datanode.data.dir.perm
        700
    
    
        dfs.datanode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.datanode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.datanode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.encrypt.data.transfer
        false
    
    
dfs.data.transfer.protection
integrity
    
    
        dfs.http.policy
        HTTPS_ONLY
    

  dfs.datanode.address
  0.0.0.0:61004


  dfs.datanode.http.address
  0.0.0.0:61006


  dfs.datanode.https.address
  0.0.0.0:61010

 

     
         dfs.client.https.need-auth
         false
     

```

and core-site.xml

```

$ cat core-site.xml


<property><name>fs.default.name</name><value>hdfs://0.0.0.0</value></property>
<property><name>fs.defaultFS</name><value>hdfs://0.0.0.0</value></property>
<property><name>hadoop.tmp.dir</name><value>/data/1/hadoop-kerberos/temp_data/336</value></property>
<property><name>hadoop.security.authentication</name><value>kerberos</value></property>
<property><name>hadoop.security.authorization</name><value>true</value></property>


```

Can anyone help, pls.

  was:
I am using libhdfspp to connect to a secure (Kerberos) HDFS.

This HDFS was created in *standalone mode.*

I found that I can read the data correctly with `hdfs dfs -cat /user/data.csv`.

But if I use libhdfspp/examples/cat to cat /user/data.csv, I get the following error:
```

$./cat /user/test_tbl1.csv
Error reading the file: Connection reset by peer
[WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
connected

```

```

2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Failed to read expected SASL data transfer protection handshake from client at 
/127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
which does not support SASL data transfer protection
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException: 
Received 1c51a1 instead of deadbeef from client.
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run

[jira] [Updated] (HDFS-17625) Failed to read expected SASL data transfer protection handshake

2024-09-13 Thread mukvin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mukvin updated HDFS-17625:
--
Description: 
I am using libhdfspp to connect to a secure (Kerberos) HDFS.

This HDFS was created in *standalone mode.*

I found that I can read the data correctly with `hdfs dfs -cat /user/data.csv`.

But if I use libhdfspp/examples/cat to cat /user/data.csv, I get the following error:
```

$./cat /user/test_tbl1.csv
Error reading the file: Connection reset by peer
[WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
connected

```

```

2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Failed to read expected SASL data transfer protection handshake from client at 
/127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
which does not support SASL data transfer protection
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException: 
Received 1c51a1 instead of deadbeef from client.
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
    at java.lang.Thread.run(Thread.java:750)

```

and hdfs-site.xml
```

$ cat hdfs-site.xml

    
        dfs.namenode.rpc-address
        0.0.0.0:8020
    
    
        dfs.replication
        1
    
    
        dfs.block.access.token.enable
        true
    
    
        dfs.namenode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.namenode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.namenode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.secondary.namenode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.secondary.namenode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.secondary.namenode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.datanode.data.dir.perm
        700
    
    
        dfs.datanode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.datanode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.datanode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.encrypt.data.transfer
        false
    
    
dfs.data.transfer.protection
integrity
    
    
        dfs.http.policy
        HTTPS_ONLY
    

  dfs.datanode.address
  0.0.0.0:61004


  dfs.datanode.http.address
  0.0.0.0:61006


  dfs.datanode.https.address
  0.0.0.0:61010

 

     
         dfs.client.https.need-auth
         false
     

```

and core-site.xml

```

$ cat core-site.xml


<property><name>fs.default.name</name><value>hdfs://0.0.0.0</value></property>
<property><name>fs.defaultFS</name><value>hdfs://0.0.0.0</value></property>
<property><name>hadoop.tmp.dir</name><value>/data/1/hadoop-kerberos/temp_data/336</value></property>
<property><name>hadoop.security.authentication</name><value>kerberos</value></property>
<property><name>hadoop.security.authorization</name><value>true</value></property>


```

Can anyone help, pls.

  was:
I am using libhdfspp to connect to a secure (Kerberos) HDFS.

I found that I can read the data correctly with `hdfs dfs -cat /user/data.csv`.

But if I use libhdfspp/examples/cat to cat /user/data.csv, I get the following error:
```

$./cat /user/test_tbl1.csv
Error reading the file: Connection reset by peer
[WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
connected

```


```

2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Failed to read expected SASL data transfer protection handshake from client at 
/127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
which does not support SASL data transfer protection
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException: 
Received 1c51a1 instead of deadbeef from client.
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
    at java.lang.Thread.run(Thread.java:750)

```

and hdfs-site.xml
```

$ cat

[jira] [Created] (HDFS-17625) Failed to read expected SASL data transfer protection handshake

2024-09-13 Thread mukvin (Jira)
mukvin created HDFS-17625:
-

 Summary: Failed to read expected SASL data transfer protection 
handshake
 Key: HDFS-17625
 URL: https://issues.apache.org/jira/browse/HDFS-17625
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.6
Reporter: mukvin


I am using libhdfspp to connect to a secure (Kerberos) HDFS.

I found that I can read the data correctly with `hdfs dfs -cat /user/data.csv`.

But if I use libhdfspp/examples/cat to cat /user/data.csv, I get the following error:
```

$./cat /user/test_tbl1.csv
Error reading the file: Connection reset by peer
[WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 
139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error 
disconnecting socket: shutdown() threwshutdown: Transport endpoint is not 
connected

```


```

2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Failed to read expected SASL data transfer protection handshake from client at 
/127.0.0.1:59037. Perhaps the client is running an older version of Hadoop 
which does not support SASL data transfer protection
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException: 
Received 1c51a1 instead of deadbeef from client.
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
    at java.lang.Thread.run(Thread.java:750)

```

and hdfs-site.xml
```

$ cat hdfs-site.xml

    
        dfs.namenode.rpc-address
        0.0.0.0:8020
    
    
        dfs.replication
        1
    
    
        dfs.block.access.token.enable
        true
    
    
        dfs.namenode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.namenode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.namenode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.secondary.namenode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.secondary.namenode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.secondary.namenode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.datanode.data.dir.perm
        700
    
    
        dfs.datanode.keytab.file
        /data/1/hadoop-kerberos/keytabs/hdfs.keytab
    
    
        dfs.datanode.kerberos.principal
        hdfs/had...@xxx.com
    
    
        dfs.datanode.kerberos.https.principal
        hdfs/had...@xxx.com
    
    
        dfs.encrypt.data.transfer
        false
    
    
dfs.data.transfer.protection
integrity
    
    
        dfs.http.policy
        HTTPS_ONLY
    

  dfs.datanode.address
  0.0.0.0:61004


  dfs.datanode.http.address
  0.0.0.0:61006


  dfs.datanode.https.address
  0.0.0.0:61010

 

     
         dfs.client.https.need-auth
         false
     

```

and core-site.xml

```

$ cat core-site.xml


<property><name>fs.default.name</name><value>hdfs://0.0.0.0</value></property>
<property><name>fs.defaultFS</name><value>hdfs://0.0.0.0</value></property>
<property><name>hadoop.tmp.dir</name><value>/data/1/hadoop-kerberos/temp_data/336</value></property>
<property><name>hadoop.security.authentication</name><value>kerberos</value></property>
<property><name>hadoop.security.authorization</name><value>true</value></property>


```

Can anyone help, pls.






[jira] [Commented] (HDFS-17624) The availableCount will be deducted only if the excludedNode is included in the selected scope.

2024-09-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881537#comment-17881537
 ] 

ASF GitHub Bot commented on HDFS-17624:
---

hadoop-yetus commented on PR #7042:
URL: https://github.com/apache/hadoop/pull/7042#issuecomment-2348829791

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 199m 19s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7042/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 294m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.net.TestDFSNetworkTopology |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7042/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7042 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8e6d8c50326f 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6c4fa42327348ac515c7065f20996fbc1fe66a3b |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7042/1/testReport/ |
   | Max. process+thread count | 4419 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/ha

[jira] [Updated] (HDFS-17624) The availableCount will be deducted only if the excludedNode is included in the selected scope.

2024-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17624:
--
Labels: pull-request-available  (was: )

> The availableCount will be deducted only if the excludedNode is included in 
> the selected scope.
> ---
>
> Key: HDFS-17624
> URL: https://issues.apache.org/jira/browse/HDFS-17624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> Presently, if the chosen scope is /default/rack1 and the excluded node is
> /default/rack2/host2, the available count will still be deducted even though
> the node is outside the scope.
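The fix the title describes can be pictured as a simple path-containment test: only decrement the available count when the excluded node's topology path lies under the chosen scope. A sketch with hypothetical helper names (not the actual NetworkTopology code):

```python
def is_under_scope(scope: str, node_path: str) -> bool:
    """True iff node_path is the scope itself or lies beneath it."""
    scope = scope.rstrip("/")
    return node_path == scope or node_path.startswith(scope + "/")

def deduct_available(available: int, scope: str, excluded: str) -> int:
    """Deduct one slot only when the excluded node falls inside the scope."""
    return available - 1 if is_under_scope(scope, excluded) else available

# /default/rack2/host2 is outside /default/rack1, so nothing is deducted
print(deduct_available(5, "/default/rack1", "/default/rack2/host2"))  # prints 5
```

Note the prefix check appends "/" before matching, so a scope like /default/rack1 does not accidentally match /default/rack10/host1.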






[jira] [Commented] (HDFS-17620) Better block placement for small EC files

2024-09-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881473#comment-17881473
 ] 

ASF GitHub Bot commented on HDFS-17620:
---

hadoop-yetus commented on PR #7035:
URL: https://github.com/apache/hadoop/pull/7035#issuecomment-2348089297

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 50s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7035/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 54s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7035/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  javac  |   0m 54s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7035/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  compile  |   0m 49s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7035/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | -1 :x: |  javac  |   0m 49s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7035/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 58s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7035/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 8 new + 13 unchanged - 
0 fixed = 21 total (was 13)  |
   | -1 :x: |  mvnsite  |   0m 53s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7035/2/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Priv

[jira] [Updated] (HDFS-16984) Directory timestamp lost during the upgrade process

2024-09-12 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated HDFS-16984:
--
Description: 
h1. Symptoms

The access timestamp of a directory is lost when upgrading an HDFS 
cluster from 2.10.2 to 3.3.6.
h1. Reproduce

Start a four-node HDFS cluster running version 2.10.2.

Execute the following commands. (The client runs on the NameNode; we have 
minimized the command sequence needed to reproduce.)
{code:java}
bin/hdfs dfs -mkdir /GUBIkxOc
bin/hdfs dfs -put -f -p -d /tmp/upfuzz/hdfs/GUBIkxOc/bQfxf /GUBIkxOc/{code}
Perform a read on the old version:
{code:java}
bin/hdfs dfs -ls     -t  -r -u /GUBIkxOc/

Found 1 items
drwxr-xr-x   - 20001 998                 0 2023-04-17 16:15 /GUBIkxOc/bQfxf{code}
Then perform a full-stop upgrade of the entire cluster to 3.3.6, following 
the upgrade procedure on the website: (1) enter safe mode, (2) rolling 
upgrade prepare, (3) leave safe mode. When all nodes have started on the 
new version, perform the same read:
{code:java}
Found 1 items
drwxr-xr-x   - 20001 998                 0 1970-01-01 00:00 /GUBIkxOc/bQfxf{code}
The access timestamp of directory /GUBIkxOc/bQfxf is lost: it changes from 
2023-04-17 16:15 to 1970-01-01 00:00.

PS: The rolling upgrade prepare must happen after the commands above have 
been executed.

I have also attached the required file: +/tmp/upfuzz/hdfs/GUBIkxOc/bQfxf+.
h1. Root Cause

When the FSImage is created, the access time field of a directory is not 
persisted.

If users perform an upgrade without creating an FSImage, this bug does not 
occur, because the access time is stored in the edit log. However, once an 
FSImage is created, all edit logs before that snapshot are invalidated: when 
the new version starts up, it reconstructs the in-memory file system from 
the FSImage alone and ignores those edit logs.

This can also happen during 3.x-to-3.x upgrades, since the access time is 
not properly persisted there either.

We should make sure the access time of a directory is persisted just as it 
is for files. I have submitted a PR with a fix.
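The root cause above can be sketched with a minimal round-trip model. The class and field names here are illustrative, not the actual Hadoop FSImage serialization code: the point is only that if the image writer records access time for files but skips it for directories, every restart from the image resets directory atime to the epoch, exactly as observed.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class DirAtimeDemo {
    // Hypothetical stand-in for an inode record in the image.
    static class INodeRecord {
        final String path;
        final boolean isDirectory;
        final long mtime;
        final long atime;

        INodeRecord(String path, boolean isDirectory, long mtime, long atime) {
            this.path = path;
            this.isDirectory = isDirectory;
            this.mtime = mtime;
            this.atime = atime;
        }

        // Buggy save path: access time is written for files only.
        void writeBuggy(DataOutput out) throws IOException {
            out.writeUTF(path);
            out.writeBoolean(isDirectory);
            out.writeLong(mtime);
            if (!isDirectory) {
                out.writeLong(atime);
            }
        }

        static INodeRecord readBuggy(DataInput in) throws IOException {
            String p = in.readUTF();
            boolean dir = in.readBoolean();
            long m = in.readLong();
            long a = dir ? 0L : in.readLong(); // directory atime falls back to epoch
            return new INodeRecord(p, dir, m, a);
        }

        // Fixed save path: access time is persisted for directories as well.
        void writeFixed(DataOutput out) throws IOException {
            out.writeUTF(path);
            out.writeBoolean(isDirectory);
            out.writeLong(mtime);
            out.writeLong(atime);
        }

        static INodeRecord readFixed(DataInput in) throws IOException {
            return new INodeRecord(in.readUTF(), in.readBoolean(),
                    in.readLong(), in.readLong());
        }
    }

    // Serialize a directory inode and load it back, returning the
    // access time visible after the simulated restart.
    static long atimeAfterReload(boolean fixed) throws IOException {
        INodeRecord dir = new INodeRecord("/GUBIkxOc/bQfxf", true,
                1681748100000L, 1681748100000L);
        ByteArrayOutputStream image = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(image);
        if (fixed) { dir.writeFixed(out); } else { dir.writeBuggy(out); }
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(image.toByteArray()));
        INodeRecord reloaded =
                fixed ? INodeRecord.readFixed(in) : INodeRecord.readBuggy(in);
        return reloaded.atime;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("buggy: atime after reload = " + atimeAfterReload(false));
        System.out.println("fixed: atime after reload = " + atimeAfterReload(true));
    }
}
```

With the buggy writer the reloaded directory atime is 0 (1970-01-01); with the fixed writer the original atime survives the round trip.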

  was:
h1. Symptoms

The access timestamp of a directory is lost when upgrading an HDFS 
cluster from 2.10.2 to 3.3.6.
h1. Reproduce

Start a four-node HDFS cluster running version 2.10.2.

Execute the following commands. (The client runs on the NameNode; we have 
minimized the command sequence needed to reproduce.)
{code:java}
bin/hdfs dfs -mkdir /GUBIkxOc
bin/hdfs dfs -put -f -p -d /tmp/upfuzz/hdfs/GUBIkxOc/bQfxf /GUBIkxOc/{code}
Perform a read on the old version:
{code:java}
bin/hdfs dfs -ls     -t  -r -u /GUBIkxOc/

Found 1 items
drwxr-xr-x   - 20001 998                 0 2023-04-17 16:15 /GUBIkxOc/bQfxf{code}
Then perform a full-stop upgrade of the entire cluster to 3.3.6, following 
the upgrade procedure on the website: (1) enter safe mode, (2) rolling 
upgrade prepare, (3) leave safe mode. When all nodes have started on the 
new version, perform the same read:
{code:java}
Found 1 items
drwxr-xr-x   - 20001 998                 0 1970-01-01 00:00 /GUBIkxOc/bQfxf{code}
The access timestamp of directory /GUBIkxOc/bQfxf is lost: it changes from 
2023-04-17 16:15 to 1970-01-01 00:00.

PS: The rolling upgrade prepare must happen after the commands above have 
been executed.

I have also attached the required file: +/tmp/upfuzz/hdfs/GUBIkxOc/bQfxf+.
h1. Root Cause

When the FSImage is created, the access time field of a directory is not 
persisted.

If users perform an upgrade without creating an FSImage, this bug does not 
occur, because the access time is stored in the edit log. However, once an 
FSImage is created, all edit logs before that snapshot are invalidated: when 
the new version starts up, it reconstructs the in-memory file system from 
the FSImage alone and ignores those edit logs.

We should make sure the access time of a directory is persisted just as it 
is for files. I have submitted a PR with a fix.


> Directory timestamp lost during the upgrade process
> ---
>
> Key: HDFS-16984
> URL: https://issues.apache.org/jira/browse/HDFS-16984
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.10.2, 3.3.6
>Reporter: Ke Han
>Priority: Major
>  Labels: pull-request-available
> Attachments: GUBIkxOc.tar.gz
>
>
> h1. Symptoms
> The access timestamp of a directory is lost when upgrading an HDFS 
> cluster from 2.10.2 to 3.3.6.
> h1. Reproduce
> Start a four-node HDFS cluster running version 2.10.2.
> Execute the following commands. (The client runs on the NameNode; we have 
> minimized the command sequence needed to reproduce.)
> {code:java}
> bin/hdfs dfs -mkdir /GUBIkxOc
> bin/hdfs dfs -put -f -p -d /tmp/upfuzz/hdfs/GUBIkxOc/bQfxf /GUBIkxOc/{code}
> 

[jira] [Updated] (HDFS-16984) Directory timestamp lost during the upgrade process

2024-09-12 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated HDFS-16984:
--
Description: 
h1. Symptoms

The access timestamp of a directory is lost when upgrading an HDFS 
cluster from 2.10.2 to 3.3.6.
h1. Reproduce

Start a four-node HDFS cluster running version 2.10.2.

Execute the following commands. (The client runs on the NameNode; we have 
minimized the command sequence needed to reproduce.)
{code:java}
bin/hdfs dfs -mkdir /GUBIkxOc
bin/hdfs dfs -put -f -p -d /tmp/upfuzz/hdfs/GUBIkxOc/bQfxf /GUBIkxOc/{code}
Perform a read on the old version:
{code:java}
bin/hdfs dfs -ls     -t  -r -u /GUBIkxOc/

Found 1 items
drwxr-xr-x   - 20001 998                 0 2023-04-17 16:15 /GUBIkxOc/bQfxf{code}
Then perform a full-stop upgrade of the entire cluster to 3.3.6, following 
the upgrade procedure on the website: (1) enter safe mode, (2) rolling 
upgrade prepare, (3) leave safe mode. When all nodes have started on the 
new version, perform the same read:
{code:java}
Found 1 items
drwxr-xr-x   - 20001 998                 0 1970-01-01 00:00 /GUBIkxOc/bQfxf{code}
The access timestamp of directory /GUBIkxOc/bQfxf is lost: it changes from 
2023-04-17 16:15 to 1970-01-01 00:00.

PS: The rolling upgrade prepare must happen after the commands above have 
been executed.

I have also attached the required file: +/tmp/upfuzz/hdfs/GUBIkxOc/bQfxf+.
h1. Root Cause

When the FSImage is created, the access time field of a directory is not 
persisted.

If users perform an upgrade without creating an FSImage, this bug does not 
occur, because the access time is stored in the edit log. However, once an 
FSImage is created, all edit logs before that snapshot are invalidated: when 
the new version starts up, it reconstructs the in-memory file system from 
the FSImage alone and ignores those edit logs.

We should make sure the access time of a directory is persisted just as it 
is for files. I have submitted a PR with a fix.

  was:
h1. Symptoms

The access timestamp of a directory is lost when upgrading an HDFS 
cluster from 2.10.2 to 3.3.6.
h1. Reproduce

Start a four-node HDFS cluster running version 2.10.2.

Execute the following commands. (The client runs on the NameNode; we have 
minimized the command sequence needed to reproduce.)
{code:java}
bin/hdfs dfs -mkdir /GUBIkxOc
bin/hdfs dfs -put -f -p -d /tmp/upfuzz/hdfs/GUBIkxOc/bQfxf /GUBIkxOc/
bin/hdfs dfs -mkdir /GUBIkxOc/sKbTRjvS{code}
Perform a read on the old version:
{code:java}
bin/hdfs dfs -ls     -t  -r -u /GUBIkxOc/

Found 2 items
drwxr-xr-x   - root  supergroup          0 1970-01-01 00:00 /GUBIkxOc/sKbTRjvS
drwxr-xr-x   - 20001 998                 0 2023-04-17 16:15 /GUBIkxOc/bQfxf{code}
Then perform a full-stop upgrade of the entire cluster to 3.3.6, following 
the upgrade procedure on the website: (1) enter safe mode, (2) rolling 
upgrade prepare, (3) leave safe mode. When all nodes have started on the 
new version, perform the same read:
{code:java}
Found 2 items
drwxr-xr-x   - 20001 998                 0 1970-01-01 00:00 /GUBIkxOc/bQfxf
drwxr-xr-x   - root  supergroup          0 1970-01-01 00:00 /GUBIkxOc/sKbTRjvS 
{code}
The access timestamp of directory /GUBIkxOc/bQfxf is lost: it changes from 
2023-04-17 16:15 to 1970-01-01 00:00.

PS: The rolling upgrade prepare must happen after the commands above have 
been executed.

I have also attached the required file: +/tmp/upfuzz/hdfs/GUBIkxOc/bQfxf+.
h1. Root Cause

When the FSImage is created, the access time field of a directory is not 
persisted.

If users perform an upgrade without creating an FSImage, this bug does not 
occur, because the access time is stored in the edit log. However, once an 
FSImage is created, all edit logs before that snapshot are invalidated: when 
the new version starts up, it reconstructs the in-memory file system from 
the FSImage alone and ignores those edit logs.

We should make sure the access time of a directory is persisted just as it 
is for files. I have submitted a PR with a fix.


> Directory timestamp lost during the upgrade process
> ---
>
> Key: HDFS-16984
> URL: https://issues.apache.org/jira/browse/HDFS-16984
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.10.2, 3.3.6
>Reporter: Ke Han
>Priority: Major
>  Labels: pull-request-available
> Attachments: GUBIkxOc.tar.gz
>
>
> h1. Symptoms
> The access timestamp of a directory is lost when upgrading an HDFS 
> cluster from 2.10.2 to 3.3.6.
> h1. Reproduce
> Start a four-node HDFS cluster running version 2.10.2.
> Execute the following commands. (The client runs on the NameNode; we have 
> minimized the command sequence needed to reproduce.)
> {code:java}
> bin/hdfs dfs -mkdir /GUB

[jira] [Commented] (HDFS-17623) RBF:The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-12 Thread Guo Wei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881453#comment-17881453
 ] 

Guo Wei commented on HDFS-17623:


I claim this issue

> RBF:The router service fails to delete a mount table with multiple 
> subclusters mounted on it through MultipleDestinationMountTableResolver
> --
>
> Key: HDFS-17623
> URL: https://issues.apache.org/jira/browse/HDFS-17623
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Guo Wei
>Priority: Major
>  Labels: pull-request-available
>
> Please see the error message in the following example:
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test1/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test2/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
> hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
> Successfully added mount point /guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     
> Group                     Mode       Quota/Usage
> /guov100/data              
> hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                  
>     hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
> hdfs://test-fed/guov100/data/test
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test1/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test2/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://test-fed/guov100/data/test/
> Found 2 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test1.txt
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ 
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
> hdfs://test-fed/guov100/data/test/
> rm: Failed to move to trash: 
> hdfs://test-fed/guov100/data/test: Rename of /guov100/data/test to 
> /user/hdfs/.Trash/Current/guov100/data/test is not allowed, no eligible 
> destination in the same namespace was found.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-17623) RBF:The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-12 Thread Guo Wei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guo Wei reopened HDFS-17623:


> RBF:The router service fails to delete a mount table with multiple 
> subclusters mounted on it through MultipleDestinationMountTableResolver
> --
>
> Key: HDFS-17623
> URL: https://issues.apache.org/jira/browse/HDFS-17623
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Guo Wei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Please see the error message in the following example:
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test1/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test2/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
> hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
> Successfully added mount point /guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     
> Group                     Mode       Quota/Usage
> /guov100/data              
> hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                  
>     hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
> hdfs://test-fed/guov100/data/test
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test1/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test2/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://test-fed/guov100/data/test/
> Found 2 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test1.txt
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ 
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
> hdfs://test-fed/guov100/data/test/
> rm: Failed to move to trash: 
> hdfs://test-fed/guov100/data/test: Rename of /guov100/data/test to 
> /user/hdfs/.Trash/Current/guov100/data/test is not allowed, no eligible 
> destination in the same namespace was found.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17623) RBF:The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-12 Thread Guo Wei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guo Wei resolved HDFS-17623.

Fix Version/s: 3.4.0
   Resolution: Works for Me

> RBF:The router service fails to delete a mount table with multiple 
> subclusters mounted on it through MultipleDestinationMountTableResolver
> --
>
> Key: HDFS-17623
> URL: https://issues.apache.org/jira/browse/HDFS-17623
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Guo Wei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Please see the error message in the following example:
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test1/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test2/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
> hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
> Successfully added mount point /guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     
> Group                     Mode       Quota/Usage
> /guov100/data              
> hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                  
>     hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
> hdfs://test-fed/guov100/data/test
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test1/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test2/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://test-fed/guov100/data/test/
> Found 2 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test1.txt
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ 
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
> hdfs://test-fed/guov100/data/test/
> rm: Failed to move to trash: 
> hdfs://test-fed/guov100/data/test: Rename of /guov100/data/test to 
> /user/hdfs/.Trash/Current/guov100/data/test is not allowed, no eligible 
> destination in the same namespace was found.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17623) RBF:The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-12 Thread Guo Wei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guo Wei updated HDFS-17623:
---
Description: 
Please see the error message in the following example:

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
hdfs://hh-rbf-test1/guov100/data
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
hdfs://hh-rbf-test2/guov100/data

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
Successfully added mount point /guov100/data

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
Mount Table Entries:
Source                    Destinations              Owner                     
Group                     Mode       Quota/Usage
/guov100/data              
hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                    
  hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
hdfs://test-fed/guov100/data/test
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
hdfs://hh-rbf-test1/guov100/data/test/
Found 1 items
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
hdfs://hh-rbf-test2/guov100/data/test/
Found 1 items
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls hdfs://test-fed/guov100/data/test/
Found 2 items
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://test-fed/guov100/data/test/file-test1.txt
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://test-fed/guov100/data/test/file-test2.txt
[hdfs@sjsy-hh202-zbxh55w root]$ 
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
hdfs://test-fed/guov100/data/test/
rm: Failed to move to trash: hdfs://test-fed/guov100/data/test: 
Rename of /guov100/data/test to /user/hdfs/.Trash/Current/guov100/data/test is 
not allowed, no eligible destination in the same namespace was found.
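The error message above indicates that a move to trash is a rename, and that the Router only performs it when some namespace holds both the source path and the trash destination. With /guov100/data fanned out to two subclusters while the trash root resolves elsewhere, no common namespace exists. A simplified model of that eligibility check (class and nameservice names here are hypothetical, not the actual Router code):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class TrashEligibility {
    // A mount resolution: a nameservice plus a path inside it.
    record Destination(String nameservice, String path) {}

    // A rename into trash can only run in a namespace that holds
    // both the source path and the trash root.
    static Optional<String> eligibleNamespace(List<Destination> source,
                                              List<Destination> trashRoot) {
        Set<String> trashNs = new HashSet<>();
        for (Destination d : trashRoot) {
            trashNs.add(d.nameservice());
        }
        for (Destination d : source) {
            if (trashNs.contains(d.nameservice())) {
                return Optional.of(d.nameservice());
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // /guov100/data/test resolves to both subclusters...
        List<Destination> source = List.of(
                new Destination("hh-rbf-test1", "/guov100/data/test"),
                new Destination("hh-rbf-test2", "/guov100/data/test"));
        // ...but suppose the trash path resolves only to a namespace
        // that is not among the mount's destinations.
        List<Destination> trash = List.of(
                new Destination("ns-default",
                        "/user/hdfs/.Trash/Current/guov100/data/test"));
        // No common namespace, so the rename into trash is refused.
        System.out.println(eligibleNamespace(source, trash).isPresent());
    }
}
```

Under this model the rename succeeds only if the trash path also resolves into hh-rbf-test1 or hh-rbf-test2; otherwise the Router reports "no eligible destination in the same namespace was found".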

 

  was:
Please see the error message in the following example:

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
hdfs://hh-rbf-test1/guov100/data
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
hdfs://hh-rbf-test2/guov100/data

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
Successfully added mount point /guov100/data

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
Mount Table Entries:
Source                    Destinations              Owner                     
Group                     Mode       Quota/Usage
/guov100/data              
hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                    
  hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
hdfs://test-fed/guov100/data/test
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
hdfs://hh-rbf-test1/guov100/data/test/
Found 1 items
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
hdfs://hh-rbf-test2/guov100/data/test/
Found 1 items
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt

[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls hdfs://test-fed/guov100/data/test/
Found 2 items
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://test-fed/guov100/data/test/file-test1.txt
-rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
hdfs://test-fed/guov100/data/test/file-test2.txt
[hdfs@sjsy-hh202-zbxh55w root]$ 
[hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
hdfs://test-fed/guov100/data/test/
rm: Failed to move to trash: hdfs://test-fed/guov100/data/test: Rename of 
/guov100/data/test to /user/hdfs/.Trash/Current/guov100/data/test is not 
allowed, no eligible destination in the same namespace was found.

 


> RBF:The router service fails to delete a mount table with multiple 
> subclusters mounted on it through MultipleDestinationMountTableResolver
> --
>
> Key: HDFS-17623
> URL: https://issues.apache.org/jira/browse/HDFS-17623
> Project: Hadoop HDFS
>  Issue Type: Bug
>

[jira] [Updated] (HDFS-17623) RBF:The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-12 Thread Guo Wei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guo Wei updated HDFS-17623:
---
External issue ID:   (was: HDFS-16024)

> RBF:The router service fails to delete a mount table with multiple 
> subclusters mounted on it through MultipleDestinationMountTableResolver
> --
>
> Key: HDFS-17623
> URL: https://issues.apache.org/jira/browse/HDFS-17623
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Guo Wei
>Priority: Major
>  Labels: pull-request-available
>
> Please see the error message in the following example:
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test1/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test2/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
> hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
> Successfully added mount point /guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     
> Group                     Mode       Quota/Usage
> /guov100/data              
> hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                  
>     hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
> hdfs://test-fed/guov100/data/test
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test1/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test2/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://test-fed/guov100/data/test/
> Found 2 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test1.txt
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ 
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
> hdfs://test-fed/guov100/data/test/
> rm: Failed to move to trash: hdfs://test-fed/guov100/data/test: Rename of 
> /guov100/data/test to /user/hdfs/.Trash/Current/guov100/data/test is not 
> allowed, no eligible destination in the same namespace was found.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17623) RBF:The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-12 Thread Guo Wei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guo Wei updated HDFS-17623:
---
External issue URL:   (was: 
https://issues.apache.org/jira/browse/HDFS-16024)

> RBF:The router service fails to delete a mount table with multiple 
> subclusters mounted on it through MultipleDestinationMountTableResolver
> --
>
> Key: HDFS-17623
> URL: https://issues.apache.org/jira/browse/HDFS-17623
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Guo Wei
>Priority: Major
>  Labels: pull-request-available
>
> Please see the error message in the following example:
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test1/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test2/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
> hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
> Successfully added mount point /guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     
> Group                     Mode       Quota/Usage
> /guov100/data              
> hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                  
>     hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
> hdfs://test-fed/guov100/data/test
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test1/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test2/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://test-fed/guov100/data/test/
> Found 2 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test1.txt
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ 
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
> hdfs://test-fed/guov100/data/test/
> rm: Failed to move to trash: hdfs://test-fed/guov100/data/test: Rename of 
> /guov100/data/test to /user/hdfs/.Trash/Current/guov100/data/test is not 
> allowed, no eligible destination in the same namespace was found.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17623) RBF: The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-12 Thread Guo Wei (Jira)
Guo Wei created HDFS-17623:
--

 Summary: RBF: The router service fails to delete a mount table with 
multiple subclusters mounted on it through MultipleDestinationMountTableResolver
 Key: HDFS-17623
 URL: https://issues.apache.org/jira/browse/HDFS-17623
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Affects Versions: 3.4.0
Reporter: Guo Wei








[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881356#comment-17881356
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

hadoop-yetus commented on PR #6871:
URL: https://github.com/apache/hadoop/pull/6871#issuecomment-2346698787

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 48s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  20m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 49s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  27m 16s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 109m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6871/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 13d151344b1a 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / 46255050b486a73d12458651c4b6390e534576cd |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6871/19/testReport/ |
   | Max. process+thread count | 3856 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console ou

[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881268#comment-17881268
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

KeeProMise opened a new pull request, #6871:
URL: https://github.com/apache/hadoop/pull/6871

   
   
   ### Description of PR
   please see: https://issues.apache.org/jira/browse/HDFS-17545
   NOTE: This is a sub-pull request (PR) related to 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531)(Asynchronous 
router RPC). For more details or context, please refer to the main issue 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531)
   More detailed documentation: 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531) Router 
asynchronous rpc implementation.pdf and Aynchronous router.pdf
   
   **Main modifications:**
   
   - RouterRpcClient.java: The original functionality remains unchanged; common 
methods have been extracted from the original methods.
   - RouterAsyncRpcClient.java: An asynchronous implementation of 
RouterRpcClient, inheriting from RouterRpcClient.
   - Added configuration for asynchronous feature toggle, as well as the number 
of asynchronous handlers and responders.
   - Using ThreadLocalContext to maintain thread local variables, ensuring that 
thread local variables can be correctly passed between handler, 
asyncRouterHandler, and asyncRouterResponder.
   
   ### How was this patch tested?
   new UT TestRouterAsyncRpcClient
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> [ARR] router async rpc client.
> --
>
> Key: HDFS-17545
> URL: https://issues.apache.org/jira/browse/HDFS-17545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> 1. Mainly uses AsyncUtil to implement {*}RouterAsyncRpcClient{*}; this class 
> extends RouterRpcClient, enabling the {*}invokeAll{*}, {*}invokeMethod{*}, 
> {*}invokeSequential{*}, {*}invokeConcurrent{*}, and *invokeSingle* methods 
> to support asynchrony.
> 2. Use two thread pools, *asyncRouterHandler* and {*}asyncRouterResponder{*}, 
> to handle asynchronous requests and responses, respectively.
> 3. Added {*}DFS_ROUTER_RPC_ENABLE_ASYNC{*}, 
> {*}DFS_ROUTER_RPC_ASYNC_HANDLER_COUNT{*}, 
> *DFS_ROUTER_RPC_ASYNC_RESPONDER_COUNT_DEFAULT* to configure whether to use 
> async router, as well as the number of asyncRouterHandlers and 
> asyncRouterResponders.
> 4. Using *ThreadLocalContext* to maintain thread local variables, ensuring 
> that thread local variables can be correctly passed between handler, 
> asyncRouterHandler, and asyncRouterResponder.
>  
> *Test*
> new UT TestRouterAsyncRpcClient
> Note: For discussions on *AsyncUtil* and client {*}protocolPB{*}, please 
> refer to HDFS-17543 and HDFS-17544.
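The thread-local handoff described in point 4 can be sketched as follows. This is a minimal illustration only and assumes nothing about the real ThreadLocalContext class in the patch: a snapshot of a thread-local is captured on the calling (handler) thread and re-applied on the pool (asyncRouterHandler/asyncRouterResponder) thread, so per-call context survives the executor handoff.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified model, not the Hadoop implementation: capture a thread-local on
// the handler thread and restore it on the async pool thread.
public class ThreadLocalContextDemo {

    private static final ThreadLocal<String> CALL_CONTEXT = new ThreadLocal<>();

    /** Snapshot of the current thread's context, restorable on another thread. */
    static final class ThreadLocalContext {
        private final String snapshot = CALL_CONTEXT.get(); // captured at construction

        void transfer() {
            CALL_CONTEXT.set(snapshot); // re-apply on whichever thread runs this
        }
    }

    /** Runs work on the pool while carrying the caller's context across threads. */
    static String invokeAsync(ExecutorService pool, String caller) throws Exception {
        CALL_CONTEXT.set(caller);                          // set on the "handler" thread
        ThreadLocalContext ctx = new ThreadLocalContext(); // capture before the handoff
        CompletableFuture<String> result = CompletableFuture.supplyAsync(() -> {
            ctx.transfer();                                // restore on the pool thread
            return CALL_CONTEXT.get();
        }, pool);
        return result.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            System.out.println(invokeAsync(pool, "client-42"));
        } finally {
            pool.shutdown();
        }
    }
}
```

Without the capture-and-transfer step, the lambda would see whatever stale value the pool thread last held, which is exactly the class of bug the context-passing design avoids.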






[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881267#comment-17881267
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

KeeProMise closed pull request #6871: HDFS-17545. [ARR] router async rpc client.
URL: https://github.com/apache/hadoop/pull/6871










[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881261#comment-17881261
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

hadoop-yetus commented on PR #6871:
URL: https://github.com/apache/hadoop/pull/6871#issuecomment-2345847507

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | -1 :x: |  mvninstall  |  35m 28s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6871/18/artifact/out/branch-mvninstall-root.txt)
 |  root in HDFS-17531 failed.  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  20m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 32s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  27m 19s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 111m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6871/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux a0ae766166d6 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / 4e7e35471cd5fbad975e3549dd8129ee6d5f791b |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6871/18/testReport/ |
   | Max. process+thread count | 4229

[jira] [Commented] (HDFS-17401) EC: Excess internal block may not be able to be deleted correctly when it's stored in fallback storage

2024-09-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881223#comment-17881223
 ] 

ASF GitHub Bot commented on HDFS-17401:
---

RuinanGu commented on PR #6597:
URL: https://github.com/apache/hadoop/pull/6597#issuecomment-2345638573

   @haiyang1987 Could you please take a look?




> EC: Excess internal block may not be able to be deleted correctly when it's 
> stored in fallback storage
> --
>
> Key: HDFS-17401
> URL: https://issues.apache.org/jira/browse/HDFS-17401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.6
>Reporter: Ruinan Gu
>Assignee: Ruinan Gu
>Priority: Major
>  Labels: pull-request-available
>
> An excess internal block can't be deleted correctly when it's stored in 
> fallback storage.
> Simple case:
> An EC-RS-6-3-1024k file is stored using the ALL_SSD storage policy (SSD is 
> the default storage type and DISK is the fallback storage type). Suppose the 
> block group is as follows:
> [0(SSD), 0(SSD), 1(SSD), 2(SSD), 3(SSD), 4(SSD), 5(SSD), 6(SSD), 7(SSD), 
> 8(DISK)] 
> There are two index-0 internal blocks, and one of them should be chosen for 
> deletion. But the current implementation chooses the index-0 internal blocks 
> as candidates while picking DISK as the excess storage type. As a result, the 
> excess storage type (DISK) does not match the excess internal blocks' storage 
> type (SSD), and the excess internal block cannot be deleted.
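The mismatch can be shown with a small model. This is illustrative only, not Hadoop's placement-policy code, and all names here are hypothetical: replicas of one EC block group are (ecIndex, storageType) pairs, and an excess replica can only be deleted if the chosen excess storage type actually matches one of the duplicated replicas.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative model of excess-replica selection for an EC block group.
// With two copies of index 0 on SSD and the fallback replica on DISK,
// choosing DISK as the excess storage type matches no duplicate, so
// nothing is deleted; the excess type must come from the duplicates.
public class EcExcessModel {

    static final class Replica {
        final int ecIndex;
        final String storageType;

        Replica(int ecIndex, String storageType) {
            this.ecIndex = ecIndex;
            this.storageType = storageType;
        }
    }

    /** Returns the replicas whose EC index appears more than once in the group. */
    static List<Replica> duplicates(List<Replica> group) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (Replica r : group) {
            counts.merge(r.ecIndex, 1, Integer::sum);
        }
        List<Replica> dup = new ArrayList<>();
        for (Replica r : group) {
            if (counts.get(r.ecIndex) > 1) {
                dup.add(r);
            }
        }
        return dup;
    }

    /** Picks one excess replica, constrained to the given excess storage type. */
    static Optional<Replica> chooseExcess(List<Replica> group, String excessType) {
        return duplicates(group).stream()
            .filter(r -> r.storageType.equals(excessType))
            .findFirst();
    }
}
```

For the block group in the report, `chooseExcess(group, "DISK")` comes back empty (no duplicate lives on DISK), while `chooseExcess(group, "SSD")` finds a deletable index-0 replica, mirroring the mismatch the issue describes.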






[jira] [Commented] (HDFS-17556) Avoid adding block to neededReconstruction repeatedly in decommission

2024-09-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880927#comment-17880927
 ] 

ASF GitHub Bot commented on HDFS-17556:
---

lfxy commented on PR #6896:
URL: https://github.com/apache/hadoop/pull/6896#issuecomment-2343274503

   @Hexiaoqiao Could you help to merge this commit?




> Avoid adding block to neededReconstruction repeatedly in decommission
> -
>
> Key: HDFS-17556
> URL: https://issues.apache.org/jira/browse/HDFS-17556
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.5.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Major
>  Labels: pull-request-available
>
> During decommission and maintenance, a block is checked before being added to 
> BlockManager::neededReconstruction to see whether it has already been queued. 
> As the code below shows, the check only covers whether the block is in 
> BlockManager::neededReconstruction or in 
> PendingReconstructionBlocks::pendingReconstructions. 
> It should also check whether the block is in 
> PendingReconstructionBlocks::timedOutItems; otherwise 
> DatanodeAdminDefaultMonitor will add the block to 
> BlockManager::neededReconstruction repeatedly whenever it times out in 
> PendingReconstructionBlocks::pendingReconstructions.
>  
> {code:java}
> if (!blockManager.neededReconstruction.contains(block) &&
> blockManager.pendingReconstruction.getNumReplicas(block) == 0 &&
> blockManager.isPopulatingReplQueues()) {
>   // Process these blocks only when active NN is out of safe mode.
>   blockManager.neededReconstruction.add(block,
>   liveReplicas, num.readOnlyReplicas(),
>   num.outOfServiceReplicas(),
>   blockManager.getExpectedRedundancyNum(block));
> } {code}
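A minimal model of the check above, using plain Java collections rather than the real BlockManager and PendingReconstructionBlocks classes (all names here are illustrative): once a block times out of pendingReconstructions it sits in timedOutItems and is in neither of the two queues the current check consults, so the two-part check lets it be re-added, while an extra timedOutItems lookup prevents the duplicate.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified model of the decommission-monitor check; not actual Hadoop code.
public class ReconstructionQueueModel {
    final Set<String> neededReconstruction = new HashSet<>();
    final Map<String, Integer> pendingReconstruction = new HashMap<>();
    final Set<String> timedOutItems = new HashSet<>();

    /** Simulates a pending reconstruction exceeding its timeout. */
    void timeOut(String block) {
        pendingReconstruction.remove(block);
        timedOutItems.add(block);
    }

    /** Current check: blind to blocks parked in timedOutItems. */
    boolean shouldAddWithoutTimeoutCheck(String block) {
        return !neededReconstruction.contains(block)
            && pendingReconstruction.getOrDefault(block, 0) == 0;
    }

    /** Proposed check: a timed-out block is already tracked for retry. */
    boolean shouldAddWithTimeoutCheck(String block) {
        return shouldAddWithoutTimeoutCheck(block)
            && !timedOutItems.contains(block);
    }
}
```

In this model a timed-out block passes the old check (true, so it would be queued again) but fails the proposed one, which is the duplicate-add the issue wants to avoid.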






[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880868#comment-17880868
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

KeeProMise commented on PR #6871:
URL: https://github.com/apache/hadoop/pull/6871#issuecomment-2342733716

   Hi, @goiri @simbadzina @Hexiaoqiao @sjlee @ayushtkn @haiyang1987 @ZanderXu, 
this PR has been blocked for 2 months; if anyone has time, please help review 
it, since several subtask PRs depend on it. This PR does not modify the 
existing synchronous router logic. To facilitate review, my **main 
modifications** are as follows:
   - RouterRpcClient.java: The original functionality and logic are unchanged; 
I have only extracted some common methods.
   - RouterAsyncRpcClient.java: An asynchronous implementation of 
RouterRpcClient.
   - Added configuration for the asynchronous feature toggle, as well as the 
number of asynchronous handlers and responders.
   - Using ThreadLocalContext to maintain thread local variables, ensuring that 
thread local variables can be correctly passed between handler, 
asyncRouterHandler, and asyncRouterResponder.
   
   The PRs that depend on this PR are:
   https://github.com/apache/hadoop/pull/6994
   https://github.com/apache/hadoop/pull/6988
   https://github.com/apache/hadoop/pull/6986
   https://github.com/apache/hadoop/pull/6983










[jira] [Commented] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880792#comment-17880792
 ] 

ASF GitHub Bot commented on HDFS-17381:
---

hadoop-yetus commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-2342056697

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   8m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   8m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   8m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m  5s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/12/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 8 new + 142 unchanged - 0 fixed = 150 total (was 
142)  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  3s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  24m 24s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 183m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6551 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ad0713a08e22 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3c0b1fb6757a53640167af52c3826258bd7cf21b |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/ha

[jira] [Commented] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880684#comment-17880684
 ] 

ASF GitHub Bot commented on HDFS-17381:
---

hadoop-yetus commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-2341197524

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 10s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   9m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   2m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   9m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   9m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m 11s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 9 new + 142 unchanged - 0 fixed = 151 total (was 
142)  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 37s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/10/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  
hadoop-common-project_hadoop-common-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
 with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 22s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  6s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  24m 31s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 190m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6551 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 202a1bb972c8 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Persona

[jira] [Commented] (HDFS-17609) [FGL] Fix lock mode in some RPC

2024-09-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880581#comment-17880581
 ] 

ASF GitHub Bot commented on HDFS-17609:
---

hadoop-yetus commented on PR #7037:
URL: https://github.com/apache/hadoop/pull/7037#issuecomment-2340247121

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | -1 :x: |  mvninstall  |   5m 54s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/branch-mvninstall-root.txt)
 |  root in HDFS-17384 failed.  |
   | -1 :x: |  compile  |   1m 34s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  compile  |   0m 41s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK Private 
Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  HDFS-17384 passed  |
   | -1 :x: |  mvnsite  |   0m 47s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in HDFS-17384 failed.  |
   | -1 :x: |  javadoc  |   0m 44s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  javadoc  |   0m 35s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-hdfs in HDFS-17384 failed with JDK Private 
Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | -1 :x: |  spotbugs  |   0m 43s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in HDFS-17384 failed.  |
   | -1 :x: |  shadedclient  |   4m 55s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 41s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 44s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  javac  |   0m 44s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7037/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | -1 :x: |  compile  |   0m 42s |

[jira] [Commented] (HDFS-17609) [FGL] Fix lock mode in some RPC

2024-09-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880560#comment-17880560
 ] 

ASF GitHub Bot commented on HDFS-17609:
---

hfutatzhanghb commented on PR #7037:
URL: https://github.com/apache/hadoop/pull/7037#issuecomment-2340112640

   @ZanderXu @ferhui Sir,  PTAL if you have free time~ Thanks a lot.




> [FGL] Fix lock mode in some RPC
> ---
>
> Key: HDFS-17609
> URL: https://issues.apache.org/jira/browse/HDFS-17609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> * FSNamesystem#getFilesBlockingDecom : use GLOBAL mode rather than FS mode 
> because of below codes:
> ```java
> for (DatanodeDescriptor dataNode :
>         blockManager.getDatanodeManager().getDatanodes()) {
> }
> ```
>  
>  * FSNamesystem#listOpenFiles: use GLOBAL mode because it calls 
> getFilesBlockingDecom method.
>  * FSNamesystem#getContentSummary: use GLOBAL mode rather than FS mode 
> because it calls computeFileSize method.
>  * BlockManagerSafeMode#leaveSafeMode: use GLOBAL mode rather than BM mode 
> because it calls startSecretManagerIfNecessary which depended upon FS lock. 
> Change to GLOBAL mode is safe here.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


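The lock-mode reasoning above (FS-only operations take the namespace lock, BM-only operations take the block-manager lock, and RPCs that cross both subsystems must take GLOBAL mode) can be sketched as follows. This is a minimal illustrative model, not Hadoop's actual FGL implementation: the class and method names (`FglSketch`, `readLock`, `filesBlockingDecom`) are hypothetical, and the real `FSNamesystem` locking is far more involved.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the fine-grained locking (FGL) idea discussed above.
// An RPC such as getFilesBlockingDecom iterates DatanodeDescriptors (block
// manager state) while reporting on namespace state, so the FS lock alone is
// insufficient and GLOBAL mode (both locks) is required.
public class FglSketch {
  enum LockMode { FS, BM, GLOBAL }

  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock();

  void readLock(LockMode mode) {
    // GLOBAL acquires both locks in a fixed order (FS first) to avoid deadlock.
    if (mode == LockMode.FS || mode == LockMode.GLOBAL) {
      fsLock.readLock().lock();
    }
    if (mode == LockMode.BM || mode == LockMode.GLOBAL) {
      bmLock.readLock().lock();
    }
  }

  void readUnlock(LockMode mode) {
    // Release in the reverse order of acquisition.
    if (mode == LockMode.BM || mode == LockMode.GLOBAL) {
      bmLock.readLock().unlock();
    }
    if (mode == LockMode.FS || mode == LockMode.GLOBAL) {
      fsLock.readLock().unlock();
    }
  }

  // Placeholder for an RPC that touches both namespace and datanode state.
  int filesBlockingDecom() {
    readLock(LockMode.GLOBAL);
    try {
      return 0; // would iterate datanodes and open files here
    } finally {
      readUnlock(LockMode.GLOBAL);
    }
  }

  public static void main(String[] args) {
    System.out.println(new FglSketch().filesBlockingDecom());
  }
}
```

The same shape explains the other items in the description: `listOpenFiles` and `getContentSummary` need GLOBAL mode because they transitively call into code guarded by the other lock.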

[jira] [Updated] (HDFS-17609) [FGL] Fix lock mode in some RPC

2024-09-10 Thread farmmamba (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

farmmamba updated HDFS-17609:
-
Description: 
* FSNamesystem#getFilesBlockingDecom : use GLOBAL mode rather than FS mode 
because of below codes:

```java

for (DatanodeDescriptor dataNode :
        blockManager.getDatanodeManager().getDatanodes()) {

}

```

 
 * FSNamesystem#listOpenFiles: use GLOBAL mode because it calls 
getFilesBlockingDecom method.
 * FSNamesystem#getContentSummary: use GLOBAL mode rather than FS mode because 
it calls computeFileSize method.
 * BlockManagerSafeMode#leaveSafeMode: use GLOBAL mode rather than BM mode 
because it calls startSecretManagerIfNecessary which depended upon FS lock. 
Change to GLOBAL mode is safe here.

 

  was:
* FSNamesystem#getFilesBlockingDecom : use GLOBAL mode rather than FS mode 
because of below codes:

```java

for (DatanodeDescriptor dataNode :
        blockManager.getDatanodeManager().getDatanodes()) {

}

```

 
 * FSNamesystem#listOpenFiles: use GLOBAL mode because it calls 
getFilesBlockingDecom method.
 * FSNamesystem#getContentSummary: use GLOBAL mode rather than FS mode because 
it calls computeFileSize method.
 * BlockManagerSafeMode#leaveSafeMode: use GLOBAL mode rather than BM mode 
because it calls startSecretManagerIfNecessary which depended upon FS lock. 
Change to GLOBAL

mode is safe here.

 


> [FGL] Fix lock mode in some RPC
> ---
>
> Key: HDFS-17609
> URL: https://issues.apache.org/jira/browse/HDFS-17609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> * FSNamesystem#getFilesBlockingDecom : use GLOBAL mode rather than FS mode 
> because of below codes:
> ```java
> for (DatanodeDescriptor dataNode :
>         blockManager.getDatanodeManager().getDatanodes()) {
> }
> ```
>  
>  * FSNamesystem#listOpenFiles: use GLOBAL mode because it calls 
> getFilesBlockingDecom method.
>  * FSNamesystem#getContentSummary: use GLOBAL mode rather than FS mode 
> because it calls computeFileSize method.
>  * BlockManagerSafeMode#leaveSafeMode: use GLOBAL mode rather than BM mode 
> because it calls startSecretManagerIfNecessary which depended upon FS lock. 
> Change to GLOBAL mode is safe here.
>  






[jira] [Updated] (HDFS-17609) [FGL] Fix lock mode in some RPC

2024-09-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17609:
--
Labels: pull-request-available  (was: )

> [FGL] Fix lock mode in some RPC
> ---
>
> Key: HDFS-17609
> URL: https://issues.apache.org/jira/browse/HDFS-17609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> * FSNamesystem#getFilesBlockingDecom : use GLOBAL mode rather than FS mode 
> because of below codes:
> ```java
> for (DatanodeDescriptor dataNode :
>         blockManager.getDatanodeManager().getDatanodes()) {
> }
> ```
>  
>  * FSNamesystem#listOpenFiles: use GLOBAL mode because it calls 
> getFilesBlockingDecom method.
>  * FSNamesystem#getContentSummary: use GLOBAL mode rather than FS mode 
> because it calls computeFileSize method.
>  






[jira] [Commented] (HDFS-17609) [FGL] Fix lock mode in some RPC

2024-09-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880555#comment-17880555
 ] 

ASF GitHub Bot commented on HDFS-17609:
---

hfutatzhanghb opened a new pull request, #7037:
URL: https://github.com/apache/hadoop/pull/7037

   ### Description of PR
   Fix lock mode in some RPCs and add some assert statement.




> [FGL] Fix lock mode in some RPC
> ---
>
> Key: HDFS-17609
> URL: https://issues.apache.org/jira/browse/HDFS-17609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>
> * FSNamesystem#getFilesBlockingDecom : use GLOBAL mode rather than FS mode 
> because of below codes:
> ```java
> for (DatanodeDescriptor dataNode :
>         blockManager.getDatanodeManager().getDatanodes()) {
> }
> ```
>  
>  * FSNamesystem#listOpenFiles: use GLOBAL mode because it calls 
> getFilesBlockingDecom method.
>  * FSNamesystem#getContentSummary: use GLOBAL mode rather than FS mode 
> because it calls computeFileSize method.
>  





