[jira] [Updated] (HDFS-16738) Invalid CallerContext caused NullPointerException

2022-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16738:
--
Labels: pull-request-available  (was: )

> Invalid CallerContext caused NullPointerException
> -
>
> Key: HDFS-16738
> URL: https://issues.apache.org/jira/browse/HDFS-16738
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Critical
>  Labels: pull-request-available
>
> {code:java}
> 2022-08-23 11:58:03,258 [FSEditLogAsync] ERROR namenode.FSEditLog 
> (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: write op failed 
> for required journal (JournalAndStream(mgr=QJM to [127.0.0.1:55779, 
> 127.0.0.1:55781, 127.0.0.1:55783], stream=QuorumOutputStream starting at txid 
> 1))
> java.lang.NullPointerException
>   at org.apache.hadoop.io.UTF8.set(UTF8.java:97)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.writeString(FSImageSerialization.java:361)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddCloseOp.writeFields(FSEditLogOp.java:586)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Writer.writeOp(FSEditLogOp.java:4986)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer$TxnBuffer.writeOp(EditsDoubleBuffer.java:158)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.writeOp(EditsDoubleBuffer.java:61)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.write(QuorumOutputStream.java:50)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$1.apply(JournalSet.java:462)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.access$200(JournalSet.java:56)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.write(JournalSet.java:458)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:496)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:311)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:253)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
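For context: org.apache.hadoop.io.UTF8.set dereferences its argument immediately, so a null string reaching FSImageSerialization.writeString (here apparently a value derived from the invalid CallerContext) fails at exactly the point shown in the trace. A minimal, hypothetical guard sketch (not the actual patch):

{code:java}
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.UTF8;

public class WriteStringGuard {
  // Hypothetical defensive variant of a writeString-style helper: fail fast
  // with a descriptive message instead of letting UTF8.set throw a bare NPE
  // deep inside the edit-log writer.
  static void writeString(String str, DataOutput out) throws IOException {
    if (str == null) {
      throw new IOException("Refusing to serialize a null string into the edit log");
    }
    new UTF8(str).write(out);
  }
}
{code}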






[jira] [Commented] (HDFS-16738) Invalid CallerContext caused NullPointerException

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583331#comment-17583331
 ] 

ASF GitHub Bot commented on HDFS-16738:
---

ZanderXu opened a new pull request, #4791:
URL: https://github.com/apache/hadoop/pull/4791

   ### Description of PR
   Invalid CallerContext caused NullPointerException.
   ```
   2022-08-23 11:58:03,258 [FSEditLogAsync] ERROR namenode.FSEditLog 
(JournalSet.java:mapJournalsAndReportErrors(398)) - Error: write op failed for 
required journal (JournalAndStream(mgr=QJM to [127.0.0.1:55779, 
127.0.0.1:55781, 127.0.0.1:55783], stream=QuorumOutputStream starting at txid 
1))
   java.lang.NullPointerException
at org.apache.hadoop.io.UTF8.set(UTF8.java:97)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.writeString(FSImageSerialization.java:361)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddCloseOp.writeFields(FSEditLogOp.java:586)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Writer.writeOp(FSEditLogOp.java:4986)
at 
org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer$TxnBuffer.writeOp(EditsDoubleBuffer.java:158)
at 
org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.writeOp(EditsDoubleBuffer.java:61)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.write(QuorumOutputStream.java:50)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$1.apply(JournalSet.java:462)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.access$200(JournalSet.java:56)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.write(JournalSet.java:458)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:496)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:311)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:253)
at java.lang.Thread.run(Thread.java:748)
   ```
   




> Invalid CallerContext caused NullPointerException
> -
>
> Key: HDFS-16738
> URL: https://issues.apache.org/jira/browse/HDFS-16738
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Critical
>
> {code:java}
> 2022-08-23 11:58:03,258 [FSEditLogAsync] ERROR namenode.FSEditLog 
> (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: write op failed 
> for required journal (JournalAndStream(mgr=QJM to [127.0.0.1:55779, 
> 127.0.0.1:55781, 127.0.0.1:55783], stream=QuorumOutputStream starting at txid 
> 1))
> java.lang.NullPointerException
>   at org.apache.hadoop.io.UTF8.set(UTF8.java:97)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.writeString(FSImageSerialization.java:361)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddCloseOp.writeFields(FSEditLogOp.java:586)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Writer.writeOp(FSEditLogOp.java:4986)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer$TxnBuffer.writeOp(EditsDoubleBuffer.java:158)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.writeOp(EditsDoubleBuffer.java:61)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.write(QuorumOutputStream.java:50)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$1.apply(JournalSet.java:462)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.access$200(JournalSet.java:56)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.write(JournalSet.java:458)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:496)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:311)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:253)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (HDFS-16734) RBF: fix some bugs when handling getContentSummary RPC

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583332#comment-17583332
 ] 

ASF GitHub Bot commented on HDFS-16734:
---

hadoop-yetus commented on PR #4763:
URL: https://github.com/apache/hadoop/pull/4763#issuecomment-1223510365

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  34m 55s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 134m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4763 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 57021a89bb55 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3a4dfe7e5677c93ac28a7581f0a0e3c265cb5168 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/6/testReport/ |
   | Max. process+thread count | 2786 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Created] (HDFS-16738) Invalid CallerContext caused NullPointerException

2022-08-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-16738:
---

 Summary: Invalid CallerContext caused NullPointerException
 Key: HDFS-16738
 URL: https://issues.apache.org/jira/browse/HDFS-16738
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ZanderXu
Assignee: ZanderXu


{code:java}
2022-08-23 11:58:03,258 [FSEditLogAsync] ERROR namenode.FSEditLog 
(JournalSet.java:mapJournalsAndReportErrors(398)) - Error: write op failed for 
required journal (JournalAndStream(mgr=QJM to [127.0.0.1:55779, 
127.0.0.1:55781, 127.0.0.1:55783], stream=QuorumOutputStream starting at txid 
1))
java.lang.NullPointerException
at org.apache.hadoop.io.UTF8.set(UTF8.java:97)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.writeString(FSImageSerialization.java:361)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddCloseOp.writeFields(FSEditLogOp.java:586)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Writer.writeOp(FSEditLogOp.java:4986)
at 
org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer$TxnBuffer.writeOp(EditsDoubleBuffer.java:158)
at 
org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.writeOp(EditsDoubleBuffer.java:61)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.write(QuorumOutputStream.java:50)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$1.apply(JournalSet.java:462)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.access$200(JournalSet.java:56)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.write(JournalSet.java:458)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:496)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:311)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:253)
at java.lang.Thread.run(Thread.java:748)
{code}







[jira] [Commented] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583326#comment-17583326
 ] 

ASF GitHub Bot commented on HDFS-16686:
---

hadoop-yetus commented on PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#issuecomment-1223491314

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 17 unchanged - 
16 fixed = 21 total (was 33)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 237m 47s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 348m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4724 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1bb5f411c397 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a0e2cac08deed9268a3269b9c2d2e4b3c7a0bd90 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/7/testReport/ |
   | Max. process+thread count | 3016 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[jira] [Commented] (HDFS-16687) RouterFsckServlet replicates code from DfsServlet base class

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583321#comment-17583321
 ] 

ASF GitHub Bot commented on HDFS-16687:
---

sunchao merged PR #4790:
URL: https://github.com/apache/hadoop/pull/4790




> RouterFsckServlet replicates code from DfsServlet base class
> 
>
> Key: HDFS-16687
> URL: https://issues.apache.org/jira/browse/HDFS-16687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> RouterFsckServlet replicates the method "getUGI(HttpServletRequest request, 
> Configuration conf)" from DfsServlet instead of just extending DfsServlet.
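A hedged sketch of the refactor described in the issue (schematic only; it assumes DfsServlet is visible to the router package, and elides the servlet's other members):

{code:java}
// Sketch: inherit getUGI(HttpServletRequest, Configuration) from DfsServlet
// instead of keeping a copied implementation in RouterFsckServlet.
public class RouterFsckServlet extends DfsServlet {
  // The duplicated getUGI(...) method is deleted; callers are unchanged
  // because the inherited DfsServlet method has the same signature.
}
{code}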






[jira] [Commented] (HDFS-16625) Unit tests aren't checking for PMDK availability

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583316#comment-17583316
 ] 

ASF GitHub Bot commented on HDFS-16625:
---

hadoop-yetus commented on PR #4788:
URL: https://github.com/apache/hadoop/pull/4788#issuecomment-1223481203

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 32s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  26m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 197m 36s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4788/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 312m 44s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4788/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4788 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8d3af3b6d5cc 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / a7e3b0240a440532798e869e8d7d050ad6ceaa28 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4788/1/testReport/ |
   | Max. process+thread count | 3555 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4788/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Unit tests aren't checking for PMDK availability
> 
>
> Key: HDFS-16625
> URL: https://issues.apache.org/jira/browse/HDFS-16625
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There are unit tests that require native PMDK libraries which aren't checking 
> if the library is available, resulting in unsuccessful tests. 

[jira] [Commented] (HDFS-16703) Enable RPC Timeout for some protocols of NameNode.

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583293#comment-17583293
 ] 

ASF GitHub Bot commented on HDFS-16703:
---

ZanderXu commented on PR #4660:
URL: https://github.com/apache/hadoop/pull/4660#issuecomment-1223433624

   @slfan1989 Master, thanks for your review.
   
   > ipc.rpc-timeout.for.refresh-user-mappings.ms, if this is not configured to 
0, what is a better configuration value?
   
   I have added some suggested values in hdfs-default.xml, such as:
   ```
   The default value of 0 indicates that the timeout is disabled. It can be
   set to the same value as ipc.client.rpc-timeout.ms, for example 120s.
   ```
   What do you think of this?
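   
   For illustration, a hedged sketch of how such a key could be read on the
   client side (the key name is taken from the discussion above;
   `Configuration.getTimeDuration` accepts suffixed values like `120s`):
   
   ```java
   import java.util.concurrent.TimeUnit;
   import org.apache.hadoop.conf.Configuration;
   
   public class RefreshTimeoutExample {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // 0 keeps the current behaviour (no timeout); a value such as "120s"
       // mirrors ipc.client.rpc-timeout.ms, as suggested above.
       long timeoutMs = conf.getTimeDuration(
           "ipc.rpc-timeout.for.refresh-user-mappings.ms",
           0, TimeUnit.MILLISECONDS);
       System.out.println("refresh-user-mappings RPC timeout (ms): " + timeoutMs);
     }
   }
   ```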




> Enable RPC Timeout for some protocols of NameNode.
> --
>
> Key: HDFS-16703
> URL: https://issues.apache.org/jira/browse/HDFS-16703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When I read some of the protocol code, I found that only the
> ClientNamenodeProtocolPB proxy is created with an RPC timeout; the other
> protocolPB proxies, such as RefreshAuthorizationPolicyProtocolPB,
> RefreshUserMappingsProtocolPB, RefreshCallQueueProtocolPB,
> GetUserMappingsProtocolPB and NamenodeProtocolPB, are not.
>  
> A proxy without an RPC timeout can block for a long time if the NN machine
> crashes or the network goes bad while writing to or reading from the NN.
>  
> So I feel that we should enable an RPC timeout for all ProtocolPBs.






[jira] [Commented] (HDFS-16734) RBF: fix some bugs when handling getContentSummary RPC

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583285#comment-17583285
 ] 

ASF GitHub Bot commented on HDFS-16734:
---

hadoop-yetus commented on PR #4763:
URL: https://github.com/apache/hadoop/pull/4763#issuecomment-1223411894

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 25s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  35m  7s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 137m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4763 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9e7eebe0cb34 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 64fc26f67e8f4d57f0761a6d6e307e689e502820 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/5/testReport/ |
   | Max. process+thread count | 2791 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
  

[jira] [Commented] (HDFS-16703) Enable RPC Timeout for some protocols of NameNode.

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583271#comment-17583271
 ] 

ASF GitHub Bot commented on HDFS-16703:
---

slfan1989 commented on PR #4660:
URL: https://github.com/apache/hadoop/pull/4660#issuecomment-1223375293

   I'm really worried that these timeout configurations are too much for the
user; from my personal point of view, I wouldn't know how to configure them,
or whether there is any experience we could offer as guidance.




> Enable RPC Timeout for some protocols of NameNode.
> --
>
> Key: HDFS-16703
> URL: https://issues.apache.org/jira/browse/HDFS-16703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When I read some of the protocol code, I found that only the
> ClientNamenodeProtocolPB proxy is created with an RPC timeout; the other
> protocolPB proxies, such as RefreshAuthorizationPolicyProtocolPB,
> RefreshUserMappingsProtocolPB, RefreshCallQueueProtocolPB,
> GetUserMappingsProtocolPB and NamenodeProtocolPB, are not.
>  
> A proxy without an RPC timeout can block for a long time if the NN machine
> crashes or the network goes bad while writing to or reading from the NN.
>  
> So I feel that we should enable an RPC timeout for all ProtocolPBs.






[jira] [Commented] (HDFS-16703) Enable RPC Timeout for some protocols of NameNode.

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583270#comment-17583270
 ] 

ASF GitHub Bot commented on HDFS-16703:
---

slfan1989 commented on PR #4660:
URL: https://github.com/apache/hadoop/pull/4660#issuecomment-1223374325

   @ZanderXu Sorry for the late reply; the code looks fine! Thank you very 
much for your contribution! 
   




> Enable RPC Timeout for some protocols of NameNode.
> --
>
> Key: HDFS-16703
> URL: https://issues.apache.org/jira/browse/HDFS-16703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When I read some of the protocol code, I found that only the
> ClientNamenodeProtocolPB proxy is created with an RPC timeout; the other
> protocolPB proxies, such as RefreshAuthorizationPolicyProtocolPB,
> RefreshUserMappingsProtocolPB, RefreshCallQueueProtocolPB,
> GetUserMappingsProtocolPB and NamenodeProtocolPB, are not.
>  
> A proxy without an RPC timeout can block for a long time if the NN machine
> crashes or the network goes bad while writing to or reading from the NN.
>  
> So I feel that we should enable an RPC timeout for all ProtocolPBs.






[jira] [Commented] (HDFS-16687) RouterFsckServlet replicates code from DfsServlet base class

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583267#comment-17583267
 ] 

ASF GitHub Bot commented on HDFS-16687:
---

hadoop-yetus commented on PR #4790:
URL: https://github.com/apache/hadoop/pull/4790#issuecomment-1223369348

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 16s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 51s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  26m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 43s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 130m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4790/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4790 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9680cefb252a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 5284fe982a544a8d8a25e178e3a2e7591fc7ef63 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4790/1/testReport/ |
   | Max. process+thread count | 1897 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4790/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> RouterFsckServlet replicates code from DfsServlet base class
> 
>
> Key: HDFS-16687
> URL: https://issues.apache.org/jira/browse/HDFS-16687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> RouterFsckServlet replicates the method "getUGI(HttpServletRequest request, 
> Configuration conf)" from DfsServlet instead of just extending DfsServlet.




[jira] [Commented] (HDFS-16724) RBF should support get the information about ancestor mount points

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583247#comment-17583247
 ] 

ASF GitHub Bot commented on HDFS-16724:
---

ZanderXu commented on PR #4719:
URL: https://github.com/apache/hadoop/pull/4719#issuecomment-1223316386

   @goiri Master, can you help me merge this patch into trunk? Then I will rebase 
HDFS-16728 and HDFS-16734 on the latest trunk.




> RBF should support get the information about ancestor mount points
> --
>
> Key: HDFS-16724
> URL: https://issues.apache.org/jira/browse/HDFS-16724
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Suppose an RBF cluster has 2 nameservices and two mount points as below:
>  * /user/ns1 -> ns1 -> /user/ns1
>  * /user/ns2 -> ns2 -> /user/ns2
> Suppose we disable the default nameservice of the RBF cluster and try to
> getFileInfo on the path /user. RBF will throw an IOException to the client
> because it cannot find any locations for the path /user.
> But in this case, RBF should return a valid response to the client, because
> /user has the two sub mount points ns1 and ns2.
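A hedged sketch of the kind of check involved (helper and names are hypothetical, not the actual router code):

{code:java}
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

public class AncestorMountCheck {
  // Before failing getFileInfo for an unresolvable path, check whether the
  // path is an ancestor of any mount point; if so, the router can answer
  // with a synthetic directory entry instead of throwing an IOException.
  static boolean isAncestorOfMountPoint(String path, Set<String> mountPoints) {
    String prefix = path.endsWith("/") ? path : path + "/";
    return mountPoints.stream().anyMatch(m -> m.startsWith(prefix));
  }

  public static void main(String[] args) {
    Set<String> mounts = new TreeSet<>(Arrays.asList("/user/ns1", "/user/ns2"));
    System.out.println(isAncestorOfMountPoint("/user", mounts)); // true
  }
}
{code}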






[jira] [Commented] (HDFS-16703) Enable RPC Timeout for some protocols of NameNode.

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583246#comment-17583246
 ] 

ASF GitHub Bot commented on HDFS-16703:
---

ZanderXu commented on PR #4660:
URL: https://github.com/apache/hadoop/pull/4660#issuecomment-1223313799

   @slfan1989 Master, ping. 




> Enable RPC Timeout for some protocols of NameNode.
> --
>
> Key: HDFS-16703
> URL: https://issues.apache.org/jira/browse/HDFS-16703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When I read some of the protocol code, I found that only the
> ClientNamenodeProtocolPB proxy is created with an RPC timeout; the other
> protocolPB proxies, such as RefreshAuthorizationPolicyProtocolPB,
> RefreshUserMappingsProtocolPB, RefreshCallQueueProtocolPB,
> GetUserMappingsProtocolPB and NamenodeProtocolPB, are not.
>  
> A proxy without an RPC timeout can block for a long time if the NN machine
> crashes or the network goes bad while writing to or reading from the NN.
>  
> So I feel that we should enable an RPC timeout for all ProtocolPBs.






[jira] [Commented] (HDFS-16734) RBF: fix some bugs when handling getContentSummary RPC

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583245#comment-17583245
 ] 

ASF GitHub Bot commented on HDFS-16734:
---

ZanderXu commented on PR #4763:
URL: https://github.com/apache/hadoop/pull/4763#issuecomment-1223311040

   @goiri Sir, thanks for your review and the nice suggestions. I have updated 
the patch; please review it again. Thanks




> RBF: fix some bugs when handling getContentSummary RPC
> --
>
> Key: HDFS-16734
> URL: https://issues.apache.org/jira/browse/HDFS-16734
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Suppose there are mount points as below in RBF without a default 
> namespace.
> ||Source Path||NameSpace||Destination Path||
> |/a/b|ns0|/a/b|
> |/a/b/c|ns0|/a/b/c|
> |/a/b/c/d|ns1|/a/b/c/d|
> Suppose there is a file /a/b/c/file1 with 10MB of data in ns0 and a file 
> /a/b/c/d/file2 with 20MB of data in ns1.
> There are bugs in the handling of some cases:
> ||Case Number||Case||Current Result||Expected Result||
> |1|getContentSummary('/a')|Throws RouterResolveException|2 files and 30MB of data|
> |2|getContentSummary('/a/b')|2 files and 40MB of data|3 files and 40MB of data|
> Bugs for these cases:
> Case 1: If RBF can't find any locations for the path, it should retry with 
> the sub mount points.
> Case 2: RBF shouldn't repeatedly get the content summary from the same 
> namespace for one path and its ancestor, such as from ns0 with /a/b and 
> again from ns0 with /a/b/c.
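A hedged sketch of the Case 2 fix idea (the types and names are hypothetical, not the actual RBF RemoteLocation handling):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ContentSummaryDedupe {
  static final class Loc {
    final String ns; final String path;
    Loc(String ns, String path) { this.ns = ns; this.path = path; }
    @Override public String toString() { return ns + ":" + path; }
  }

  // Keep only locations whose namespace does not already contribute an
  // ancestor path, so files under ns0:/a/b/c are not counted again after
  // ns0:/a/b has been queried.
  static List<Loc> dedupe(List<Loc> locs) {
    List<Loc> kept = new ArrayList<>();
    for (Loc cand : locs) {
      boolean covered = false;
      for (Loc other : locs) {
        if (other != cand && other.ns.equals(cand.ns)
            && isAncestor(other.path, cand.path)) {
          covered = true;
          break;
        }
      }
      if (!covered) {
        kept.add(cand);
      }
    }
    return kept;
  }

  static boolean isAncestor(String anc, String desc) {
    return desc.startsWith(anc.endsWith("/") ? anc : anc + "/");
  }

  public static void main(String[] args) {
    // Mount table from the description: ns0:/a/b, ns0:/a/b/c, ns1:/a/b/c/d
    List<Loc> locs = Arrays.asList(new Loc("ns0", "/a/b"),
        new Loc("ns0", "/a/b/c"), new Loc("ns1", "/a/b/c/d"));
    System.out.println(dedupe(locs)); // [ns0:/a/b, ns1:/a/b/c/d]
  }
}
{code}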






[jira] [Commented] (HDFS-16684) Exclude self from JournalNodeSyncer when using a bind host

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583243#comment-17583243
 ] 

ASF GitHub Bot commented on HDFS-16684:
---

hadoop-yetus commented on PR #4786:
URL: https://github.com/apache/hadoop/pull/4786#issuecomment-1223290224

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 46s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  28m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 219m 30s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4786/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 344m 34s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4786/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4786 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 10685164b950 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / f332c1efed5d0603506285d25a2bfb473eeeb0af |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4786/1/testReport/ |
   | Max. process+thread count | 2180 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4786/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Exclude self from JournalNodeSyncer when using a bind host
> --
>
> Key: HDFS-16684
> URL: https://issues.apache.org/jira/browse/HDFS-16684
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running with Java 11 and bind addresses set to 0.0.0.0.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9

[jira] [Updated] (HDFS-16702) MiniDFSCluster should report cause of exception in assertion error

2022-08-22 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan updated HDFS-16702:
-
Fix Version/s: 3.4.0
   3.3.9

> MiniDFSCluster should report cause of exception in assertion error
> --
>
> Key: HDFS-16702
> URL: https://issues.apache.org/jira/browse/HDFS-16702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: Tests running in the Hadoop dev environment image.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> When the MiniDFSCluster detects that an exception caused an exit, it should 
> include that exception as the cause of the AssertionError that it throws.  
> The current AssertionError simply reports the message "Test resulted in an 
> unexpected exit" and provides a stack trace pointing to the location of the 
> check for an exit exception rather than to the original failure.
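As a hedged illustration (the variable name is hypothetical), `AssertionError` has carried a `(String message, Throwable cause)` constructor since Java 7, so the cause can be attached directly:

{code:java}
// Sketch: attach the recorded exit exception as the cause so the JUnit
// report shows why the cluster exited, not only where the check happened.
if (exitException != null) {
  throw new AssertionError("Test resulted in an unexpected exit", exitException);
}
{code}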






[jira] [Commented] (HDFS-16687) RouterFsckServlet replicates code from DfsServlet base class

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583229#comment-17583229
 ] 

ASF GitHub Bot commented on HDFS-16687:
---

snmvaughan opened a new pull request, #4790:
URL: https://github.com/apache/hadoop/pull/4790

   Backport from trunk.  RouterFsckServlet replicates the method 
"getUGI(HttpServletRequest request, Configuration conf)" from DfsServlet 
instead of just extending DfsServlet.
   
   - [X] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> RouterFsckServlet replicates code from DfsServlet base class
> 
>
> Key: HDFS-16687
> URL: https://issues.apache.org/jira/browse/HDFS-16687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> RouterFsckServlet replicates the method "getUGI(HttpServletRequest request, 
> Configuration conf)" from DfsServlet instead of just extending DfsServlet.






[jira] [Reopened] (HDFS-16687) RouterFsckServlet replicates code from DfsServlet base class

2022-08-22 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan reopened HDFS-16687:
--

Backporting to 3.3

> RouterFsckServlet replicates code from DfsServlet base class
> 
>
> Key: HDFS-16687
> URL: https://issues.apache.org/jira/browse/HDFS-16687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> RouterFsckServlet replicates the method "getUGI(HttpServletRequest request, 
> Configuration conf)" from DfsServlet instead of just extending DfsServlet.






[jira] [Updated] (HDFS-16625) Unit tests aren't checking for PMDK availability

2022-08-22 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan updated HDFS-16625:
-
Fix Version/s: 3.4.0
   3.3.9

> Unit tests aren't checking for PMDK availability
> 
>
> Key: HDFS-16625
> URL: https://issues.apache.org/jira/browse/HDFS-16625
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There are unit tests that require native PMDK libraries but don't check 
> whether the library is available, resulting in unsuccessful tests.  Adding the 
> following to the test setup addresses the problem.
> {code:java}
> assumeTrue("Requires PMDK", NativeIO.POSIX.isPmdkAvailable()); {code}
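For completeness, a minimal JUnit 4 setup sketch built around that assumption (the test class name is hypothetical):

{code:java}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.io.nativeio.NativeIO;
import org.junit.Before;

public class TestPmdkBackedCache {
  @Before
  public void setUp() {
    // Skips (rather than fails) the test when the native PMDK library is absent.
    assumeTrue("Requires PMDK", NativeIO.POSIX.isPmdkAvailable());
  }
}
{code}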






[jira] [Commented] (HDFS-16625) Unit tests aren't checking for PMDK availability

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583223#comment-17583223
 ] 

ASF GitHub Bot commented on HDFS-16625:
---

snmvaughan opened a new pull request, #4788:
URL: https://github.com/apache/hadoop/pull/4788

   Backport from trunk.  There are unit tests that require native PMDK 
libraries but don't check whether the library is available, resulting in 
unsuccessful tests. 
   
   - [X] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Unit tests aren't checking for PMDK availability
> 
>
> Key: HDFS-16625
> URL: https://issues.apache.org/jira/browse/HDFS-16625
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There are unit tests that require native PMDK libraries which aren't checking 
> if the library is available, resulting in unsuccessful tests.  Adding the 
> following in the test setup addresses the problem.
> {code:java}
> assumeTrue("Requires PMDK", NativeIO.POSIX.isPmdkAvailable()); {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16684) Exclude self from JournalNodeSyncer when using a bind host

2022-08-22 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan updated HDFS-16684:
-
Fix Version/s: 3.4.0
   3.3.9

> Exclude self from JournalNodeSyncer when using a bind host
> --
>
> Key: HDFS-16684
> URL: https://issues.apache.org/jira/browse/HDFS-16684
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running with Java 11 and bind addresses set to 0.0.0.0.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The JournalNodeSyncer will include the local instance in syncing when using a 
> bind host (e.g. 0.0.0.0).  There is a mechanism that is supposed to exclude 
> the local instance, but it doesn't recognize the meta-address as a local 
> address.
> Running with bind addresses set to 0.0.0.0, the JournalNodeSyncer will log 
> attempts to sync with itself as part of the normal syncing rotation.  For an 
> HA configuration running 3 JournalNodes, the "other" list used by the 
> JournalNodeSyncer will include 3 proxies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16737) Fix number of threads in FsDatasetAsyncDiskService#addExecutorForVolume

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583222#comment-17583222
 ] 

ASF GitHub Bot commented on HDFS-16737:
---

hadoop-yetus commented on PR #4784:
URL: https://github.com/apache/hadoop/pull/4784#issuecomment-1223150269

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 333m 53s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 452m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4784/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4784 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 5c60f210dffb 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4987d2a33c28fffa706d48724d1f264e48aec89a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4784/1/testReport/ |
   | Max. process+thread count | 2326 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[jira] [Resolved] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2022-08-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-4043.
---
Resolution: Fixed

> Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
> principal name.
> 
>
> Key: HDFS-4043
> URL: https://issues.apache.org/jira/browse/HDFS-4043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha, 
> 3.4.0, 3.3.9
> Environment: CDH4U1 on Ubuntu 12.04
>Reporter: Ahad Rana
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>   Original Estimate: 24h
>  Time Spent: 50m
>  Remaining Estimate: 23h 10m
>
> The Namenode uses the loginAsNameNodeUser method in NameNode.java to login 
> using the hdfs principal. This method in turn invokes SecurityUtil.login with 
> a hostname (last parameter) obtained via a call to InetAddress.getHostName. 
> This call does not always return the fully qualified host name, and thus 
> causes the namenode login to fail due to Kerberos's inability to find a 
> matching hdfs principal in the hdfs.keytab file. Instead it should use 
> InetAddress.getCanonicalHostName. This is consistent with what is used 
> internally by SecurityUtil.java to login in other services, such as the 
> DataNode. 
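
The difference between the two lookups, in a minimal sketch (actual output depends on the host's DNS and /etc/hosts configuration; "nn0.example.com" is illustrative):
{code:java}
import java.net.InetAddress;

public class HostNameDemo {
  public static void main(String[] args) throws Exception {
    InetAddress addr = InetAddress.getLocalHost();
    // May return a short name such as "nn0", which won't match a keytab
    // entry for "hdfs/nn0.example.com@REALM".
    System.out.println(addr.getHostName());
    // Performs a reverse lookup and returns the FQDN, e.g. "nn0.example.com".
    System.out.println(addr.getCanonicalHostName());
  }
}
{code}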



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583217#comment-17583217
 ] 

ASF GitHub Bot commented on HDFS-4043:
--

jojochuang merged PR #4785:
URL: https://github.com/apache/hadoop/pull/4785




> Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
> principal name.
> 
>
> Key: HDFS-4043
> URL: https://issues.apache.org/jira/browse/HDFS-4043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha, 
> 3.4.0, 3.3.9
> Environment: CDH4U1 on Ubuntu 12.04
>Reporter: Ahad Rana
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>   Original Estimate: 24h
>  Time Spent: 50m
>  Remaining Estimate: 23h 10m
>
> The Namenode uses the loginAsNameNodeUser method in NameNode.java to login 
> using the hdfs principal. This method in turn invokes SecurityUtil.login with 
> a hostname (last parameter) obtained via a call to InetAddress.getHostName. 
> This call does not always return the fully qualified host name, and thus 
> causes the namenode login to fail due to Kerberos's inability to find a 
> matching hdfs principal in the hdfs.keytab file. Instead it should use 
> InetAddress.getCanonicalHostName. This is consistent with what is used 
> internally by SecurityUtil.java to login in other services, such as the 
> DataNode. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583211#comment-17583211
 ] 

ASF GitHub Bot commented on HDFS-16686:
---

snmvaughan commented on code in PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#discussion_r951915598


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniQJMHACluster.java:
##
@@ -35,7 +35,7 @@
 import java.util.List;
 import java.util.Random;
 
-public class MiniQJMHACluster {
+public class MiniQJMHACluster implements AutoCloseable {

Review Comment:
   That was my mistake.  `close()` can't throw an exception, so the try-catch 
should have appeared in close.  The use of `AutoCloseable` allows 
try-with-resources, which ensures that the resources are released properly.  
I've updated `TestRollingUpgrade` because it was consistently failing despite 
not being related to this change.
   
   I'll fix the exception handling to match the original `shutdown()` 
signature, and plan on opening a broader test update to apply the lessons 
learned here to other tests.
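
For context, the pattern under discussion in sketch form, with close() wrapping the checked exception from the original shutdown() (builder usage and exception wrapping are illustrative):
{code:java}
// Sketch of the close() wrapper inside MiniQJMHACluster:
@Override
public void close() {
  try {
    shutdown();                      // keeps the original IOException-throwing signature
  } catch (IOException e) {
    throw new UncheckedIOException(e);
  }
}

// Test code: try-with-resources tears the cluster down even if an assertion throws.
try (MiniQJMHACluster qjmCluster = new MiniQJMHACluster.Builder(conf).build()) {
  // ... exercise the cluster ...
}
{code}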





> GetJournalEditServlet fails to authorize valid Kerberos request
> ---
>
> Key: HDFS-16686
> URL: https://issues.apache.org/jira/browse/HDFS-16686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running in Kubernetes using Java 11 in an HA 
> configuration.  JournalNodes run on separate pods and have their own Kerberos 
> principal "jn/@".
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> GetJournalEditServlet uses request.getRemoteUser() to determine the 
> remoteShortName for Kerberos authorization, which fails to match when the 
> JournalNode uses its own Kerberos principal (e.g. jn/@).
> This can be fixed by using the UserGroupInformation provided by the base 
> DfsServlet class using the getUGI(request, conf) call.
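
In sketch form, the proposed change inside the servlet's authorization check (variable names are illustrative):
{code:java}
// Before: raw servlet call, which yields "jn/host@REALM" and fails the match.
// String remoteShortName = request.getRemoteUser();

// After: derive the short name from the authenticated principal via the
// getUGI(request, conf) helper inherited from DfsServlet.
UserGroupInformation ugi = getUGI(request, conf);
String remoteShortName = ugi.getShortUserName();  // "jn/host@REALM" -> "jn"
{code}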



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583192#comment-17583192
 ] 

ASF GitHub Bot commented on HDFS-16686:
---

sunchao commented on code in PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#discussion_r951896386


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniQJMHACluster.java:
##
@@ -35,7 +35,7 @@
 import java.util.List;
 import java.util.Random;
 
-public class MiniQJMHACluster {
+public class MiniQJMHACluster implements AutoCloseable {

Review Comment:
   Yes, but it changes the signature so it no longer throws `IOException`, so 
all the call sites of this method should no longer need to catch that 
exception. 
   
   I'd prefer to do this separately if not related to this PR.





> GetJournalEditServlet fails to authorize valid Kerberos request
> ---
>
> Key: HDFS-16686
> URL: https://issues.apache.org/jira/browse/HDFS-16686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running in Kubernetes using Java 11 in an HA 
> configuration.  JournalNodes run on separate pods and have their own Kerberos 
> principal "jn/@".
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> GetJournalEditServlet uses request.getRemoteUser() to determine the 
> remoteShortName for Kerberos authorization, which fails to match when the 
> JournalNode uses its own Kerberos principal (e.g. jn/@).
> This can be fixed by using the UserGroupInformation provided by the base 
> DfsServlet class using the getUGI(request, conf) call.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16734) RBF: fix some bugs when handling getContentSummary RPC

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583189#comment-17583189
 ] 

ASF GitHub Bot commented on HDFS-16734:
---

goiri commented on code in PR #4763:
URL: https://github.com/apache/hadoop/pull/4763#discussion_r951889394


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1242,14 +1243,95 @@ public void setBalancerBandwidth(long bandwidth) throws 
IOException {
 rpcClient.invokeConcurrent(nss, method, true, false);
   }
 
+  /**
+   * Recursively get all the locations for the path.
+   * For example, there are some mount points:
+   *   /a -> ns0 -> /a
+   *   /a/b -> ns1 -> /a/b
+   *   /a/b/c -> ns2 -> /a/b/c
+   * When the path is '/a', the result of locations should be
+   * {ns0 -> [RemoteLocation(/a)], ns1 -> [RemoteLocation(/a/b)], ns2 -> 
[RemoteLocation(/a/b/c)]}
+   * @param path the path to get the locations.
+   * @param locations a map to store all the locations and key is namespace id.
+   * @throws IOException
+   */
+  @VisibleForTesting
+  void getAllLocations(String path, Map<String, List<RemoteLocation>> locations)
+      throws IOException {
+    try {
+      List<RemoteLocation> parentLocations =
+          rpcServer.getLocationsForPath(path, false, false);
+      parentLocations.forEach(
+          l -> locations.computeIfAbsent(l.getNameserviceId(), k -> new ArrayList<>()).add(l));
+    } catch (NoLocationException | RouterResolveException e) {
+      LOG.debug("Cannot find locations for {}.", path);
+    }
+
+    final List<String> children = subclusterResolver.getMountPoints(path);
+    if (children != null) {
+      for (String child : children) {
+        Path childPath = new Path(path, child);
+        getAllLocations(childPath.toUri().getPath(), locations);
+      }
+    }
+  }
+
+  /**
+   * Get all the locations of the path for {@link this#getContentSummary(String)}.
+   * For example, there are some mount points:
+   *   /a -> ns0 -> /a
+   *   /a/b -> ns0 -> /a/b
+   *   /a/b/c -> ns1 -> /a/b/c
+   * When the path is '/a', the result of locations should be
+   * [RemoteLocation('/a', ns0, '/a'), RemoteLocation('/a/b/c', ns1, '/a/b/c')]
+   * When the path is '/b', will throw NoLocationException.
+   * @param path the path to get content summary
+   * @return one list contains all the remote location
+   * @throws IOException
+   */
+  @VisibleForTesting
+  List<RemoteLocation> getLocationsForContentSummary(String path) throws IOException {
+    final Map<String, List<RemoteLocation>> ns2Locations = new HashMap<>();
+    final List<RemoteLocation> locations = new ArrayList<>();
+
+    // Try to get all the locations of the path.
+    getAllLocations(path, ns2Locations);
+
+    if (ns2Locations.isEmpty()) {
+      throw new NoLocationException(path, subclusterResolver.getClass());
+    }
+
+    // remove the redundancy remoteLocation order by destination.
+    ns2Locations.forEach((k, v) -> {
+      List<RemoteLocation> sortedList = v.stream().sorted().collect(Collectors.toList());
+      int size = sortedList.size();
+      for (int i = size - 1; i > -1; i--) {
+        RemoteLocation currentLocation = sortedList.get(i);
+        if (i == 0) {
+          locations.add(currentLocation);
+          continue;
+        }
+
+        RemoteLocation preLocation = sortedList.get(i - 1);
+        if (!currentLocation.getDest().startsWith(preLocation.getDest() + Path.SEPARATOR)) {
+          locations.add(currentLocation);
+        } else {
+          LOG.debug("Ignore the redundancy location {}, because there is an ancestor location {}",

Review Comment:
   "Ignore redundant location"



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1242,14 +1243,95 @@ public void setBalancerBandwidth(long bandwidth) throws 
IOException {
 rpcClient.invokeConcurrent(nss, method, true, false);
   }
 
+  /**
+   * Recursively get all the locations for the path.
+   * For example, there are some mount points:
+   *   /a -> ns0 -> /a
+   *   /a/b -> ns1 -> /a/b
+   *   /a/b/c -> ns2 -> /a/b/c
+   * When the path is '/a', the result of locations should be
+   * {ns0 -> [RemoteLocation(/a)], ns1 -> [RemoteLocation(/a/b)], ns2 -> 
[RemoteLocation(/a/b/c)]}
+   * @param path the path to get the locations.
+   * @param locations a map to store all the locations and key is namespace id.
+   * @throws IOException
+   */
+  @VisibleForTesting
+  void getAllLocations(String path, Map<String, List<RemoteLocation>> locations)
+      throws IOException {
+    try {
+      List<RemoteLocation> parentLocations =
+          rpcServer.getLocationsForPath(path, false, false);
+      parentLocations.forEach(
+          l -> locations.computeIfAbsent(l.getNameserviceId(), k -> new ArrayList<>()).add(l));
+    } catch (NoLocationException | RouterResolveException e) {
+      LOG.debug("Cannot find locations for {}.", path);
+    }
+
+    final List<String> 

[jira] [Commented] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583170#comment-17583170
 ] 

ASF GitHub Bot commented on HDFS-16686:
---

hadoop-yetus commented on PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#issuecomment-1222867870

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/6/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 17 unchanged - 
16 fixed = 21 total (was 33)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 235m 29s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 347m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4724 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fa1e7f3da324 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3fee70695c9ddea3a896b1f5401abd3fbaeeb5c8 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/6/testReport/ |
   | Max. process+thread count | 3356 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[jira] [Commented] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583169#comment-17583169
 ] 

ASF GitHub Bot commented on HDFS-4043:
--

hadoop-yetus commented on PR #4785:
URL: https://github.com/apache/hadoop/pull/4785#issuecomment-1222856319

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 35s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m 58s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 59s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  28m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4785/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 93 
unchanged - 0 fixed = 95 total (was 93)  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 49s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4785/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4785 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 97116f2d2c6a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / a099a30a84f608aa0e06a50a64e6c4be577c61fe |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4785/1/testReport/ |
   | Max. process+thread count | 2868 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4785/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
> principal name.
> 
>
> Key: HDFS-4043
> URL: https://issues.apache.org/jira/browse/HDFS-4043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha, 
> 3.4.0, 3.3.9
> Environment: CDH4U1 on Ubuntu 12.04
>Reporter: Ahad Rana
>Assignee: Steve Vaughan
>Priority: Major
>

[jira] [Commented] (HDFS-16684) Exclude self from JournalNodeSyncer when using a bind host

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583098#comment-17583098
 ] 

ASF GitHub Bot commented on HDFS-16684:
---

snmvaughan opened a new pull request, #4786:
URL: https://github.com/apache/hadoop/pull/4786

   Backport from trunk.  The JournalNodeSyncer will include the local instance 
in syncing when using a bind host (e.g. 0.0.0.0).  There is a mechanism that is 
supposed to exclude the local instance, but it doesn't recognize the 
meta-address as a local address.
   
   Running with bind addresses set to 0.0.0.0, the JournalNodeSyncer will log 
attempts to sync with itself as part of the normal syncing rotation.  For an HA 
configuration running 3 JournalNodes, the "other" list used by the 
JournalNodeSyncer will include 3 proxies.
   
   Exclude bound local addresses, including the use of a wildcard address in 
the bound host configurations, while still allowing multiple instances on the 
same host.
   
   Allow sync attempts with unresolved addresses, so that sync attempts can 
drive resolution as servers become available.
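   
   A rough sketch of the exclusion rule described above (hypothetical helper; the actual patch works against the JournalNode's configured and resolved addresses):
{code:java}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.net.SocketException;

final class SyncerAddressUtil {
  /** True when 'other' points back at this JournalNode itself. */
  static boolean isSelf(InetSocketAddress other, int ownPort) {
    InetAddress addr = other.getAddress();
    if (addr == null) {
      return false;  // unresolved: keep it, so later sync attempts can drive resolution
    }
    try {
      boolean localHost = addr.isAnyLocalAddress()     // the 0.0.0.0 wildcard
          || addr.isLoopbackAddress()                  // 127.0.0.1 / ::1
          || NetworkInterface.getByInetAddress(addr) != null;
      // Same host but a different port is another JN instance, so keep it.
      return localHost && other.getPort() == ownPort;
    } catch (SocketException e) {
      return false;
    }
  }
}
{code}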
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Exclude self from JournalNodeSyncer when using a bind host
> --
>
> Key: HDFS-16684
> URL: https://issues.apache.org/jira/browse/HDFS-16684
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running with Java 11 and bind addresses set to 0.0.0.0.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> The JournalNodeSyncer will include the local instance in syncing when using a 
> bind host (e.g. 0.0.0.0).  There is a mechanism that is supposed to exclude 
> the local instance, but it doesn't recognize the meta-address as a local 
> address.
> Running with bind addresses set to 0.0.0.0, the JournalNodeSyncer will log 
> attempts to sync with itself as part of the normal syncing rotation.  For an 
> HA configuration running 3 JournalNodes, the "other" list used by the 
> JournalNodeSyncer will include 3 proxies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16684) Exclude self from JournalNodeSyncer when using a bind host

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583087#comment-17583087
 ] 

ASF GitHub Bot commented on HDFS-16684:
---

saintstack merged PR #4723:
URL: https://github.com/apache/hadoop/pull/4723




> Exclude self from JournalNodeSyncer when using a bind host
> --
>
> Key: HDFS-16684
> URL: https://issues.apache.org/jira/browse/HDFS-16684
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running with Java 11 and bind addresses set to 0.0.0.0.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> The JournalNodeSyncer will include the local instance in syncing when using a 
> bind host (e.g. 0.0.0.0).  There is a mechanism that is supposed to exclude 
> the local instance, but it doesn't recognize the meta-address as a local 
> address.
> Running with bind addresses set to 0.0.0.0, the JournalNodeSyncer will log 
> attempts to sync with itself as part of the normal syncing rotation.  For an 
> HA configuration running 3 JournalNodes, the "other" list used by the 
> JournalNodeSyncer will include 3 proxies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583086#comment-17583086
 ] 

ASF GitHub Bot commented on HDFS-4043:
--

snmvaughan opened a new pull request, #4785:
URL: https://github.com/apache/hadoop/pull/4785

   Backport of the changes from trunk.
   
   Use the existing DomainNameResolver to leverage the pluggable resolution 
framework.  This provides a means to perform a reverse lookup if needed.
   
   Update default implementation of DNSDomainNameResolver to protect against 
returning the IP address as a string from a cached value.
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
> principal name.
> 
>
> Key: HDFS-4043
> URL: https://issues.apache.org/jira/browse/HDFS-4043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha, 
> 3.4.0, 3.3.9
> Environment: CDH4U1 on Ubuntu 12.04
>Reporter: Ahad Rana
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>   Original Estimate: 24h
>  Time Spent: 50m
>  Remaining Estimate: 23h 10m
>
> The Namenode uses the loginAsNameNodeUser method in NameNode.java to login 
> using the hdfs principal. This method in turn invokes SecurityUtil.login with 
> a hostname (last parameter) obtained via a call to InetAddress.getHostName. 
> This call does not always return the fully qualified host name, and thus 
> causes the namenode login to fail due to Kerberos's inability to find a 
> matching hdfs principal in the hdfs.keytab file. Instead it should use 
> InetAddress.getCanonicalHostName. This is consistent with what is used 
> internally by SecurityUtil.java to login in other services, such as the 
> DataNode. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2022-08-22 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan updated HDFS-4043:

Fix Version/s: 3.3.9

> Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
> principal name.
> 
>
> Key: HDFS-4043
> URL: https://issues.apache.org/jira/browse/HDFS-4043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha, 
> 3.4.0, 3.3.9
> Environment: CDH4U1 on Ubuntu 12.04
>Reporter: Ahad Rana
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>   Original Estimate: 24h
>  Time Spent: 50m
>  Remaining Estimate: 23h 10m
>
> The Namenode uses the loginAsNameNodeUser method in NameNode.java to login 
> using the hdfs principal. This method in turn invokes SecurityUtil.login with 
> a hostname (last parameter) obtained via a call to InetAddress.getHostName. 
> This call does not always return the fully qualified host name, and thus 
> causes the namenode login to fail due to Kerberos's inability to find a 
> matching hdfs principal in the hdfs.keytab file. Instead it should use 
> InetAddress.getCanonicalHostName. This is consistent with what is used 
> internally by SecurityUtil.java to login in other services, such as the 
> DataNode. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2022-08-22 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan reopened HDFS-4043:
-

Adding a backport to branch-3.3

> Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
> principal name.
> 
>
> Key: HDFS-4043
> URL: https://issues.apache.org/jira/browse/HDFS-4043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha, 
> 3.4.0, 3.3.9
> Environment: CDH4U1 on Ubuntu 12.04
>Reporter: Ahad Rana
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>   Original Estimate: 24h
>  Time Spent: 50m
>  Remaining Estimate: 23h 10m
>
> The Namenode uses the loginAsNameNodeUser method in NameNode.java to login 
> using the hdfs principal. This method in turn invokes SecurityUtil.login with 
> a hostname (last parameter) obtained via a call to InetAddress.getHostName. 
> This call does not always return the fully qualified host name, and thus 
> causes the namenode login to fail due to Kerberos's inability to find a 
> matching hdfs principal in the hdfs.keytab file. Instead it should use 
> InetAddress.getCanonicalHostName. This is consistent with what is used 
> internally by SecurityUtil.java to login in other services, such as the 
> DataNode. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16736) Link to Boost library in libhdfspp

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583083#comment-17583083
 ] 

ASF GitHub Bot commented on HDFS-16736:
---

goiri commented on code in PR #4782:
URL: https://github.com/apache/hadoop/pull/4782#discussion_r951657346


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt:
##
@@ -28,7 +28,7 @@ project (libhdfspp)
 
 cmake_minimum_required(VERSION 2.8)
 
-find_package (Boost 1.72.0 REQUIRED)
+find_package (Boost 1.72.0 REQUIRED COMPONENTS date_time)

Review Comment:
   Is there a way to make this 1.72.0 a variable?
   I'd like to avoid setting this in 4 places.





> Link to Boost library in libhdfspp
> --
>
> Key: HDFS-16736
> URL: https://issues.apache.org/jira/browse/HDFS-16736
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>
> The compilation of HDFS Native Client fails on Windows 10 due to the 
> following error -
> {code}
> [exec] 
> "H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj"
>  (default target) (105) ->
> [exec]   rpc.lib(rpc_engine.obj) : error LNK2019: unresolved external symbol 
> "__declspec(dllimport) public: __cdecl 
> boost::gregorian::greg_month::greg_month(unsigned short)" 
> (__imp_??0greg_month@gregorian@boost@@QEAA@G@Z) referenced in function 
> "private: static class boost::posix_time::ptime __cdecl 
> boost::date_time::microsec_clock<boost::posix_time::ptime>::create_time(struct tm * (__cdecl*)(__int64 const 
> *,struct tm *))" 
> (?create_time@?$microsec_clock@Vptime@posix_time@boost@@@date_time@boost@@CA?AVptime@posix_time@3@P6APEAUtm@@PEB_JPEAU6@@Z@Z)
>  
> [H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj]
> [exec]   rpc.lib(request.obj) : error LNK2001: unresolved external symbol 
> "__declspec(dllimport) public: __cdecl 
> boost::gregorian::greg_month::greg_month(unsigned short)" 
> (__imp_??0greg_month@gregorian@boost@@QEAA@G@Z) 
> [H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj]
> [exec]   
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\RelWithDebInfo\logging_test.exe
>  : fatal error LNK1120: 1 unresolved externals 
> [H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj]
> {code}
> Thus, we need to link against the Boost library to resolve this error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16689) Standby NameNode crashes when transitioning to Active with in-progress tailer

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583004#comment-17583004
 ] 

ASF GitHub Bot commented on HDFS-16689:
---

ZanderXu commented on PR #4744:
URL: https://github.com/apache/hadoop/pull/4744#issuecomment-1222443598

   @abhishekkarigar  Thanks for your attention to this issue.  
   
   @xkrogen and I will solve this problem as soon as possible.
   
   @xkrogen Sir, please review the latest patch. Thanks




> Standby NameNode crashes when transitioning to Active with in-progress tailer
> -
>
> Key: HDFS-16689
> URL: https://issues.apache.org/jira/browse/HDFS-16689
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Standby NameNode crashes when transitioning to Active with an in-progress 
> tailer, with an error message like the one below:
> {code:java}
> Caused by: java.lang.IllegalStateException: Cannot start writing at txid X 
> when there is a stream available for read: ByteStringEditLog[X, Y], 
> ByteStringEditLog[X, 0]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:344)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.openForWrite(FSEditLogAsync.java:113)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1423)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:2132)
>   ... 36 more
> {code}
> After tracing, I found a critical bug in 
> *EditlogTailer#catchupDuringFailover()* when 
> *DFS_HA_TAILEDITS_INPROGRESS_KEY* is true, because *catchupDuringFailover()* 
> tries to replay all missed edits from the JournalNodes with *onlyDurableTxns=true*. 
> It may not be able to replay any edits when some JournalNodes are abnormal. 
> To reproduce, suppose:
> - There are 2 namenodes, namely NN0 and NN1, whose states are Active and 
> Standby respectively. And there are 3 JournalNodes, namely JN0, 
> JN1 and JN2. 
> - NN0 tries to sync 3 edits to the JNs starting at txid 3, but only successfully 
> syncs them to JN1 and JN2. JN0 is abnormal, e.g. due to GC, a bad network or 
> a restart.
> - NN1's lastAppliedTxId is 2, and at this moment, we try to fail over from 
> NN0 to NN1. 
> - NN1 only gets two responses, from JN0 and JN1, when it tries to select 
> inputStreams with *fromTxnId=3* and *onlyDurableTxns=true*, and the reported 
> txid counts are 0 and 3 respectively. JN2 is abnormal, e.g. due to GC, a bad 
> network or a restart.
> - NN1 cannot replay any edits with *fromTxnId=3* from the JournalNodes 
> because *maxAllowedTxns* is 0.
> So I think the Standby NameNode should run *catchupDuringFailover()* with 
> *onlyDurableTxns=false*, so that it can replay all missed edits from the 
> JournalNodes.
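
To see why *maxAllowedTxns* collapses to 0 above, here is a hypothetical back-of-the-envelope helper (not the actual QuorumJournalManager code): with *onlyDurableTxns=true*, only transactions known to be on a majority of all JournalNodes are eligible for replay.
{code:java}
import java.util.Arrays;

public class DurableTxnDemo {
  // Highest txid guaranteed to sit on a majority of JNs, given the end txids
  // reported by the responding JNs (non-responders count as "unknown", i.e. 0).
  static long maxDurableTxn(long[] reported, int totalJns) {
    long[] all = new long[totalJns];               // missing responses stay 0
    System.arraycopy(reported, 0, all, 0, reported.length);
    Arrays.sort(all);                              // e.g. {0, 3} from JN0/JN1 -> {0, 0, 3}
    int majority = totalJns / 2 + 1;               // 2 of 3
    return all[totalJns - majority];               // all[1] = 0 -> nothing is replayable
  }

  public static void main(String[] args) {
    System.out.println(maxDurableTxn(new long[]{0, 3}, 3));  // prints 0
  }
}
{code}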



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16684) Exclude self from JournalNodeSyncer when using a bind host

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583002#comment-17583002
 ] 

ASF GitHub Bot commented on HDFS-16684:
---

snmvaughan commented on PR #4723:
URL: https://github.com/apache/hadoop/pull/4723#issuecomment-1222430479

   @saintstack The changes were made in response to your questions, so I 
wouldn't classify them as draft.  You also indirectly identified a scenario 
where the startup sequence was delayed unnecessarily, so I've gone ahead and 
incorporated the update.  Let me know if you have any further questions.




> Exclude self from JournalNodeSyncer when using a bind host
> --
>
> Key: HDFS-16684
> URL: https://issues.apache.org/jira/browse/HDFS-16684
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running with Java 11 and bind addresses set to 0.0.0.0.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> The JournalNodeSyncer will include the local instance in syncing when using a 
> bind host (e.g. 0.0.0.0).  There is a mechanism that is supposed to exclude 
> the local instance, but it doesn't recognize the meta-address as a local 
> address.
> Running with bind addresses set to 0.0.0.0, the JournalNodeSyncer will log 
> attempts to sync with itself as part of the normal syncing rotation.  For an 
> HA configuration running 3 JournalNodes, the "other" list used by the 
> JournalNodeSyncer will include 3 proxies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16737) Fix number of threads in FsDatasetAsyncDiskService#addExecutorForVolume

2022-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16737:
--
Labels: pull-request-available  (was: )

> Fix number of threads in FsDatasetAsyncDiskService#addExecutorForVolume
> ---
>
> Key: HDFS-16737
> URL: https://issues.apache.org/jira/browse/HDFS-16737
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> The number of threads in FsDatasetAsyncDiskService#addExecutorForVolume is 
> elastic right now; make it fixed.
> Presently the corePoolSize is set to 1 and the maximumPoolSize is set to 
> maxNumThreadsPerVolume, but since the queue size is Integer.MAX_VALUE, the 
> queue never tends to get full, so the thread count stays at 1 irrespective of 
> maxNumThreadsPerVolume.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16737) Fix number of threads in FsDatasetAsyncDiskService#addExecutorForVolume

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17582999#comment-17582999
 ] 

ASF GitHub Bot commented on HDFS-16737:
---

ZanderXu opened a new pull request, #4784:
URL: https://github.com/apache/hadoop/pull/4784

   ### Description of PR
   The number of threads in FsDatasetAsyncDiskService#addExecutorForVolume is 
elastic right now; make it fixed.
   Presently the corePoolSize is set to 1 and the maximumPoolSize is set to 
maxNumThreadsPerVolume, but since the queue size is Integer.MAX_VALUE, the queue 
never tends to get full, so the thread count stays at 1 irrespective of 
maxNumThreadsPerVolume.
   
   




> Fix number of threads in FsDatasetAsyncDiskService#addExecutorForVolume
> ---
>
> Key: HDFS-16737
> URL: https://issues.apache.org/jira/browse/HDFS-16737
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> The number of threads in FsDatasetAsyncDiskService#addExecutorForVolume is 
> elastic right now; make it fixed.
> Presently the corePoolSize is set to 1 and the maximumPoolSize is set to 
> maxNumThreadsPerVolume, but since the queue size is Integer.MAX_VALUE, the 
> queue never tends to get full, so the thread count stays at 1 irrespective of 
> maxNumThreadsPerVolume.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16737) Fix number of threads in FsDatasetAsyncDiskService#addExecutorForVolume

2022-08-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-16737:
---

 Summary: Fix number of threads in 
FsDatasetAsyncDiskService#addExecutorForVolume
 Key: HDFS-16737
 URL: https://issues.apache.org/jira/browse/HDFS-16737
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ZanderXu
Assignee: ZanderXu


The number of threads in FsDatasetAsyncDiskService#addExecutorForVolume is 
elastic right now; make it fixed.
Presently the corePoolSize is set to 1 and the maximumPoolSize is set to 
maxNumThreadsPerVolume, but since the queue size is Integer.MAX_VALUE, the queue 
never tends to get full, so the thread count stays at 1 irrespective of 
maxNumThreadsPerVolume.
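
The ThreadPoolExecutor behavior behind this, in a minimal standalone sketch (the value 4 for maxNumThreadsPerVolume is illustrative): with an unbounded work queue, the pool never grows past corePoolSize, because extra threads are only created when the queue rejects a task.
{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {
  public static void main(String[] args) throws InterruptedException {
    int maxNumThreadsPerVolume = 4;  // illustrative value

    // Current shape: core=1, max=4, unbounded queue -> pool size stays at 1.
    ThreadPoolExecutor elastic = new ThreadPoolExecutor(
        1, maxNumThreadsPerVolume, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>());
    for (int i = 0; i < 100; i++) {
      elastic.execute(() -> {
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
      });
    }
    Thread.sleep(200);
    System.out.println("elastic pool size: " + elastic.getPoolSize());  // prints 1

    // Fixed-size shape along the lines the issue proposes: core == max.
    ThreadPoolExecutor fixed = new ThreadPoolExecutor(
        maxNumThreadsPerVolume, maxNumThreadsPerVolume, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>());
    elastic.shutdown();
    fixed.shutdown();
  }
}
{code}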



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17582988#comment-17582988
 ] 

ASF GitHub Bot commented on HDFS-16686:
---

snmvaughan commented on code in PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#discussion_r951463201


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestGetJournalEditServlet.java:
##
@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.hdfs.qjournal.server;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.web.resources.UserParam;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.log4j.Level;
+import org.apache.log4j.LogManager;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import javax.servlet.ServletConfig;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import java.io.IOException;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class TestGetJournalEditServlet {
+
+  private final static Configuration CONF = new HdfsConfiguration();
+
+  private final static GetJournalEditServlet SERVLET = new 
GetJournalEditServlet();
+
+  @BeforeClass
+  public static void setUp() throws ServletException {
+LogManager.getLogger(GetJournalEditServlet.class).setLevel(Level.DEBUG);

Review Comment:
   That was a holdover from debug testing.  I'll remove it.





> GetJournalEditServlet fails to authorize valid Kerberos request
> ---
>
> Key: HDFS-16686
> URL: https://issues.apache.org/jira/browse/HDFS-16686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running in Kubernetes using Java 11 in an HA 
> configuration.  JournalNodes run on separate pods and have their own Kerberos 
> principal "jn/@".
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> GetJournalEditServlet uses request.getRemoteUser() to determine the 
> remoteShortName for Kerberos authorization, which fails to match when the 
> JournalNode uses its own Kerberos principal (e.g. jn/@).
> This can be fixed by using the UserGroupInformation provided by the base 
> DfsServlet class using the getUGI(request, conf) call.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17582986#comment-17582986
 ] 

ASF GitHub Bot commented on HDFS-16686:
---

snmvaughan commented on code in PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#discussion_r951462510


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java:
##
@@ -80,16 +82,38 @@ public static void runCmd(DFSAdmin dfsadmin, boolean success,
 }
   }
 
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  /**
+   * Create a default HDFS configuration which has test-specific data
+   * directories. This is intended to protect against interactions between
+   * test runs that might corrupt results. Each test run's data is
+   * automatically cleaned up by JUnit.
+   *
+   * @return a default configuration with test-specific data directories
+   */
+  public Configuration getHdfsConfiguration() throws IOException {
+    Configuration conf = new HdfsConfiguration();
+
+    // Override the file system locations with test-specific temporary folders
+    conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
+        folder.newFolder("dfs/name").toString());
+    conf.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
+        folder.newFolder("dfs/namesecondary").toString());
+    conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
+        folder.newFolder("dfs/data").toString());
+
+    return conf;
+  }
+
   /**
    * Test DFSAdmin Upgrade Command.
    */
   @Test
   public void testDFSAdminRollingUpgradeCommands() throws Exception {
     // start a cluster
-    final Configuration conf = new HdfsConfiguration();
-    MiniDFSCluster cluster = null;
-    try {
-      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
+    final Configuration conf = getHdfsConfiguration();

Review Comment:
   These changes were made because the tests kept failing in the upstream test 
run but ran fine locally. The upstream test runs are executed in parallel, 
resulting in flaky behavior.
   
   The change to all of these tests is a switch from a local HDFS configuration 
to a shared mechanism called `getHdfsConfiguration()`. The shared mechanism 
has the added benefit of using a `TemporaryFolder` to ensure that each test 
cluster gets its own disk space per test. This avoids interference between 
parallel tests, as well as picking up leftovers from previous tests.
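
   For illustration, a test built on this pattern would look roughly like the sketch below. It assumes JUnit 4's `TemporaryFolder` rule and `MiniDFSCluster`'s try-with-resources support; the class name (`IsolatedClusterSketch`) and test name are made up for the example:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class IsolatedClusterSketch {

  // JUnit creates a fresh directory per test and deletes it afterwards.
  @Rule
  public TemporaryFolder folder = new TemporaryFolder();

  private Configuration getHdfsConfiguration() throws IOException {
    Configuration conf = new HdfsConfiguration();
    // Point every on-disk location at a per-test temporary directory so
    // parallel test runs never share state.
    conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
        folder.newFolder("dfs/name").toString());
    conf.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
        folder.newFolder("dfs/namesecondary").toString());
    conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
        folder.newFolder("dfs/data").toString());
    return conf;
  }

  @Test
  public void startsOnPrivateDirectories() throws Exception {
    // try-with-resources shuts the cluster down even if an assertion fails,
    // and the TemporaryFolder rule removes the data afterwards.
    try (MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(getHdfsConfiguration())
            .numDataNodes(0).build()) {
      cluster.waitActive();
    }
  }
}
```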





> GetJournalEditServlet fails to authorize valid Kerberos request
> ---
>
> Key: HDFS-16686
> URL: https://issues.apache.org/jira/browse/HDFS-16686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running in Kubernetes using Java 11 in an HA 
> configuration.  JournalNodes run on separate pods and have their own Kerberos 
> principal "jn/@".
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> GetJournalEditServlet uses request.getRemoteUser() to determine the 
> remoteShortName for Kerberos authorization, which fails to match when the 
> JournalNode uses its own Kerberos principal (e.g. jn/@).
> This can be fixed by using the UserGroupInformation provided by the base 
> DfsServlet class via the getUGI(request, conf) call.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17582983#comment-17582983
 ] 

ASF GitHub Bot commented on HDFS-16686:
---

snmvaughan commented on code in PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#discussion_r951457153


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniQJMHACluster.java:
##
@@ -35,7 +35,7 @@
 import java.util.List;
 import java.util.Random;
 
-public class MiniQJMHACluster {
+public class MiniQJMHACluster implements AutoCloseable {

Review Comment:
   This doesn't change the call to `shutdown()` at all. It adds another method 
called `close()` that makes this class `AutoCloseable`, matching the pattern 
already used in `MiniDFSCluster`.
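
   The pattern is small enough to sketch. Assuming `shutdown()` keeps throwing `IOException` as it does today, `close()` can simply delegate and wrap the checked exception; how the actual patch handles that detail may differ:

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class MiniQJMHAClusterSketch implements AutoCloseable {

  // Existing teardown entry point; direct callers keep working unchanged.
  public void shutdown() throws IOException {
    // ... stop the JournalNodes and the HA NameNode pair ...
  }

  // New: delegate to shutdown() so the class works in try-with-resources.
  @Override
  public void close() {
    try {
      shutdown();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

   With that in place, a test can write `try (MiniQJMHACluster cluster = new MiniQJMHACluster.Builder(conf).build()) { ... }` and teardown happens even when an assertion fails.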





> GetJournalEditServlet fails to authorize valid Kerberos request
> ---
>
> Key: HDFS-16686
> URL: https://issues.apache.org/jira/browse/HDFS-16686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
> Environment: Running in Kubernetes using Java 11 in an HA 
> configuration.  JournalNodes run on separate pods and have their own Kerberos 
> principal "jn/@".
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> GetJournalEditServlet uses request.getRemoteUser() to determine the 
> remoteShortName for Kerberos authorization, which fails to match when the 
> JournalNode uses its own Kerberos principal (e.g. jn/@).
> This can be fixed by using the UserGroupInformation provided by the base 
> DfsServlet class via the getUGI(request, conf) call.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org