Re: [VOTE] Hadoop 3.2.x EOL

2023-12-06 Thread Akira Ajisaka
+1



On Wed, Dec 6, 2023 at 1:10 PM Xiaoqiao He  wrote:

> Dear Hadoop devs,
>
> Given the feedback from the discussion thread [1], I'd like to start
> an official thread for the community to vote on release line 3.2 EOL.
>
> It will include:
> a. An official announcement that there will be no further regular Hadoop
> 3.2.x releases.
> b. Issues targeting 3.2.5 will not be fixed.
>
> This vote will run for 7 days and conclude by Dec 13, 2023.
>
> I’ll start with my +1.
>
> Best Regards,
> - He Xiaoqiao
>
> [1] https://lists.apache.org/thread/bbf546c6jz0og3xcl9l3qfjo93b65szr
>


[jira] [Resolved] (HDFS-16878) TestLeaseRecovery2 timeouts

2022-12-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16878.
--
Resolution: Duplicate

Dup of HDFS-16853. Closing.

> TestLeaseRecovery2 timeouts
> ---
>
> Key: HDFS-16878
> URL: https://issues.apache.org/jira/browse/HDFS-16878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>    Reporter: Akira Ajisaka
>Priority: Major
>
> The following tests in TestLeaseRecovery2 time out:
>  * testHardLeaseRecoveryAfterNameNodeRestart
>  * testHardLeaseRecoveryAfterNameNodeRestart2
>  * testHardLeaseRecoveryWithRenameAfterNameNodeRestart
> {noformat}
> [ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 139.044 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestLeaseRecovery2
> [ERROR] 
> testHardLeaseRecoveryAfterNameNodeRestart(org.apache.hadoop.hdfs.TestLeaseRecovery2)
>   Time elapsed: 30.47 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 3 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2831)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2880)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:594)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart(TestLeaseRecovery2.java:498)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:750) {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16878) TestLeaseRecovery2 timeouts

2022-12-29 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16878:


 Summary: TestLeaseRecovery2 timeouts
 Key: HDFS-16878
 URL: https://issues.apache.org/jira/browse/HDFS-16878
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Akira Ajisaka


The following tests in TestLeaseRecovery2 time out:
 * testHardLeaseRecoveryAfterNameNodeRestart

 * testHardLeaseRecoveryAfterNameNodeRestart2

 * testHardLeaseRecoveryWithRenameAfterNameNodeRestart

{noformat}
[ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 139.044 
s <<< FAILURE! - in org.apache.hadoop.hdfs.TestLeaseRecovery2
[ERROR] 
testHardLeaseRecoveryAfterNameNodeRestart(org.apache.hadoop.hdfs.TestLeaseRecovery2)
  Time elapsed: 30.47 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 3 
milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2831)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2880)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:594)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart(TestLeaseRecovery2.java:498)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750) {noformat}






[jira] [Resolved] (HDFS-16766) XML External Entity (XXE) attacks can occur while processing XML received from an untrusted source

2022-09-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16766.
--
Fix Version/s: 3.4.0
   3.3.9
   3.2.5
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2. Thank you [~Du] for your report 
and thank you [~groot] for your fix!

> XML External Entity (XXE) attacks can occur while processing XML received 
> from an untrusted source
> --
>
> Key: HDFS-16766
> URL: https://issues.apache.org/jira/browse/HDFS-16766
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.4
>Reporter: Jing
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9, 3.2.5
>
>
> XML External Entity (XXE) attacks can occur when an XML parser supports XML 
> entities while processing XML received from an untrusted source. The attack 
> resides in XML input containing references to an external entity and is parsed 
> by the weakly configured javax.xml.parsers.DocumentBuilder XML parser.
>  
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/ECPolicyLoader.java#L93
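The standard hardening for this class of bug is to disable DOCTYPE declarations and external entities on the DocumentBuilderFactory before parsing. The sketch below illustrates the general pattern (it is not the actual HDFS-16766 patch; the class name SecureXml and the demo payload are invented for illustration):

```java
import javax.xml.parsers.DocumentBuilderFactory;

public class SecureXml {
    // Returns a DocumentBuilderFactory hardened against XXE. The feature URIs
    // are the standard Xerces/SAX ones supported by the JDK's built-in parser.
    public static DocumentBuilderFactory secureFactory() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Disallowing DOCTYPE declarations blocks external entities and
        // entity-expansion ("billion laughs") attacks in one step.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Defense in depth: also disable external general/parameter entities.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }

    public static void main(String[] args) throws Exception {
        // A classic XXE payload: an external entity pointing at a local file.
        String xxe = "<?xml version=\"1.0\"?>"
            + "<!DOCTYPE r [<!ENTITY x SYSTEM \"file:///etc/passwd\">]><r>&x;</r>";
        try {
            secureFactory().newDocumentBuilder().parse(
                new java.io.ByteArrayInputStream(xxe.getBytes("UTF-8")));
            System.out.println("parsed");
        } catch (org.xml.sax.SAXParseException e) {
            // The hardened parser rejects the DOCTYPE before any entity is read.
            System.out.println("rejected");
        }
    }
}
```

With the first feature set, the parser refuses the document as soon as it sees the DOCTYPE, so the file reference is never dereferenced.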






[jira] [Resolved] (HDFS-16729) RBF: fix some unreasonably annotated docs

2022-08-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16729.
--
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~jianghuazhu] for your 
contribution!

> RBF: fix some unreasonably annotated docs
> -
>
> Key: HDFS-16729
> URL: https://issues.apache.org/jira/browse/HDFS-16729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, rbf
>Affects Versions: 3.3.3
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
> Attachments: image-2022-08-16-14-19-07-630.png
>
>
> I found some unreasonably annotated documentation here, e.g.:
>  !image-2022-08-16-14-19-07-630.png! 
> It should be our job to make these annotations cleaner.






Re: [DISCUSS] Hadoop 3.2.4 release

2022-07-10 Thread Akira Ajisaka
Thank you Masatake for proceeding with the release process.

-Akira

On Mon, Jul 11, 2022 at 10:12 AM Masatake Iwasaki 
wrote:

> I'm going to cut branch-3.2.4 today.
> If we need more time for 3.3.4 to fix the issue around HADOOP-18033,
> 3.2.4 can be released first.
>
> Thanks,
> Masatake Iwasaki
>
> On 2022/05/10 0:02, Masatake Iwasaki wrote:
> > Hi team,
> >
> > Shaded client artifacts (hadoop-client-api and hadoop-client-runtime)
> > of Hadoop 3.2.3 published to Maven turned out to be broken
> > due to an issue in the release process.
> >
> > In addition, we have enough fixes on branch-3.2 after branch-3.2.3 was
> created[1].
> > Migration from log4j to reload4j is one of the major issues.
> >
> > I would like to cut RC of 3.2.4 soon after 3.3.3 release.
> > I volunteer to take a release manager role as done for 3.2.3.
> >
> > [1]
> https://issues.apache.org/jira/issues/?filter=12350757=project%20in%20(YARN%2C%20HDFS%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20status%20%3D%20Resolved%20AND%20fixVersion%20%3D%203.2.4
> >
> > Thanks,
> > Masatake Iwasaki
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


[jira] [Resolved] (HDFS-16064) HDFS-721 causes DataNode decommissioning to get stuck indefinitely

2022-06-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16064.
--
Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

Merged the PR into trunk and branch-3.3.

> HDFS-721 causes DataNode decommissioning to get stuck indefinitely
> --
>
> Key: HDFS-16064
> URL: https://issues.apache.org/jira/browse/HDFS-16064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.2.1
>Reporter: Kevin Wikant
>Assignee: Kevin Wikant
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Seems that https://issues.apache.org/jira/browse/HDFS-721 was resolved as a 
> non-issue under the assumption that if the namenode & a datanode get into an 
> inconsistent state for a given block pipeline, there should be another 
> datanode available to replicate the block to
> While testing datanode decommissioning using "dfs.exclude.hosts", I have 
> encountered a scenario where the decommissioning gets stuck indefinitely
> Below is the progression of events:
>  * there are initially 4 datanodes DN1, DN2, DN3, DN4
>  * scale-down is started by adding DN1 & DN2 to "dfs.exclude.hosts"
>  * HDFS block pipelines on DN1 & DN2 must now be replicated to DN3 & DN4 in 
> order to satisfy their minimum replication factor of 2
>  * during this replication process 
> https://issues.apache.org/jira/browse/HDFS-721 is encountered which causes 
> the following inconsistent state:
>  ** DN3 thinks it has the block pipeline in FINALIZED state
>  ** the namenode does not think DN3 has the block pipeline
> {code:java}
> 2021-06-06 10:38:23,604 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
> (DataXceiver for client  at /DN2:45654 [Receiving block BP-YYY:blk_XXX]): 
> DN3:9866:DataXceiver error processing WRITE_BLOCK operation  src: /DN2:45654 
> dst: /DN3:9866; 
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
> BP-YYY:blk_XXX already exists in state FINALIZED and thus cannot be created.
> {code}
>  * the replication is attempted again, but:
>  ** DN4 has the block
>  ** DN1 and/or DN2 have the block, but don't count towards the minimum 
> replication factor because they are being decommissioned
>  ** DN3 does not have the block & cannot have the block replicated to it 
> because of HDFS-721
>  * the namenode repeatedly tries to replicate the block to DN3 & repeatedly 
> fails, this continues indefinitely
>  * therefore DN4 is the only live datanode with the block & the minimum 
> replication factor of 2 cannot be satisfied
>  * because the minimum replication factor cannot be satisfied for the 
> block(s) being moved off DN1 & DN2, the datanode decommissioning can never be 
> completed 
> {code:java}
> 2021-06-06 10:39:10,106 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN1:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false
> ...
> 2021-06-06 10:57:10,105 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN2:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false
> {code}
> Being stuck in decommissioning state forever is not an intended behavior of 
> DataNode decommissioning
> A few potential solutions:
>  * Address the root cause of the problem which is an inconsistent state 
> between namenode & datanode: https://issues.apache.org/jira/browse/HDFS-721
>  * Detect when datanode decommissioning is stuck due to lack of available 
> datanodes for satisfying the minimum replication factor, then recover by 
> re-enabling the datanodes being decommissioned
>  






[jira] [Created] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16635:


 Summary: Fix javadoc error in Java 11
 Key: HDFS-16635
 URL: https://issues.apache.org/jira/browse/HDFS-16635
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, documentation
Reporter: Akira Ajisaka


The javadoc build fails on Java 11.

{noformat}
[ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
 error: reference not found
[ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
startup
{noformat}

https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt
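The usual fix for this kind of javadoc error is to make the `{@link}` target resolvable from package-info.java, either by importing the class or by fully qualifying it. A sketch of the fully-qualified form (the actual HDFS-16635 patch may differ):

```java
/**
 * This package provides a mechanism for tracking
 * {@link org.apache.hadoop.hdfs.server.namenode.NameNode} startup progress.
 */
package org.apache.hadoop.hdfs.server.namenode.startupprogress;
```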






[jira] [Resolved] (HDFS-16576) Remove unused Imports in Hadoop HDFS project

2022-06-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16576.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> Remove unused Imports in Hadoop HDFS project
> 
>
> Key: HDFS-16576
> URL: https://issues.apache.org/jira/browse/HDFS-16576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Optimize Imports to keep code clean
>  # Remove any unused imports






[jira] [Resolved] (HDFS-16608) Fix the link in TestClientProtocolForPipelineRecovery

2022-06-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16608.
--
Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~samrat007] for your contribution.

> Fix the link in TestClientProtocolForPipelineRecovery
> -
>
> Key: HDFS-16608
> URL: https://issues.apache.org/jira/browse/HDFS-16608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HDFS-16604) Install gtest via FetchContent_Declare in CMake

2022-05-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16604.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR into trunk.

> Install gtest via FetchContent_Declare in CMake
> ---
>
> Key: HDFS-16604
> URL: https://issues.apache.org/jira/browse/HDFS-16604
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> CMake is unable to check out the *release-1.10.0* tag of GoogleTest:
> {code}
> [WARNING] -- Build files have been written to: 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4370/centos-7/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/googletest-download
> [WARNING] Scanning dependencies of target googletest
> [WARNING] [ 11%] Creating directories for 'googletest'
> [WARNING] [ 22%] Performing download step (git clone) for 'googletest'
> [WARNING] Cloning into 'googletest-src'...
> [WARNING] fatal: invalid reference: release-1.10.0
> [WARNING] CMake Error at 
> googletest-download/googletest-prefix/tmp/googletest-gitclone.cmake:40 
> (message):
> [WARNING]   Failed to checkout tag: 'release-1.10.0'
> [WARNING] 
> [WARNING] 
> [WARNING] gmake[2]: *** [CMakeFiles/googletest.dir/build.make:111: 
> googletest-prefix/src/googletest-stamp/googletest-download] Error 1
> [WARNING] gmake[1]: *** [CMakeFiles/Makefile2:95: 
> CMakeFiles/googletest.dir/all] Error 2
> [WARNING] gmake: *** [Makefile:103: all] Error 2
> [WARNING] CMake Error at main/native/libhdfspp/CMakeLists.txt:68 (message):
> [WARNING]   Build step for googletest failed: 2
> {code}
> Jenkins run - 
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
> We need to use *FetchContent_Declare* so that we fetch the source code 
> exactly at the given commit SHA. This avoids the checkout step altogether and 
> solves the above issue.
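For reference, the FetchContent pattern looks roughly like this (a sketch with a placeholder commit SHA, not the actual patch):

```cmake
include(FetchContent)

# Pin GoogleTest to an exact commit; FetchContent downloads the source
# archive directly, so no git tag checkout is involved.
FetchContent_Declare(
  googletest
  URL https://github.com/google/googletest/archive/<commit-sha>.tar.gz
)
FetchContent_MakeAvailable(googletest)
```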






[jira] [Resolved] (HDFS-16453) Upgrade okhttp from 2.7.5 to 4.9.3

2022-05-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16453.
--
Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~ivan.viaznikov] for your report 
and thank you [~groot] for your contribution!

> Upgrade okhttp from 2.7.5 to 4.9.3
> --
>
> Key: HDFS-16453
> URL: https://issues.apache.org/jira/browse/HDFS-16453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.1
>Reporter: Ivan Viaznikov
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{org.apache.hadoop:hadoop-hdfs-client}} comes with 
> {{com.squareup.okhttp:okhttp:2.7.5}} as a dependency, which is vulnerable to 
> an information disclosure issue due to how the contents of sensitive headers, 
> such as the {{Authorization}} header, can be logged when an 
> {{IllegalArgumentException}} is thrown.
> This issue could allow an attacker or malicious user who has access to the 
> logs to obtain the sensitive contents of the affected headers which could 
> facilitate further attacks.
> Fixed in {{5.0.0-alpha3}} by 
> [this|https://github.com/square/okhttp/commit/dcc6483b7dc6d9c0b8e03ff7c30c13f3c75264a5]
>  commit. The fix was cherry-picked and backported into {{4.9.2}} with 
> [this|https://github.com/square/okhttp/commit/1fd7c0afdc2cee9ba982b07d49662af7f60e1518]
>  commit.
> Please clarify whether this dependency will be updated to a fixed 
> version in upcoming releases.






[jira] [Resolved] (HDFS-16185) Fix comment in LowRedundancyBlocks.java

2022-05-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16185.
--
Fix Version/s: 3.4.0
   3.2.4
   3.3.4
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2. Thank you [~groot] for your 
contribution.

> Fix comment in LowRedundancyBlocks.java
> ---
>
> Key: HDFS-16185
> URL: https://issues.apache.org/jira/browse/HDFS-16185
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>    Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop/blob/c8e58648389c7b0b476c3d0d47be86af2966842f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LowRedundancyBlocks.java#L249]
> "can only afford one replica loss" is not correct there. Before HDFS-9857, 
> the comment was "there is less than a third as many blocks as requested; this 
> is considered very under-replicated", which seems correct.






Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-07 Thread Akira Ajisaka
Hi Chao,

How about using
https://repository.apache.org/content/repositories/orgapachehadoop-1348/
instead of https://repository.apache.org/content/repositories/staging/ ?

Akira

On Sat, May 7, 2022 at 10:52 AM Ayush Saxena  wrote:

> Hmm, I see the artifacts ideally should have got overwritten by the new
> RC, but they didn’t. The reason seems like the staging path shared doesn’t
> have any jars…
> That is why it was picking the old jars. I think Steve needs to run mvn
> deploy again…
>
> Sent from my iPhone
>
> > On 07-May-2022, at 7:12 AM, Chao Sun  wrote:
> >
> > 
> >>
> >> Chao can you use the one that Steve mentioned in the mail?
> >
> > Hmm how do I do that? Typically after closing the RC in nexus the
> > release bits will show up in
> >
> https://repository.apache.org/content/repositories/staging/org/apache/hadoop
> > and Spark build will be able to pick them up for testing. However in
> > this case I don't see any 3.3.3 jars in the URL.
> >
> >> On Fri, May 6, 2022 at 6:24 PM Ayush Saxena  wrote:
> >>
> >> There were two 3.3.3 staged. The earlier one was with skipShade, the
> date was also april 22, I archived that. Chao can you use the one that
> Steve mentioned in the mail?
> >>
> >>> On Sat, 7 May 2022 at 06:18, Chao Sun  wrote:
> >>>
> >>> Seems there are some issues with the shaded client as I was not able
> >>> to compile Apache Spark with the RC
> >>> (https://github.com/apache/spark/pull/36474). Looks like it's compiled
> >>> with the `-DskipShade` option and the hadoop-client-api JAR doesn't
> >>> contain any class:
> >>>
> >>> ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
> >>> META-INF/
> >>> META-INF/MANIFEST.MF
> >>> META-INF/NOTICE.txt
> >>> META-INF/LICENSE.txt
> >>> META-INF/maven/
> >>> META-INF/maven/org.apache.hadoop/
> >>> META-INF/maven/org.apache.hadoop/hadoop-client-api/
> >>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
> >>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
> >>>
> >>> On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
> 
>  +1 (binding)
> 
>   * Signature: ok
>   * Checksum : passed
>   * Rat check (1.8.0_191): passed
>    - mvn clean apache-rat:check
>   * Built from source (1.8.0_191): failed
>    - mvn clean install  -DskipTests
>    - mvn -fae --no-transfer-progress -DskipTests
> -Dmaven.javadoc.skip=true
>  -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
>  -Drequire.zstd -Drequire.test.libhadoop clean install
>   * Unit tests pass (1.8.0_191):
> - HDFS Tests passed (Didn't run more than this).
> 
>  Deployed a ten node ha hdfs cluster with three namenodes and five
>  journalnodes. Ran a ten node hbase (older version of 2.5 branch built
>  against 3.3.2) against it. Tried a small verification job. Good. Ran a
>  bigger job with mild chaos. All seems to be working properly
> (recoveries,
>  logs look fine). Killed a namenode. Failover worked promptly. UIs look
>  good. Poked at the hdfs cli. Seems good.
> 
>  S
> 
>  On Tue, May 3, 2022 at 4:24 AM Steve Loughran
> 
>  wrote:
> 
> > I have put together a release candidate (rc0) for Hadoop 3.3.3
> >
> > The RC is available at:
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
> >
> > The git tag is release-3.3.3-RC0, commit d37586cbda3
> >
> > The maven artifacts are staged at
> >
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> >
> > You can find my public key at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > Change log
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
> >
> > Release notes
> >
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
> >
> > There's a very small number of changes, primarily critical
> code/packaging
> > issues and security fixes.
> >
> >
> >   - The critical fixes which shipped in the 3.2.3 release.
> >   -  CVEs in our code and dependencies
> >   - Shaded client packaging issues.
> >   - A switch from log4j to reload4j
> >
> >
> > reload4j is an active fork of the log4j 1.2.17 library with the
> classes which
> > contain CVEs removed. Even though hadoop never used those classes,
> they
> > regularly raised alerts on security scans and concern from users.
> Switching
> > to the forked project allows us to ship a secure logging framework.
> It will
> > complicate the builds of downstream maven/ivy/gradle projects which
> exclude
> > our log4j artifacts, as they need to cut the new dependency
> instead/as
> > well.
> >
> > See the release notes for details.
> >
> > This is my first release through the new docker build process, do
> please
> > validate artifact signing  to make sure it is good. I'll be trying
> builds
> > of downstream 

[jira] [Resolved] (HDFS-16255) RBF: Fix dead link to fedbalance document

2022-04-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16255.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk. Thank you [~groot] for your contribution.

> RBF: Fix dead link to fedbalance document
> -
>
> Key: HDFS-16255
> URL: https://issues.apache.org/jira/browse/HDFS-16255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>    Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a dead link in HDFSRouterFederation.md 
> (https://github.com/apache/hadoop/blob/e90c41af34ada9d7b61e4d5a8b88c2f62c7fea25/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md?plain=1#L517)
> {{../../../hadoop-federation-balance/HDFSFederationBalance.md}} should be 
> {{../../hadoop-federation-balance/HDFSFederationBalance.md}}.






[jira] [Resolved] (HDFS-16546) Fix UT TestOfflineImageViewer#testReverseXmlWithoutSnapshotDiffSection to branch branch-3.2

2022-04-22 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16546.
--
Fix Version/s: 3.2.4
   Resolution: Fixed

Committed to branch-3.2. Thank you [~cndaimin] for your contribution!

> Fix UT TestOfflineImageViewer#testReverseXmlWithoutSnapshotDiffSection to 
> branch branch-3.2
> ---
>
> Key: HDFS-16546
> URL: https://issues.apache.org/jira/browse/HDFS-16546
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.2.0
>Reporter: daimin
>Assignee: daimin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test fails due to incorrect layoutVersion.






[jira] [Resolved] (HDFS-16035) Remove DummyGroupMapping as it is no longer used anywhere

2022-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16035.
--
Resolution: Fixed

Thank you [~vjasani] for your report and thank you [~groot] for your 
contribution.

> Remove DummyGroupMapping as it is no longer used anywhere
> --
>
> Key: HDFS-16035
> URL: https://issues.apache.org/jira/browse/HDFS-16035
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: httpfs, test
>Reporter: Viraj Jasani
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: beginner, newbie, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> DummyGroupMapping class was added as part of HDFS-2657 and it was only used 
> in TestHttpFSServer as httpfs.groups.hadoop.security.group.mapping. However, 
> TestHttpFSServer no longer uses DummyGroupMapping, so the class can be 
> removed completely.






[jira] [Resolved] (HDFS-16536) TestOfflineImageViewer fails on branch-3.3

2022-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16536.
--
Fix Version/s: 3.3.4
   Resolution: Fixed

Committed to branch-3.3. Thank you [~groot] 

> TestOfflineImageViewer fails on branch-3.3
> --
>
> Key: HDFS-16536
> URL: https://issues.apache.org/jira/browse/HDFS-16536
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>    Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The NameNodeLayoutVersion -67 is not supported in Hadoop 3.3.x, so we need to 
> downgrade the version in the XML to -66.
> {code:java}
> [INFO] Running 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
> [ERROR] Tests run: 27, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 7.918 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
> [ERROR] 
> testReverseXmlWithoutSnapshotDiffSection(org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer)
>   Time elapsed: 0.009 s  <<< ERROR!
> java.io.IOException: Layout version mismatch.  This oiv tool handles layout 
> version -66, but the XML file has  -67.  Please either 
> re-generate the XML file with the proper layout version, or manually edit the 
> XML file to be usable with this version of the oiv tool.
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.readVersion(OfflineImageReconstructor.java:1699)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1753)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1846)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer.testReverseXmlWithoutSnapshotDiffSection(TestOfflineImageViewer.java:1209)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(

[jira] [Created] (HDFS-16536) TestOfflineImageViewer fails on branch-3.3

2022-04-10 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16536:


 Summary: TestOfflineImageViewer fails on branch-3.3
 Key: HDFS-16536
 URL: https://issues.apache.org/jira/browse/HDFS-16536
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


The NameNodeLayoutVersion -67 is not supported in Hadoop 3.3.x, so we need to 
downgrade the version in the XML to -66.
{code:java}
[INFO] Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
[ERROR] Tests run: 27, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.918 
s <<< FAILURE! - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
[ERROR] 
testReverseXmlWithoutSnapshotDiffSection(org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer)
  Time elapsed: 0.009 s  <<< ERROR!
java.io.IOException: Layout version mismatch.  This oiv tool handles layout 
version -66, but the XML file has  -67.  Please either 
re-generate the XML file with the proper layout version, or manually edit the 
XML file to be usable with this version of the oiv tool.
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.readVersion(OfflineImageReconstructor.java:1699)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1753)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1846)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer.testReverseXmlWithoutSnapshotDiffSection(TestOfflineImageViewer.java:1209)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) {code}
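The error message above suggests manually editing the XML file to the supported layout version. A hedged sketch of that manual edit, assuming a dump named `fsimage.xml` (a real dump would come from `hdfs oiv -p XML -i <fsimage> -o fsimage.xml`; the printf below just fabricates a minimal stand-in so the commands are self-contained):

```shell
# Sketch: downgrade the layout version in an OIV XML dump so the
# branch-3.3 oiv tool (which handles -66) accepts it.
# The XML here is a simplified stand-in for a real fsimage dump.
printf '<?xml version="1.0"?>\n<fsimage><version><layoutVersion>-67</layoutVersion><onDiskVersion>1</onDiskVersion></version></fsimage>\n' > fsimage.xml
sed -i 's|<layoutVersion>-67</layoutVersion>|<layoutVersion>-66</layoutVersion>|' fsimage.xml
grep -o '<layoutVersion>[-0-9]*</layoutVersion>' fsimage.xml
# prints: <layoutVersion>-66</layoutVersion>
```

The reconstructed image can then be produced with the ReverseXML processor, e.g. `hdfs oiv -p ReverseXML -i fsimage.xml -o fsimage.out`.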






[jira] [Resolved] (HDFS-16529) Remove unnecessary setObserverRead in TestConsistentReadsObserver

2022-04-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16529.
--
Fix Version/s: 3.4.0
   2.10.2
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, branch-3.2, and branch-2.10. Thank you 
[~wangzhaohui] for your contribution!

> Remove unnecessary setObserverRead in TestConsistentReadsObserver
> -
>
> Key: HDFS-16529
> URL: https://issues.apache.org/jira/browse/HDFS-16529
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: wangzhaohui
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HDFS-16527) Add global timeout rule for TestRouterDistCpProcedure

2022-04-05 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16527.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk. Thank you [~tomscut] for your contribution.

> Add global timeout rule for TestRouterDistCpProcedure
> -
>
> Key: HDFS-16527
> URL: https://issues.apache.org/jira/browse/HDFS-16527
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> As [Ayush Saxena|https://github.com/ayushtkn] mentioned 
> [here|https://github.com/apache/hadoop/pull/4009#pullrequestreview-925554297], 
> TestRouterDistCpProcedure failed many times because of timeout. I will add a 
> global timeout rule for it. This makes it easy to set the timeout.






Re: [E] [NOTICE] Attaching patches in JIRA issue no longer works

2022-03-31 Thread Akira Ajisaka
Hi Eric,

> If we're not using patches on JIRA anymore, why are we using JIRA at all?

JIRA issues contain useful information in their fields. We are leveraging
them in the development and release process.

> Using JIRA to then redirect to GitHub seems unintuitive and will fracture
the information between two different places.

Agreed that it's ideal to have all the information in one place, but the
pre commit jobs for JIRA have some limitations (
https://issues.apache.org/jira/browse/HADOOP-17798) and I don't want to
maintain the jobs anymore.

> I think this deserves some attention.

Yes, but the developers don't seem to look at the JIRA issue or read the
discussion thread. That's why I sent the [NOTICE] mail.

> we completely changed the way

Really? Most of the Hadoop developers currently use GitHub PR for code
review.

> I'm concerned that the decision was made without community
support/consensus and without a vote thread

The background is to reduce my workload in maintaining the precommit jobs
and to improve the process. I didn't think we needed a vote.
Anyway, the change is a two-way-door decision; I'm okay with reverting it
and starting a discussion & vote.

-Akira

On Fri, Apr 1, 2022 at 2:02 AM Eric Badger  wrote:

> I think this deserves some attention. More than just the question of JIRA
> vs GitHub Issues, I'm a little concerned that we completely changed the way
> we post code changes without a vote thread or even a discussion thread that
> had a clear outcome. The previous thread ([DISCUSS] Tips for improving
> productivity, workflow in the Hadoop project?) had many committers giving
> opinions on the matter, but it never came to conclusion and just sat there
> with no traffic for months. The way I read the previous thread was that
> committers were proposing that we clean out stale PRs, not that we turn
> off JIRA patches/Precommit builds.
>
> I'm not necessarily saying that we should go with patches vs GitHub PRs,
> but I'm concerned that the decision was made without community
> support/consensus and without a vote thread (not sure if that's necessary
> for this type of change or not).
>
> Eric
>
> On Mon, Mar 28, 2022 at 1:18 PM Eric Badger  wrote:
>
>> If we're not using patches on JIRA anymore, why are we using JIRA at all?
>> Why don't we just use GitHub Issues? Using JIRA to then redirect to GitHub
>> seems unintuitive and will fracture the information between two different
>> places. Do the conversations happen on JIRA or on a GitHub PR? Having
>> conversations on both is confusing and splitting information. I would
>> rather use JIRA with patches or GitHub Issues with PRs. I think anything in
>> between splits information and makes it hard to find.
>>
>> Eric
>>
>> On Sun, Mar 27, 2022 at 1:25 PM Akira Ajisaka 
>> wrote:
>>
>>> Dear Hadoop developers,
>>>
>>> I've disabled the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.
>>> If you attach a patch to a JIRA issue, the Jenkins precommit job won't
>>> run.
>>> Please use GitHub PR for code review.
>>>
>>> Background:
>>> - https://issues.apache.org/jira/browse/HADOOP-17798
>>> - https://lists.apache.org/thread/6g3n4wo3b3tpq2qxyyth3y8m9z4mcj8p
>>>
>>> Thanks and regards,
>>> Akira
>>>
>>


[NOTICE] Attaching patches in JIRA issue no longer works

2022-03-27 Thread Akira Ajisaka
Dear Hadoop developers,

I've disabled the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.
If you attach a patch to a JIRA issue, the Jenkins precommit job won't run.
Please use GitHub PR for code review.

Background:
- https://issues.apache.org/jira/browse/HADOOP-17798
- https://lists.apache.org/thread/6g3n4wo3b3tpq2qxyyth3y8m9z4mcj8p

Thanks and regards,
Akira


Re: [DISCUSS] Tips for improving productivity, workflow in the Hadoop project?

2022-03-27 Thread Akira Ajisaka
Hi all,

Let me try to disable the pre-commit job in JIRA:
https://issues.apache.org/jira/browse/HADOOP-17798
In the past discussion, I agreed with Masatake. Let's use JIRA for
background and design discussion, and GitHub PR for code review.

> My concern is that still leaves multiple places to look in order to get a
full picture of an issue.

I think it is inevitable. Most of the contributors now want to review
patches in GitHub instead of JIRA.
To get the full picture, I recommend checking JIRA first. Each JIRA issue
has auto-generated links to the corresponding GitHub PRs.
FYI, all the comments in the GitHub PRs are duplicated into the "Work Log" of
the JIRA.

Thanks,
Akira

On Mon, Aug 9, 2021 at 12:37 AM Brahma Reddy Battula 
wrote:

> @Wei-Chiu Chuang  looks this is not concluded yet...
> Can we move forward..?
>
> On Thu, Jul 15, 2021 at 11:09 PM Brahma Reddy Battula 
> wrote:
>
> >
> > I agree with Ahmed Hussein…Jira should not be used for number
> generation..
> >
> > We can always revisit the jira to see useful discussion at one place…
> >
> > @wei-chu, +1 on proposal for cleaning the PR’s..
> >
> >
> > On Thu, 15 Jul 2021 at 9:15 PM, epa...@apache.org 
> > wrote:
> >
> >>  > I usually use PR comments to discuss about the patch submitted.
> >> My concern is that still leaves multiple places to look in order to get
> a
> >> full picture of an issue.
> >> -Eric
> >>
> >> On Wednesday, July 14, 2021, 7:07:30 PM CDT, Masatake Iwasaki <
> >> iwasak...@oss.nttdata.co.jp> wrote:
> >>
> >>  > - recently, JIRA became some sort of a "number generator" with
> >> insufficient
> >> > description/details as the
> >> >developers and the reviewers spending more time discussing in the
> PR.
> >>
> >> JIRA issues contain useful information in the fields.
> >> We are leveraging them in development and release process.
> >>
> >> * https://yetus.apache.org/documentation/0.13.0/releasedocmaker/
> >> *
> >>
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12336122
> >>
> >> I usually use PR comments to discuss about the patch submitted.
> >> JIRA comments are used for background or design discussion before and
> >> after submitting PR.
> >> There would be no problem having no comment in minor/trivial JIRA
> issues.
> >>
> >>
> >> On 2021/07/14 23:50, Ahmed Hussein wrote:
> >> > Do you consider migrating Jira issues to Github issues?
> >> >
> >> > I am a little bit concerned that there are some committers who still
> >> prefer
> >> > Jira-precommits over GitHub PR
> >> > (P.S. I am not a committer).
> >> >
> >> > Their point is that Github-PR confuses them with discussions/comments
> >> being
> >> > in two places rather than one.
> >> >
> >> > Personally, I found several Github-PRs comments discussing the
> validity
> >> of
> >> > the feature/bug.
> >> > As a result:
> >> > - recently, JIRA became some sort of a "number generator" with
> >> insufficient
> >> > description/details as the
> >> >developers and the reviewers spending more time discussing in the
> PR.
> >> > - the relation between a single Jira and Github-PR is 1-to-M. In order
> >> to
> >> > find related discussions, the user may
> >> >need to visit every PR (that may include closed ones)
> >> >
> >> >
> >> >
> >> > On Wed, Jul 14, 2021 at 8:46 AM Steve Loughran
> >> 
> >> > wrote:
> >> >
> >> >> not sure about stale PR closing; when you've a patch which is still
> >> pending
> >> >> review it's not that fun to have it closed.
> >> >>
> >> >> maybe better to have review sessions. I recall many, many years ago
> >> >> attempts to try and catch up with all outstanding patch reviews.
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> On Wed, 14 Jul 2021 at 03:00, Akira Ajisaka 
> >> wrote:
> >> >>
> >> >>> Thank you Wei-Chiu for starting the discussion,
> >> >>>
> >> >>>> 3. JIRA security
> >> >>> I'm +1 to use private JIRA issues to handle vulnerabilities.
> >> >>>
> >> >>>> 5. Doc update
> &g

[jira] [Resolved] (HDFS-16355) Improve the description of dfs.block.scanner.volume.bytes.per.second

2022-03-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16355.
--
Fix Version/s: 3.4.0
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2. Thank you [~philipse] for your 
contribution!

> Improve the description of dfs.block.scanner.volume.bytes.per.second
> 
>
> Key: HDFS-16355
> URL: https://issues.apache.org/jira/browse/HDFS-16355
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, hdfs
>Affects Versions: 3.3.1
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The datanode block scanner will be disabled if 
> `dfs.block.scanner.volume.bytes.per.second` is configured to a value less 
> than or equal to zero; we can improve the description.
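For reference, a hedged hdfs-site.xml sketch of the property in question (1048576 is the documented default of 1 MiB/s; a value less than or equal to zero disables the volume scanner):

```xml
<!-- Illustrative hdfs-site.xml fragment; the value shown is the default. -->
<property>
  <name>dfs.block.scanner.volume.bytes.per.second</name>
  <value>1048576</value>
</property>
```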






Re: [VOTE] Release Apache Hadoop 3.2.3 - RC1

2022-03-26 Thread Akira Ajisaka
+1 (binding)

- Verified the signatures and the checksums
- Built from source using the "start-build-env.sh" from Ubuntu laptop.
- Setup pseudo cluster and ran some mapreduce jobs.
- Checked the Web UIs and the daemon logs.

Thanks,
Akira

On Sun, Mar 20, 2022 at 2:33 PM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> Hi all,
>
> Here's Hadoop 3.2.3 release candidate #1:
>
> The RC is available at:
>https://home.apache.org/~iwasakims/hadoop-3.2.3-RC1/
>
> The RC tag is at:
>https://github.com/apache/hadoop/releases/tag/release-3.2.3-RC1
>
> The Maven artifacts are staged at:
>https://repository.apache.org/content/repositories/orgapachehadoop-1342
>
> You can find my public key at:
>https://downloads.apache.org/hadoop/common/KEYS
>
> Please evaluate the RC and vote.
> The vote will be open for (at least) 5 days.
>
> Thanks,
> Masatake Iwasaki
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDFS-16523) Fix dependency error in hadoop-hdfs

2022-03-26 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16523:


 Summary: Fix dependency error in hadoop-hdfs
 Key: HDFS-16523
 URL: https://issues.apache.org/jira/browse/HDFS-16523
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
 Environment: M1 Pro Mac
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


The hadoop-hdfs build is failing on Docker with an M1 Mac.
{code}
[WARNING]
Dependency convergence error for
org.fusesource.hawtjni:hawtjni-runtime:jar:1.11:provided paths to
dependency are:
+-org.apache.hadoop:hadoop-hdfs:jar:3.4.0-SNAPSHOT
  +-org.openlabtesting.leveldbjni:leveldbjni-all:jar:1.8:compile
+-org.openlabtesting.leveldbjni:leveldbjni:jar:1.8:provided
  +-org.fusesource.hawtjni:hawtjni-runtime:jar:1.11:provided
and
+-org.apache.hadoop:hadoop-hdfs:jar:3.4.0-SNAPSHOT
  +-org.openlabtesting.leveldbjni:leveldbjni-all:jar:1.8:compile
+-org.fusesource.leveldbjni:leveldbjni-osx:jar:1.8:provided
  +-org.fusesource.leveldbjni:leveldbjni:jar:1.8:provided
+-org.fusesource.hawtjni:hawtjni-runtime:jar:1.9:provided
{code}
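One conventional way to address a dependency convergence error like the one above is to pin the conflicting artifact to a single version via dependencyManagement. This is an illustrative sketch only, not necessarily the fix applied in HDFS-16523:

```xml
<!-- Illustrative pom.xml fragment: force a single hawtjni-runtime version
     so both leveldbjni dependency paths converge. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.fusesource.hawtjni</groupId>
      <artifactId>hawtjni-runtime</artifactId>
      <version>1.11</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```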






Re: [DISCUSS] Hadoop 2.10.2 release

2022-03-13 Thread Akira Ajisaka
Thank you Masatake.

+1 to release 2.10.2.

On Fri, Mar 4, 2022 at 6:52 PM Masatake Iwasaki 
wrote:

> Hi team,
>
> There are over 170 fixed issues in branch-2.10 after the release of
> 2.10.1[1].
> Given that there is still a need for 2.10.x, I would like to release 2.10.2
> after HADOOP-18088[2] (migration to reload4j) is merged.
> I volunteer to take a release manager role as done for 2.10.1.
>
> Maybe we can declare EOL of branch-2.10 after the release,
> it should be discussed in another thread.
>
> [1]
> https://issues.apache.org/jira/issues/?jql=project%20in%20(YARN%2C%20HDFS%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20status%20%3D%20Resolved%20AND%20fixVersion%20%3D%202.10.2
> [2] https://issues.apache.org/jira/browse/HADOOP-18088
>
> Thanks,
> Masatake Iwasaki
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


[jira] [Resolved] (HDFS-16449) Fix hadoop web site release notes and changelog not available

2022-02-13 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16449.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR into trunk.

> Fix hadoop web site release notes and changelog not available
> -
>
> Key: HDFS-16449
> URL: https://issues.apache.org/jira/browse/HDFS-16449
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.1
>Reporter: guophilipse
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix the Hadoop web site so that the release notes and changelog are available






[jira] [Resolved] (HDFS-16443) Fix edge case where DatanodeAdminDefaultMonitor doubly enqueues a DatanodeDescriptor on exception

2022-01-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16443.
--
Fix Version/s: 3.2.4
   Resolution: Fixed

Backported to branch-3.2.

> Fix edge case where DatanodeAdminDefaultMonitor doubly enqueues a 
> DatanodeDescriptor on exception
> -
>
> Key: HDFS-16443
> URL: https://issues.apache.org/jira/browse/HDFS-16443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Kevin Wikant
>Assignee: Kevin Wikant
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> As part of the fix merged in https://issues.apache.org/jira/browse/HDFS-16303
> there was a rare edge case noticed in DatanodeAdminDefaultMonitor which 
> causes a DatanodeDescriptor to be added twice to the pendingNodes queue:
>  * a [datanode is unhealthy so it gets added to 
> "unhealthyDns"|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java#L227]
>  * an exception is thrown which causes [this catch 
> block|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java#L271]
>  to execute
>  * the [datanode is added to 
> "pendingNodes"|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java#L276]
>  * under certain conditions the [datanode can be added again from 
> "unhealthyDns" to "pendingNodes" 
> here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java#L296]
> This Jira is to track the one-line fix for this bug.






[jira] [Resolved] (HDFS-16303) Losing over 100 datanodes in state decommissioning results in full blockage of all datanode decommissioning

2022-01-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16303.
--
Resolution: Fixed

> Losing over 100 datanodes in state decommissioning results in full blockage 
> of all datanode decommissioning
> ---
>
> Key: HDFS-16303
> URL: https://issues.apache.org/jira/browse/HDFS-16303
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.1, 3.3.1
>Reporter: Kevin Wikant
>Assignee: Kevin Wikant
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 17h 50m
>  Remaining Estimate: 0h
>
> h2. Impact
> HDFS datanode decommissioning does not make any forward progress. For 
> example, the user adds X datanodes to the "dfs.hosts.exclude" file and all X 
> of those datanodes remain in state decommissioning forever without making any 
> forward progress towards being decommissioned.
> h2. Root Cause
> The HDFS Namenode class "DatanodeAdminManager" is responsible for 
> decommissioning datanodes.
> As per this "hdfs-site" configuration:
> {quote}Config = dfs.namenode.decommission.max.concurrent.tracked.nodes 
>  Default Value = 100
> The maximum number of decommission-in-progress datanodes that will be 
> tracked at one time by the namenode. Tracking a decommission-in-progress 
> datanode consumes additional NN memory proportional to the number of blocks 
> on the datanode. Having a conservative limit reduces the potential impact of 
> decommissioning a large number of nodes at once. A value of 0 means no limit 
> will be enforced.
> {quote}
> The Namenode will only actively track up to 100 datanodes for decommissioning 
> at any given time, as to avoid Namenode memory pressure.
> Looking into the "DatanodeAdminManager" code:
>  * a new datanode is only removed from the "tracked.nodes" set when it 
> finishes decommissioning
>  * a new datanode is only added to the "tracked.nodes" set if there is fewer 
> than 100 datanodes being tracked
> So in the event that there are more than 100 datanodes being decommissioned 
> at a given time, some of those datanodes will not be in the "tracked.nodes" 
> set until 1 or more datanodes in the "tracked.nodes" finishes 
> decommissioning. This is generally not a problem because the datanodes in 
> "tracked.nodes" will eventually finish decommissioning, but there is an edge 
> case where this logic prevents the namenode from making any forward progress 
> towards decommissioning.
> If all 100 datanodes in the "tracked.nodes" are unable to finish 
> decommissioning, then other datanodes (which may be able to be 
> decommissioned) will never get added to "tracked.nodes" and therefore will 
> never get the opportunity to be decommissioned.
> This can occur due the following issue:
> {quote}2021-10-21 12:39:24,048 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager 
> (DatanodeAdminMonitor-0): Node W.X.Y.Z:50010 is dead while in Decommission In 
> Progress. Cannot be safely decommissioned or be in maintenance since there is 
> risk of reduced data durability or data loss. Either restart the failed node 
> or force decommissioning or maintenance by removing, calling refreshNodes, 
> then re-adding to the excludes or host config files.
> {quote}
> If a Datanode is lost while decommissioning (for example if the underlying 
> hardware fails or is lost), then it will remain in state decommissioning 
> forever.
> If 100 or more Datanodes are lost while decommissioning over the Hadoop 
> cluster lifetime, then this is enough to completely fill up the 
> "tracked.nodes" set. With the entire "tracked.nodes" set filled with 
> datanodes that can never finish decommissioning, any datanodes added after 
> this point will never be able to be decommissioned because they will never be 
> added to the "tracked.nodes" set.
> In this scenario:
>  * the "tracked.nodes" set is filled with datanodes which are lost & cannot 
> be recovered (and can never finish decommissioning so they will never be 
> removed from the set)
>  * the actual live datanodes being decommissioned are enqueued waiting to 
> enter the "tracked.nodes" set (and are stuck waiting indefinitely)
> This means that no progress towards decommissioning the live datanodes will 
> be made unless the user takes the follo
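The tracking limit discussed in this issue is controlled by a single property. A hedged hdfs-site.xml sketch (100 is the default; per the documentation quoted above, 0 removes the limit, at the cost of more NN memory):

```xml
<!-- Illustrative hdfs-site.xml fragment for the decommission tracking
     limit. The value 0 means no limit is enforced. -->
<property>
  <name>dfs.namenode.decommission.max.concurrent.tracked.nodes</name>
  <value>0</value>
</property>
```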

[jira] [Resolved] (HDFS-16441) The following error occurs when accessing webhdfs in Kerberos security mode:Failed to obtain user group information: java.io.IOException: Security enabled but user not a

2022-01-28 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16441.
--
Resolution: Invalid

Hi [~xiaoqiuqiu] - In Apache Hadoop community, JIRA is used for the development 
and not for end-user questions. Please use 
[u...@hadoop.apache.org|mailto:u...@hadoop.apache.org] mailing list for 
end-user questions.

> The following error occurs when accessing webhdfs in Kerberos security 
> mode:Failed to obtain user group information: java.io.IOException: Security 
> enabled but user not authenticated by filter
> ---
>
> Key: HDFS-16441
> URL: https://issues.apache.org/jira/browse/HDFS-16441
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1
>Affects Versions: 3.3.1
>Reporter: xiaoqiuqiu
>Priority: Major
> Attachments: 1.png, 2.png, 3.png, 4.png
>
>
> The following error occurs when accessing webhdfs in Kerberos security 
> mode:Failed to obtain user group information: java.io.IOException: Security 
> enabled but user not authenticated by filter;
> When I use the browser to access, I still get the same error when the machine 
> has Kerberos authentication;
>  
> The code is the first in the comment area
> How to solve it?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-28 Thread Akira Ajisaka
Thank you Masatake and Chao!

On Fri, Jan 28, 2022 at 5:11 PM Chao Sun  wrote:

> Thanks Masatake and Akira for discovering the issue. I used
> `dev-support/bin/create-release` which runs `mvn deploy -DskipTests
> -Pnative -Pdist ...` in a separate container and somehow it didn't hit this
> issue.
>
> Let me cherry-pick https://issues.apache.org/jira/browse/YARN-10561 to
> branch-3.3.2 and start another RC then.
>
> Thanks,
> Chao
>
> On Fri, Jan 28, 2022 at 12:01 AM Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
>> Thanks, Akira.
>>
>> I confirmed that the issue is fixed in current branch-3.3 containing
>> YARN-10561.
>>
>> On 2022/01/28 14:25, Akira Ajisaka wrote:
>> > Hi Masatake,
>> >
>> > I faced the same error in a clean environment and
>> https://issues.apache.org/jira/browse/YARN-10561 <
>> https://issues.apache.org/jira/browse/YARN-10561> should fix this issue.
>> I'll rebase the patch shortly.
>> >
>> > By the way, I'm afraid there is no active maintainer in
>> hadoop-yarn-applications-catalog module. The module is for a sample
>> application catalog, so I think we can move the module to a separate
>> repository. Of course, it should be discussed separately.
>> >
>> > Thanks and regards,
>> > Akira
>> >
>> > On Fri, Jan 28, 2022 at 1:39 PM Masatake Iwasaki <
>> iwasak...@oss.nttdata.co.jp <mailto:iwasak...@oss.nttdata.co.jp>> wrote:
>> >
>> > Thanks for putting this up, Chao Sun.
>> >
>> > I got the following error when building the RC3 source tarball.
>> > It is reproducible even in the container launched by
>> `./start-build-env.sh`.
>> > There seems to be no relevant diff between release-3.3.2-RC0 and
>> release-3.3.2-RC3 (and trunk)
>> > under hadoop-yarn-applications-catalog-webapp.
>> >
>> > I guess developers having caches of related artifacts under ~/.m2
>> did not see this?
>> >
>> > ```
>> > $ mvn clean install -DskipTests -Pnative -Pdist
>> > ...
>> > [INFO] Installing node version v8.11.3
>> > [INFO] Downloading
>> https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-x64.tar.gz <
>> https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-x64.tar.gz> to
>> /home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
>> > [INFO] No proxies configured
>> > [INFO] No proxy was configured, downloading directly
>> > [INFO] Unpacking
>> /home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
>> into
>> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp
>> > [INFO] Copying node binary from
>> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp/node-v8.11.3-linux-x64/bin/node
>> to
>> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/node
>> > [INFO] Installed node locally.
>> > [INFO] Installing Yarn version v1.7.0
>> > [INFO] Downloading
>> https://github.com/yarnpkg/yarn/releases/download/v1.7.0/yarn-v1.7.0.tar.gz
>> <
>> https://github.com/yarnpkg/yarn/releases/download/v1.7.0/yarn-v1.7.0.tar.gz>
>> to
>> /home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz
>> > [INFO] No proxies configured
>> > [INFO] No proxy was configured, downloading directly
>> > [INFO] Unpacking
>> /home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz
>> into
>> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/yarn
>> > [INFO] Installed Yarn locally.
>> > [INFO]
>> > [INFO] --- frontend-maven-plugin:1.11.2:yarn (yarn install) @
>> hadoop-yarn-applications-catalog-webapp ---
>> > [INFO] testFailureIgnore property is ignored in non test phases
>> > [INFO] Running 'yarn ' in
>> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-c

[jira] [Resolved] (HDFS-16169) Fix TestBlockTokenWithDFSStriped#testEnd2End failure

2022-01-28 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16169.
--
Fix Version/s: 3.4.0
   3.3.3
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~secfree.teng] for your 
contribution! Nice catch.

> Fix TestBlockTokenWithDFSStriped#testEnd2End failure
> 
>
> Key: HDFS-16169
> URL: https://issues.apache.org/jira/browse/HDFS-16169
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.4.0
>Reporter: Hui Fei
>Assignee: secfree
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 141.936 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
> [ERROR] 
> testEnd2End(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)
>  Time elapsed: 28.325 s <<< FAILURE! java.lang.AssertionError: expected:<9> 
> but was:<10> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.junit.Assert.assertEquals(Assert.java:633) at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLocatedStripedBlocks(StripedFileTestUtil.java:344)
>  at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTestBalancerWithStripedFile(TestBalancer.java:1666)
>  at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.integrationTestWithStripedFile(TestBalancer.java:1601)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testEnd2End(TestBlockTokenWithDFSStriped.java:119)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
>  
> CI result is 
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3296/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt






Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Akira Ajisaka
Hi Masatake,

I faced the same error in a clean environment and
https://issues.apache.org/jira/browse/YARN-10561 should fix this issue.
I'll rebase the patch shortly.

By the way, I'm afraid there is no active maintainer in
hadoop-yarn-applications-catalog module. The module is for a sample
application catalog, so I think we can move the module to a separate
repository. Of course, it should be discussed separately.

Thanks and regards,
Akira

On Fri, Jan 28, 2022 at 1:39 PM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> Thanks for putting this up, Chao Sun.
>
> I got the following error when building the RC3 source tarball.
> It is reproducible even in the container launched by
> `./start-build-env.sh`.
> There seems to be no relevant diff between release-3.3.2-RC0 and
> release-3.3.2-RC3 (and trunk)
> under hadoop-yarn-applications-catalog-webapp.
>
> I guess developers having caches of related artifacts under ~/.m2 did not
> see this?
>
> ```
> $ mvn clean install -DskipTests -Pnative -Pdist
> ...
> [INFO] Installing node version v8.11.3
> [INFO] Downloading
> https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-x64.tar.gz to
> /home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] Unpacking
> /home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
> into
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp
> [INFO] Copying node binary from
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp/node-v8.11.3-linux-x64/bin/node
> to
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/node
> [INFO] Installed node locally.
> [INFO] Installing Yarn version v1.7.0
> [INFO] Downloading
> https://github.com/yarnpkg/yarn/releases/download/v1.7.0/yarn-v1.7.0.tar.gz
> to
> /home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] Unpacking
> /home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz
> into
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/yarn
> [INFO] Installed Yarn locally.
> [INFO]
> [INFO] --- frontend-maven-plugin:1.11.2:yarn (yarn install) @
> hadoop-yarn-applications-catalog-webapp ---
> [INFO] testFailureIgnore property is ignored in non test phases
> [INFO] Running 'yarn ' in
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target
> [INFO] yarn install v1.7.0
> [INFO] info No lockfile found.
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [INFO] error safe-stable-stringify@2.3.1: The engine "node" is
> incompatible with this module. Expected version ">=10".
> [INFO] error safe-stable-stringify@2.3.1: The engine "node" is
> incompatible with this module. Expected version ">=10".info Visit
> https://yarnpkg.com/en/docs/cli/install for documentation about this
> command.
> [INFO] error Found incompatible module
> [INFO]
> 
> ```
>
> Masatake Iwasaki
>
>
> On 2022/01/27 4:16, Chao Sun wrote:
> > Hi all,
> >
> > I've put together Hadoop 3.3.2 RC3 below:
> >
> > The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
> > The RC tag is at:
> > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
> > The Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1333
> >
> > You can find my public key at:
> > https://downloads.apache.org/hadoop/common/KEYS
> >
> > The only delta between this and RC2 is the addition of the following fix:
> >- HADOOP-18094. Disable S3A auditing by default.
> >
> > I've done the same tests as in RC2 and they look good:
> > - Ran all the unit tests
> > - Started a single node HDFS cluster and tested a few simple commands
> > - Ran all the tests in Spark using the RC2 artifacts
> >
> > Please evaluate the RC and vote, thanks!
> >
> > Best,
> > Chao
> >
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


Re: Possibility of using ci-hadoop.a.o for Nutch integration tests

2022-01-05 Thread Akira Ajisaka
(Adding builds@)

Hi Lewis,

Nutch is already using ci-builds.apache.org, so I think Nutch can continue
using it. ci-hadoop.apache.org provides almost the same functionality as
ci-builds.apache.org and there is no non-production Hadoop cluster running
there. Therefore moving to ci-hadoop does not make sense.

Short history: in the past there were some Jenkins hosts that were labeled
for Hadoop and its related projects. After the migration to CloudBees, the
labeled hosts were moved under ci-hadoop.apache.org.

Thanks,
Akira


On Thu, Jan 6, 2022 at 2:20 PM lewis john mcgibbney 
wrote:

> Thank you for the response and for directing the conversation to the
> correct places.
> I may have misunderstood what ci-hadoop.apache.org actually is. We are
> looking for a non-production Hadoop cluster which we can use to simulate
> Nutch jobs. I am not sure if this is what ci-hadoop.apache.org actually
> is...
> Instead it looks like lots of compute resources used to perform Jenkins
> CI/CD tasks for Hadoop and associated projects rather than test things
> on-top of Hadoop (and associated projects).
> Any clarity on what ci-hadoop.apache.org actually is would be greatly
> appreciated.
>
> Let me also clarify my language, rather than have the integration tests run
> on every PR, we could trigger the integration tests to be run by tagging a
> Github bot i.e., "@nutchbot integration-test". Similar to what is done with
> Dependabot or conda-forge for anyon familiar with those mechanisms.
>
> Thanks for any advice or comments.
> lewismc
>
> On Wed, Jan 5, 2022 at 9:05 PM Ayush Saxena  wrote:
>
> > Moved to Dev lists.
> >
> > Not sure about this though:
> >  when a PR is submitted to Nutch project it will run some MR job in
> Hadoop
> > CI.
> >
> > Whatever that PR requires should run as part of Nutch Infra. Why in
> Hadoop
> > CI?
> > Our CI is already loaded with our own workloads.
> > If by any chance the above assertion gets a pass, then secondly we have
> > very few people managing work related to CI and Infra. I don't
> > think most of the people have context or say in the Nutch project,
> > or the bandwidth to fix stuff if it gets broken.
> >
> > Just my thoughts. Looped in the dev lists, if others have any feedback.
> As
> > for the process, this would require a consensus from the Hadoop PMC
> >
> > -Ayush
> >
> > > On 06-Jan-2022, at 7:02 AM, lewis john mcgibbney 
> > wrote:
> > >
> > > Hi general@,
> > >
> > > Not sure if this is the correct mailing list. Please redirect me if
> there
> > > is a more suitable location. Thank you
> > >
> > > I am PMC over on the Nutch project (https://nutch.apache.org). I would
> > like
> > > to investigate whether we can build an integration testing capability
> for
> > > the project. This would involve running a Nutch integration test suite
> > > (collection of MR jobs) in a Hadoop CI environment. For example
> whenever
> > a
> > > pull request is submitted to the Nutch project. This could easily be
> > > automated through Jenkins.
> > >
> > > I’m not sure if this is something the Hadoop PMC would consider. Thank
> > you
> > > for the consideration.
> > >
> > > lewismc
> > > --
> > > http://home.apache.org/~lewismc/
> > > http://people.apache.org/keys/committer/lewismc
> >
>
>
> --
> http://home.apache.org/~lewismc/
> http://people.apache.org/keys/committer/lewismc
>


[jira] [Resolved] (HDFS-16409) Fix typo: testHasExeceptionsReturnsCorrectValue -> testHasExceptionsReturnsCorrectValue

2022-01-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16409.
--
Fix Version/s: 3.4.0
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2. Thanks [~groot].

> Fix typo: testHasExeceptionsReturnsCorrectValue -> 
> testHasExceptionsReturnsCorrectValue
> ---
>
> Key: HDFS-16409
> URL: https://issues.apache.org/jira/browse/HDFS-16409
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Fixing typo testHasExeceptionsReturnsCorrectValue to 
> testHasExceptionsReturnsCorrectValue in 
> {code:java}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestAddBlockPoolException.java{code}






[jira] [Resolved] (HDFS-16395) Remove useless NNThroughputBenchmark#dummyActionNoSynch()

2021-12-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16395.
--
Fix Version/s: 3.4.0
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2. Thank you [~jianghuazhu] for 
your contribution!

> Remove useless NNThroughputBenchmark#dummyActionNoSynch()
> -
>
> Key: HDFS-16395
> URL: https://issues.apache.org/jira/browse/HDFS-16395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> NNThroughputBenchmark#dummyActionNoSynch() doesn't seem to be used anywhere, 
> so it is recommended to delete it.






Re: [VOTE] Release Apache Hadoop 3.3.2 - RC0

2021-12-23 Thread Akira Ajisaka
Hi Chao,

> For this, I just need to update
the hadoop-project/src/site/markdown/index.md.vm and incorporate notable
changes made in 3.3.1/3.3.2, is that correct?

> ARM binaries
There was a discussion [1] which concluded that ARM binaries are optional. The
release vote is only for source code, not binary.

BTW, I want to include https://issues.apache.org/jira/browse/YARN-11053 to
3.3.2 because it is a regression in 3.3.x releases.

[1]: https://lists.apache.org/thread/ghcgq5s745zs5cc84n4owfphf1h21zz2

Thanks and regards,
Akira



On Wed, Dec 15, 2021 at 7:56 AM Chao Sun  wrote:

> Thanks all for taking a look! looks like we need another RC addressing the
> following issues.
>
> > 1. the overview page of the doc is for the Hadoop 3.0 release. It would
> be best to base the doc on top of Hadoop 3.3.0 overview page. (it's a miss
> on my part... The overview page of 3.3.1 wasn't updated)
>
> For this, I just need to update
> the hadoop-project/src/site/markdown/index.md.vm and incorporate notable
> changes made in 3.3.1/3.3.2, is that correct? looks like the file hasn't
> been touched for a while.
>
> > 2. ARM binaries is not included. For the 3.3.1 release, I had to run the
> create release script on an ARM machine separately to create the binary
> tarball.
>
> Hmm this might be challenging for me. Could you share the steps of how you
> did it? especially where did you get an ARM machine.
>
> > 3. the jdiff version
>
> https://github.com/apache/hadoop/blob/branch-3.3.2/hadoop-project-dist/pom.xml#L137
>
> I just need to backport this commit:
>
> https://github.com/apache/hadoop/commit/a77bf7cf07189911da99e305e3b80c589edbbfb5
> to branch-3.3.2 (and potentially branch-3.3)?
>
> > The 3.3.1 binary tarball is 577mb. The 3.3.2 RC0 is 608mb. I'm curious
> what are added.
>
> The difference is mostly in aws-java-sdk-bundle jar: 3.3.1 uses
>
> https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle/1.11.901
> while 3.3.2 uses
>
> https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle/1.11.1026
> .
> The difference is ~32.5mb.
>
> Chao
>
> On Tue, Dec 14, 2021 at 5:25 AM Steve Loughran 
> wrote:
>
> > I'll do my best to test this; I'm a bit broken right now.
> >
> > I think we should mention in the release notes that the version of
> > log4j included in this and all previous releases is not vulnerable, and
> > provide a list plus links to any that have been fixed.
> >
> > On Fri, 10 Dec 2021 at 02:09, Chao Sun  wrote:
> >
> >> Hi all,
> >>
> >> Sorry for the long delay. I've prepared RC0 for Hadoop 3.3.2 below:
> >>
> >> The RC is available at:
> >> http://people.apache.org/~sunchao/hadoop-3.3.2-RC0/
> >> The RC tag is at:
> >> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC0
> >> The Maven artifacts are staged at:
> >>
> https://repository.apache.org/content/repositories/orgapachehadoop-1330/
> >>
> >> You can find my public key at: https://people.apache.org/~sunchao/KEYS
> >>
> >> Please evaluate the RC and vote.
> >>
> >> Thanks,
> >> Chao
> >>
> >
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC0

2021-12-12 Thread Akira Ajisaka
Thank you Chao Sun for preparing the RC.
I've added your public key to the Hadoop KEYS file (
https://downloads.apache.org/hadoop/common/KEYS).

Thanks,
Akira


On Fri, Dec 10, 2021 at 11:10 AM Chao Sun  wrote:

> Hi all,
>
> Sorry for the long delay. I've prepared RC0 for Hadoop 3.3.2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC0/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC0
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1330/
>
> You can find my public key at: https://people.apache.org/~sunchao/KEYS
>
> Please evaluate the RC and vote.
>
> Thanks,
> Chao
>


[jira] [Resolved] (HDFS-16324) Fix error log in BlockManagerSafeMode

2021-12-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16324.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR into trunk.

> Fix error log in BlockManagerSafeMode
> -
>
> Key: HDFS-16324
> URL: https://issues.apache.org/jira/browse/HDFS-16324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> If `recheckInterval` is set to an invalid value, a warning log will be 
> output, but the message seems improper; we can improve it.






[jira] [Resolved] (HDFS-16314) Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable

2021-12-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16314.
--
Fix Version/s: 3.4.0
   3.3.3
   Resolution: Fixed

Committed to trunk and branch-3.3. Thanks [~haiyang Hu] for your contribution!

> Support to make 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
> -
>
> Key: HDFS-16314
> URL: https://issues.apache.org/jira/browse/HDFS-16314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Consider making 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable 
> to allow a rapid rollback in case this feature (HDFS-16076) causes unexpected 
> problems in a production environment.






[jira] [Resolved] (HDFS-16171) De-flake testDecommissionStatus

2021-11-25 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16171.
--
Fix Version/s: 3.2.4
   Resolution: Fixed

Merged PR #3720 into branch-3.2.

> De-flake testDecommissionStatus
> ---
>
> Key: HDFS-16171
> URL: https://issues.apache.org/jira/browse/HDFS-16171
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> testDecommissionStatus keeps failing intermittently.
> {code:java}
> [ERROR] 
> testDecommissionStatus(org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor)
>   Time elapsed: 3.299 s  <<< FAILURE!
> java.lang.AssertionError: Unexpected num under-replicated blocks expected:<4> 
> but was:<3>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:169)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor.testDecommissionStatus(TestDecommissioningStatusWithBackoffMonitor.java:136)
> {code}






[jira] [Resolved] (HDFS-16334) Correct NameNode ACL description

2021-11-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16334.
--
Fix Version/s: 3.4.0
   3.3.3
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~philipse] for your contribution.

> Correct NameNode ACL description
> 
>
> Key: HDFS-16334
> URL: https://issues.apache.org/jira/browse/HDFS-16334
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> `dfs.namenode.acls.enabled` is set to `true` by default after HDFS-13505, 
> so we can improve the description.






[jira] [Resolved] (HDFS-16328) Correct disk balancer param desc

2021-11-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16328.
--
Fix Version/s: 3.4.0
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2.

> Correct disk balancer param desc
> 
>
> Key: HDFS-16328
> URL: https://issues.apache.org/jira/browse/HDFS-16328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, hdfs
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> `dfs.disk.balancer.enabled` is enabled by default after HDFS-13153, so we can 
> improve the doc to avoid confusion.






[jira] [Resolved] (HDFS-16330) Fix incorrect placeholder for Exception logs in DiskBalancer

2021-11-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16330.
--
Fix Version/s: 3.4.0
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2.

> Fix incorrect placeholder for Exception logs in DiskBalancer
> 
>
> Key: HDFS-16330
> URL: https://issues.apache.org/jira/browse/HDFS-16330
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HDFS-16329) Fix log format for BlockManager

2021-11-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16329.
--
Fix Version/s: 3.4.0
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2. Thank you [~tomscut] for your 
contribution!

> Fix log format for BlockManager
> ---
>
> Key: HDFS-16329
> URL: https://issues.apache.org/jira/browse/HDFS-16329
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Fix log format for BlockManager.






[jira] [Resolved] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException

2021-11-01 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-14240.
--
  Assignee: (was: Ranith Sardar)
Resolution: Duplicate

Closing as duplicate.

> blockReport test in NNThroughputBenchmark throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: HDFS-14240
> URL: https://issues.apache.org/jira/browse/HDFS-14240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I run a blockReport test with NNThroughputBenchmark, 
> BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.
> Digging into the code:
> {code:java}
> for (DatanodeInfo dnInfo : loc.getLocations()) {
>   int dnIdx = dnInfo.getXferPort() - 1;
>   datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
> }
> {code}
> The problem is here: the length of the datanodes array is determined by the 
> "-datanodes" or "-threads" arguments, but dnIdx = dnInfo.getXferPort() - 1 is 
> derived from a random port, so it can exceed the array bounds.






[jira] [Resolved] (HDFS-16269) [Fix] Improve NNThroughputBenchmark#blockReport operation

2021-11-01 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16269.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk. Thank you [~jianghuazhu] for your contribution.

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> -
>
> Key: HDFS-16269
> URL: https://issues.apache.org/jira/browse/HDFS-16269
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, the 
> following exception is thrown.
> Command used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs 
>  -op blockReport -datanodes 3 -reports 1
> The exception information:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 
> 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark: 
> java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Checked the code and found that the problem appears here:
> {code:java}
> private ExtendedBlock addBlocks(String fileName, String clientName)
>     throws IOException {
>   for (DatanodeInfo dnInfo : loc.getLocations()) {
>     int dnIdx = dnInfo.getXferPort() - 1;
>     datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
>   }
> }
> {code}
> It can be seen from this that dnInfo.getXferPort() returns a port number and 
> should not be used as an array index.
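The root cause generalizes: a sparse identifier (here a transfer port) must be mapped to a dense array index rather than used as one. A hedged sketch of the safe pattern follows; the names and structure are illustrative, not the actual HDFS-16269 patch.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mapping sparse identifiers (e.g. datanode transfer ports) to
// dense array indices, instead of indexing an array by the port itself.
public class PortIndexDemo {

    private final Map<Integer, Integer> portToIndex = new HashMap<>();
    private final String[] datanodes;

    PortIndexDemo(int numDatanodes) {
        this.datanodes = new String[numDatanodes];
    }

    // Registers a datanode's port and returns its dense index; repeated
    // registrations of the same port return the same index.
    int register(int xferPort) {
        int idx = portToIndex.computeIfAbsent(xferPort, p -> portToIndex.size());
        datanodes[idx] = "dn:" + xferPort;
        return idx;
    }

    // Buggy pattern from the report: datanodes[xferPort - 1] overflows
    // whenever the port exceeds the array length (e.g. port 50010, 3 nodes).
    static boolean wouldOverflow(int xferPort, int numDatanodes) {
        return xferPort - 1 >= numDatanodes;
    }

    public static void main(String[] args) {
        PortIndexDemo demo = new PortIndexDemo(3);
        // Ports like 50010/50011/50012 map safely to indices 0..2.
        System.out.println(demo.register(50010) + " "
            + demo.register(50011) + " " + demo.register(50012)); // 0 1 2
        System.out.println(wouldOverflow(50010, 3));              // true
    }
}
```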






[jira] [Resolved] (HDFS-16257) [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver

2021-10-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16257.
--
Fix Version/s: 2.10.2
   Resolution: Fixed

Committed to branch-2.10.

> [HDFS] [RBF] Guava cache performance issue in Router MountTableResolver
> ---
>
> Key: HDFS-16257
> URL: https://issues.apache.org/jira/browse/HDFS-16257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.10.1
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Branch 2.10.1 uses Guava 11.0.2, which has a bug that affects cache 
> performance, as mentioned in HDFS-13821.
> Since upgrading the Guava version would affect too much, this ticket adds a 
> configuration setting when initializing the cache to work around this issue.
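The workaround pattern described — reading a cache tuning parameter from configuration at initialization time instead of upgrading the dependency — can be sketched with a stdlib stand-in. The key name and default below are hypothetical, not the actual HDFS-16257 setting.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Stand-in showing how a cache tuning knob can be read from configuration at
// initialization time, so an affected default can be overridden without a
// dependency upgrade. The key name below is hypothetical.
public class ConfigurableCacheDemo {

    static final String CACHE_SIZE_KEY = "router.mount-table.cache.max-size";
    static final int CACHE_SIZE_DEFAULT = 1024;

    // Simple LRU cache built on LinkedHashMap's access-order mode; the
    // maximum size comes from configuration, falling back to the default.
    static <K, V> Map<K, V> newLruCache(Properties conf) {
        int maxSize = Integer.parseInt(
            conf.getProperty(CACHE_SIZE_KEY, String.valueOf(CACHE_SIZE_DEFAULT)));
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;   // evict once the cap is exceeded
            }
        };
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(CACHE_SIZE_KEY, "2");   // override the default
        Map<String, String> cache = newLruCache(conf);
        cache.put("/a", "ns0");
        cache.put("/b", "ns1");
        cache.put("/c", "ns2");                  // evicts the eldest entry
        System.out.println(cache.size());        // 2
    }
}
```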






[jira] [Resolved] (HDFS-16276) RBF: Remove the useless configuration of rpc isolation in md

2021-10-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16276.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk. Thank you [~zhuxiangyi] for your contribution.

> RBF:  Remove the useless configuration of rpc isolation in md
> -
>
> Key: HDFS-16276
> URL: https://issues.apache.org/jira/browse/HDFS-16276
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Xiangyi Zhu
>Assignee: Xiangyi Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The *dfs.federation.router.fairness.enable* configuration is not used in the 
> code, but it is present in the md file, so we should delete it.






[jira] [Resolved] (HDFS-16258) HDFS-13671 breaks TestBlockManager in branch-3.2

2021-10-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16258.
--
Resolution: Cannot Reproduce

It passed in the latest qbt job. Closing.
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/15/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/

Please feel free to reopen this if the test fails in a specific environment.

> HDFS-13671 breaks TestBlockManager in branch-3.2
> 
>
> Key: HDFS-16258
> URL: https://issues.apache.org/jira/browse/HDFS-16258
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.3
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>
> TestBlockManager in branch-3.2 has two failed tests: 
> * testDeleteCorruptReplicaWithStatleStorages
> * testBlockManagerMachinesArray
> Looks like broken by HDFS-13671. CC: [~brahmareddy]
> Branch-3.3 seems fine.






Re: Hadoop-3.2.3 Release Update

2021-10-05 Thread Akira Ajisaka
Hi Brahma,

How is the release process going? Is there any blocker for the RC?

-Akira

On Wed, Sep 22, 2021 at 7:37 PM Xiaoqiao He  wrote:

> Hi Brahma,
>
> The feature 'BPServiceActor processes commands from NameNode
> asynchronously' is now ready for both branch-3.2 and branch-3.2.3. While
> cherry-picking there was only a minor conflict, so I checked it in directly.
> BTW, I ran some unit tests and built a pseudo cluster to verify it, and it
> seems to work fine.
> FYI.
>
> Regards,
> - He Xiaoqiao
>
> On Thu, Sep 16, 2021 at 10:52 PM Brahma Reddy Battula 
> wrote:
>
>> Please go ahead. Let me know any help required on review.
>>
>> On Tue, Sep 14, 2021 at 6:57 PM Xiaoqiao He  wrote:
>>
>>> Hi Brahma,
>>>
>>> I plan to involve HDFS-14997 and related JIRAs if possible. I have
>>> resolved the conflict and verified them locally.
>>> It will include: HDFS-14997 HDFS-15075 HDFS-15651 HDFS-15113.
>>> I would like to hear more responses on whether we have enough time to
>>> wait for it to be ready.
>>> Thanks.
>>>
>>> Best Regards,
>>> - He Xiaoqiao
>>>
>>> On Tue, Sep 14, 2021 at 3:39 PM Xiaoqiao He  wrote:
>>>
>>>> Hi Brahma, HDFS-15160 has checked in branch-3.2 & branch-3.2.3. FYI.
>>>>
>>>> On Tue, Sep 14, 2021 at 3:52 AM Brahma Reddy Battula 
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> Waiting for the following jira to be committed to hadoop-3.2.3. This can
>>>>> mostly be done by this week, then I will try to create the RC next if
>>>>> there is no objection.
>>>>>
>>>>> https://issues.apache.org/jira/browse/HDFS-15160
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Aug 16, 2021 at 2:22 PM Brahma Reddy Battula <
>>>>> bra...@apache.org>
>>>>> wrote:
>>>>>
>>>>> > @Akira Ajisaka   and @Masatake Iwasaki
>>>>> > 
>>>>> > Looks like these are all build-related issues when trying with bigtop.
>>>>> > We can discuss and prioritize them. Will connect with you guys.
>>>>> >
>>>>> > On Mon, Aug 16, 2021 at 1:43 PM Masatake Iwasaki <
>>>>> > iwasak...@oss.nttdata.co.jp> wrote:
>>>>> >
>>>>> >> >> -
>>>>> >>
>>>>> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch2-exclude-spotbugs-annotations.diff
>>>>> >> >
>>>>> >> > This is for building hadoop-3.2.2 against zookeeper-3.4.14.
>>>>> >> > We do not usually see the issue since branch-3.2 uses
>>>>> >> > zookeeper-3.4.13, while it would be harmless to add the exclusion
>>>>> >> > even for zookeeper-3.4.13.
>>>>> >>
>>>>> >> I filed HADOOP-17849 for this.
>>>>> >>
>>>>> >> On 2021/08/16 12:02, Masatake Iwasaki wrote:
>>>>> >> > Thanks for bringing this up, Akira. Let me explain some
>>>>> background.
>>>>> >> >
>>>>> >> >
>>>>> >> >> -
>>>>> >>
>>>>> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch2-exclude-spotbugs-annotations.diff
>>>>> >> >
>>>>> >> > This is for building hadoop-3.2.2 against zookeeper-3.4.14.
>>>>> >> > We do not usually see the issue since branch-3.2 uses
>>>>> >> > zookeeper-3.4.13, while it would be harmless to add the exclusion
>>>>> >> > even for zookeeper-3.4.13.
>>>>> >> >
>>>>> >> >
>>>>> >> >> -
>>>>> >>
>>>>> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch3-fix-broken-dir-detection.diff
>>>>> >> >> -
>>>>> >>
>>>>> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch5-fix-kms-shellprofile.diff
>>>>> >> >> -
>>>>> >>
>>>>> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch6-fix-httpfs-sh.diff
>>>>> >> >
>>>&

[jira] [Created] (HDFS-16256) Minor fixes in HDFS Fedbalance document

2021-10-05 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16256:


 Summary: Minor fixes in HDFS Fedbalance document
 Key: HDFS-16256
 URL: https://issues.apache.org/jira/browse/HDFS-16256
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Akira Ajisaka


1. "Command submit has 4 options:" is not true. Now it actually has 6 options. 
It should be updated to something like "Command submit has the following 
options".

2. 
{code}
### Configuration Options

{code}
In the above code, the "" is not needed.






[jira] [Created] (HDFS-16255) Fix dead link to fedbalance document

2021-10-05 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16255:


 Summary: Fix dead link to fedbalance document
 Key: HDFS-16255
 URL: https://issues.apache.org/jira/browse/HDFS-16255
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Akira Ajisaka


There is a dead link in HDFSRouterFederation.md 
(https://github.com/apache/hadoop/blob/e90c41af34ada9d7b61e4d5a8b88c2f62c7fea25/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md?plain=1#L517)

{{../../../hadoop-federation-balance/HDFSFederationBalance.md}} should be 
{{../../hadoop-federation-balance/HDFSFederationBalance.md}}.






Re: [DISCUSS] Migrate to Yetus Interface classification annotations

2021-09-28 Thread Akira Ajisaka
Hi Masatake,

The problem comes from the removal of com.sun.tools.doclets.* packages in
Java 10.
In Apache Hadoop, I removed the doclet support for filtering javadocs when
the environment is Java 10 or upper.
https://issues.apache.org/jira/browse/HADOOP-15304

Thanks,
Akira

On Tue, Sep 28, 2021 at 10:27 AM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> > In particular, there has been an outstanding problem with doclet support
> for filtering javadocs by annotation since JDK9 came out.
>
> Could you give me a pointer to relevant Yetus JIRA or ML thread?
>
> On 2021/09/28 1:17, Sean Busbey wrote:
> > I think consolidating on a common library and tooling for defining API
> expectations for Hadoop would be great.
> >
> > Unfortunately, the Apache Yetus community recently started a discussion
> around dropping their maintenance of the audience annotations codebase[1]
> due to lack of community interest. In particular, there has been an
> outstanding problem with doclet support for filtering javadocs by
> annotation since JDK9 came out.
> >
> > I think that means a necessary first step here would be to determine if
> we have contributors willing to show up over in that project to get things
> into a good state for future JDK adoption.
> >
> >
> >
> > [1]:
> > https://s.apache.org/ybdl6
> > "[DISCUSS] Drop JDK8; audience-annotations" from d...@yetus.apache.org
> >
> >> On Sep 27, 2021, at 2:46 AM, Viraj Jasani  wrote:
> >>
> >> Since the early days, Hadoop has provided Interface classification
> >> annotations to represent the scope and stability for downstream
> >> applications to select Hadoop APIs carefully. After some time, these
> >> annotations (InterfaceAudience and InterfaceStability) have been
> migrated
> >> to Apache Yetus. As of today, with increasing number of Hadoop ecosystem
> >> applications using (or starting to use) Yetus stability annotations for
> >> their own downstreamers, we should also consider using IA/IS annotations
> >> provided by *org.apache.yetus.audience *directly in our codebase and
> retire
> >> our *org.apache.hadoop.classification* package for the better
> separation of
> >> concern and single source.
> >>
> >> I believe we can go with this migration to maintain compatibility for
> >> Hadoop downstreamers:
> >>
> >>1. In Hadoop trunk (3.4.0+ releases), replace all usages of o.a.h.c
> >>stability annotations with o.a.y.a annotations.
> >>2. Deprecate o.a.h.c annotations, and provide deprecation warning
> that
> >>we will remove o.a.h.c in 4.0.0 (or 5.0.0) release and the only
> source for
> >>these annotations should be o.a.y.a.
> >>
> >> Any thoughts?
> >
> >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>
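For readers unfamiliar with the annotations under discussion, the classification pattern looks like the following self-contained sketch. The nested InterfaceAudience annotations here are local stand-ins for the real org.apache.hadoop.classification / org.apache.yetus.audience classes, which share this shape, so the proposed migration is largely an import swap.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Self-contained stand-in illustrating the interface-classification pattern.
// The annotations below are local stand-ins, not the real Yetus classes.
public class AudienceAnnotationDemo {

    public static class InterfaceAudience {
        @Documented @Retention(RetentionPolicy.RUNTIME)
        public @interface Public {}

        @Documented @Retention(RetentionPolicy.RUNTIME)
        public @interface Private {}
    }

    // A downstream-visible API would carry the Public marker, which tools
    // (e.g. javadoc doclets) can then filter on.
    @InterfaceAudience.Public
    public static class StableApi {}

    public static void main(String[] args) {
        boolean isPublic = StableApi.class
            .isAnnotationPresent(InterfaceAudience.Public.class);
        System.out.println(isPublic);   // true
    }
}
```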


[jira] [Resolved] (HDFS-15864) TestFsDatasetImpl#testDnRestartWithHardLink fails intermittently

2021-09-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-15864.
--
Resolution: Duplicate

Closing as duplicate of HDFS-16213. Thank you [~touchida] for your report.

> TestFsDatasetImpl#testDnRestartWithHardLink fails intermittently
> 
>
> Key: HDFS-15864
> URL: https://issues.apache.org/jira/browse/HDFS-15864
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Toshihiko Uchida
>Priority: Minor
>  Labels: flaky-test
>
> This unit test failed in https://github.com/apache/hadoop/pull/2726 due to an 
> AssertionError.
> {code}
> [ERROR] 
> testDnRestartWithHardLink(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl)
>   Time elapsed: 1.452 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testDnRestartWithHardLink(TestFsDatasetImpl.java:1377)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The failure occurred at the following first assertion.
> {code}
>   cluster.restartDataNode(0);
>   cluster.waitDatanodeFullyStarted(cluster.getDataNodes().get(0), 6);
>   cluster.triggerBlockReports();
>   assertTrue(Files.exists(Paths.get(newReplicaInfo.getBlockURI(;
>   assertTrue(Files.exists(Paths.get(oldReplicaInfo.getBlockURI(;
> {code}
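Assertions like the two above can race the asynchronous block report after a restart; a common deflaking pattern is to poll the condition with a timeout instead of asserting immediately. Hadoop's test utility GenericTestUtils.waitFor works this way; below is a hedged stdlib sketch of the idea.

```java
import java.util.function.BooleanSupplier;

// Sketch of a poll-with-timeout helper for assertions that race asynchronous
// events (e.g. block reports after a datanode restart).
public class WaitForDemo {

    // Polls the condition every intervalMs until it holds or timeoutMs elapses.
    static boolean waitFor(BooleanSupplier check, long intervalMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return check.getAsBoolean();   // one final check at the deadline
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // The condition becomes true ~50 ms in, like a replica file appearing
        // once the restarted datanode has processed its block report.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50, 10, 1000);
        System.out.println(ok);   // true
    }
}
```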






[jira] [Reopened] (HDFS-16219) RBF: Set default map tasks and bandwidth in RouterFederationRename

2021-09-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HDFS-16219:
--

Thank you [~vjasani] for your comment. Reopened this.

> RBF: Set default map tasks and bandwidth in RouterFederationRename
> --
>
> Key: HDFS-16219
> URL: https://issues.apache.org/jira/browse/HDFS-16219
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: Hadoop 3.3.0 with patches
>    Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>
> If dfs.federation.router.federation.rename.map or 
> dfs.federation.router.federation.rename.bandwidth is not set, DFSRouter fails 
> to launch.
> This issue is similar to HDFS-16217.






[jira] [Resolved] (HDFS-16219) RBF: Set default map tasks and bandwidth in RouterFederationRename

2021-09-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16219.
--
Resolution: Duplicate

Fixed as part of HDFS-16217.

> RBF: Set default map tasks and bandwidth in RouterFederationRename
> --
>
> Key: HDFS-16219
> URL: https://issues.apache.org/jira/browse/HDFS-16219
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: Hadoop 3.3.0 with patches
>    Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>
> If dfs.federation.router.federation.rename.map or 
> dfs.federation.router.federation.rename.bandwidth is not set, DFSRouter fails 
> to launch.
> This issue is similar to HDFS-16217.






[jira] [Resolved] (HDFS-16217) RBF: Set default value of hdfs.fedbalance.procedure.scheduler.journal.uri

2021-09-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16217.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR.

> RBF: Set default value of hdfs.fedbalance.procedure.scheduler.journal.uri
> -
>
> Key: HDFS-16217
> URL: https://issues.apache.org/jira/browse/HDFS-16217
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: Hadoop 3.3.0 with patches
>    Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> When dfs.federation.router.federation.rename.option is set to DISTCP and 
> hdfs.fedbalance.procedure.scheduler.journal.uri is not set, DFSRouter fails 
> to launch.
> {quote}
> 2021-09-08 15:39:11,818 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> java.lang.NullPointerException
> at java.base/java.net.URI$Parser.parse(URI.java:3104)
> at java.base/java.net.URI.<init>(URI.java:600)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.initRouterFedRename(RouterRpcServer.java:444)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.<init>(RouterRpcServer.java:419)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createRpcServer(Router.java:391)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:188)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> {quote}
> hdfs.fedbalance.procedure.scheduler.journal.uri is 
> hdfs://localhost:8020/tmp/procedure by default, however, the default value is 
> not used in DFSRouter.






[jira] [Resolved] (HDFS-16219) RBF: Set default map tasks and bandwidth in RouterFederationRename

2021-09-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16219.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR.

> RBF: Set default map tasks and bandwidth in RouterFederationRename
> --
>
> Key: HDFS-16219
> URL: https://issues.apache.org/jira/browse/HDFS-16219
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: Hadoop 3.3.0 with patches
>    Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.4.0
>
>
> If dfs.federation.router.federation.rename.map or 
> dfs.federation.router.federation.rename.bandwidth is not set, DFSRouter fails 
> to launch.
> This issue is similar to HDFS-16217.






[jira] [Reopened] (HDFS-16219) RBF: Set default map tasks and bandwidth in RouterFederationRename

2021-09-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HDFS-16219:
--

> RBF: Set default map tasks and bandwidth in RouterFederationRename
> --
>
> Key: HDFS-16219
> URL: https://issues.apache.org/jira/browse/HDFS-16219
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: Hadoop 3.3.0 with patches
>    Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.4.0
>
>
> If dfs.federation.router.federation.rename.map or 
> dfs.federation.router.federation.rename.bandwidth is not set, DFSRouter fails 
> to launch.
> This issue is similar to HDFS-16217.






[jira] [Resolved] (HDFS-16218) RBF: RouterFedbalance should load HDFS config

2021-09-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16218.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Thank you! Merged into trunk.

> RBF: RouterFedbalance should load HDFS config
> -
>
> Key: HDFS-16218
> URL: https://issues.apache.org/jira/browse/HDFS-16218
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: Hadoop 3.3.0 + patches, Kerberos authentication is 
> enabled
>    Reporter: Akira Ajisaka
>Assignee: Fengnan Li
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> RouterFedBalance fails to connect to DFSRouter when Kerberos is enabled 
> because "dfs.federation.router.kerberos.principal" in hdfs-site.xml is not 
> loaded.
> {quote}
> 21/09/08 17:21:38 ERROR rbfbalance.RouterFedBalance: Submit balance job 
> failed.
> java.io.IOException: DestHost:destPort 0.0.0.0:8111 , LocalHost:localPort 
> /:0. Failed on local exception: java.io.IOException: Couldn't set 
> up IO streams: java.lang.IllegalArgumentException: Failed to specify server's 
> Kerberos principal name
>   at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.getMountTableEntries(RouterAdminProtocolTranslatorPB.java:198)
>   at 
> org.apache.hadoop.hdfs.rbfbalance.MountTableProcedure.getMountEntry(MountTableProcedure.java:140)
>   at 
> org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.getSrcPath(RouterFedBalance.java:326)
>   at 
> org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.access$000(RouterFedBalance.java:68)
>   at 
> org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance$Builder.build(RouterFedBalance.java:168)
>   at 
> org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.submit(RouterFedBalance.java:302)
>   at 
> org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.run(RouterFedBalance.java:216)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.main(RouterFedBalance.java:376)
> {quote}
> When adding the property specifically by "-D" option, the command worked.
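The fix pattern here is for the tool to load hdfs-site.xml into its Configuration (in Hadoop, typically via new HdfsConfiguration() or Configuration.addDefaultResource("hdfs-site.xml")). A stdlib sketch of the layered-resource idea, with illustrative keys:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Stand-in for layered configuration loading: a tool that only reads
// core-site-style defaults misses keys defined in hdfs-site.xml, such as
// dfs.federation.router.kerberos.principal. Loading both layers fixes it.
public class LayeredConfDemo {

    // Loads base properties, then overlays the HDFS-specific resource;
    // later resources win, mirroring Hadoop's default-resource layering.
    static Properties load(String coreSite, String hdfsSite) {
        Properties conf = new Properties();
        try {
            conf.load(new StringReader(coreSite));
            conf.load(new StringReader(hdfsSite));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return conf;
    }

    public static void main(String[] args) {
        String core = "fs.defaultFS=hdfs://ns0\n";
        String hdfs = "dfs.federation.router.kerberos.principal=router/_HOST@EXAMPLE.COM\n";
        Properties conf = load(core, hdfs);
        // Without the hdfs-site layer this lookup returns null and Kerberos
        // negotiation fails with "Failed to specify server's Kerberos
        // principal name", as in the stack trace above.
        System.out.println(conf.getProperty(
            "dfs.federation.router.kerberos.principal"));
    }
}
```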






[jira] [Created] (HDFS-16219) RBF: Set default map tasks and bandwidth in RouterFederationRename

2021-09-09 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16219:


 Summary: RBF: Set default map tasks and bandwidth in 
RouterFederationRename
 Key: HDFS-16219
 URL: https://issues.apache.org/jira/browse/HDFS-16219
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
 Environment: Hadoop 3.3.0 with patches
Reporter: Akira Ajisaka


If dfs.federation.router.federation.rename.map or 
dfs.federation.router.federation.rename.bandwidth is not set, DFSRouter fails 
to launch.

This issue is similar to HDFS-16217.






[jira] [Created] (HDFS-16218) RBF: RouterFedbalance should load HDFS config

2021-09-09 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16218:


 Summary: RBF: RouterFedbalance should load HDFS config
 Key: HDFS-16218
 URL: https://issues.apache.org/jira/browse/HDFS-16218
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
 Environment: Hadoop 3.3.0 + patches, Kerberos authentication is enabled
Reporter: Akira Ajisaka


RouterFedBalance fails to connect to DFSRouter when Kerberos is enabled because 
"dfs.federation.router.kerberos.principal" in hdfs-site.xml is not loaded.

{quote}
21/09/08 17:21:38 ERROR rbfbalance.RouterFedBalance: Submit balance job failed.
java.io.IOException: DestHost:destPort 0.0.0.0:8111 , LocalHost:localPort 
/:0. Failed on local exception: java.io.IOException: Couldn't set up 
IO streams: java.lang.IllegalArgumentException: Failed to specify server's 
Kerberos principal name
at 
org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.getMountTableEntries(RouterAdminProtocolTranslatorPB.java:198)
at 
org.apache.hadoop.hdfs.rbfbalance.MountTableProcedure.getMountEntry(MountTableProcedure.java:140)
at 
org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.getSrcPath(RouterFedBalance.java:326)
at 
org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.access$000(RouterFedBalance.java:68)
at 
org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance$Builder.build(RouterFedBalance.java:168)
at 
org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.submit(RouterFedBalance.java:302)
at 
org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.run(RouterFedBalance.java:216)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at 
org.apache.hadoop.hdfs.rbfbalance.RouterFedBalance.main(RouterFedBalance.java:376)
{quote}

When adding the property specifically by "-D" option, the command worked.






[jira] [Created] (HDFS-16217) RBF: Set default value of hdfs.fedbalance.procedure.scheduler.journal.uri

2021-09-09 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16217:


 Summary: RBF: Set default value of 
hdfs.fedbalance.procedure.scheduler.journal.uri
 Key: HDFS-16217
 URL: https://issues.apache.org/jira/browse/HDFS-16217
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
 Environment: Hadoop 3.3.0 with patches
Reporter: Akira Ajisaka


When dfs.federation.router.federation.rename.option is set to DISTCP and 
hdfs.fedbalance.procedure.scheduler.journal.uri is not set, DFSRouter fails to 
launch.
{quote}
2021-09-08 15:39:11,818 ERROR 
org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
router
java.lang.NullPointerException
at java.base/java.net.URI$Parser.parse(URI.java:3104)
at java.base/java.net.URI.<init>(URI.java:600)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.initRouterFedRename(RouterRpcServer.java:444)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.<init>(RouterRpcServer.java:419)
at 
org.apache.hadoop.hdfs.server.federation.router.Router.createRpcServer(Router.java:391)
at 
org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:188)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
{quote}
hdfs.fedbalance.procedure.scheduler.journal.uri defaults to 
hdfs://localhost:8020/tmp/procedure; however, the default value is not used by 
DFSRouter.
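The failure mode can be reproduced outside of Hadoop: Configuration.get(key) returns null for an unset key, and passing that null into the URI constructor raises the NullPointerException seen above. A minimal sketch, using a plain Map as a stand-in for Hadoop's Configuration (an assumption for illustration only):

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class JournalUriDefault {
    static final String KEY = "hdfs.fedbalance.procedure.scheduler.journal.uri";
    // Default documented for the property, per the report above:
    static final String DEFAULT = "hdfs://localhost:8020/tmp/procedure";

    // Unsafe read: mimics Configuration.get(key), which returns null when the
    // key is unset; the null then reaches the URI parser and triggers the NPE.
    static URI journalUriUnsafe(Map<String, String> conf) {
        return URI.create(conf.get(KEY));
    }

    // Safe read: fall back to the documented default when the key is unset.
    static URI journalUri(Map<String, String> conf) {
        return URI.create(conf.getOrDefault(KEY, DEFAULT));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>(); // key intentionally unset
        try {
            journalUriUnsafe(conf);
        } catch (NullPointerException e) {
            System.out.println("unset key without default -> NullPointerException");
        }
        System.out.println("unset key with default -> " + journalUri(conf));
    }
}
```

Presumably the fix is along the lines of the safe read: consult the shipped default before constructing the URI.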






[jira] [Created] (HDFS-16185) Fix comment in LowRedundancyBlocks.java

2021-08-24 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16185:


 Summary: Fix comment in LowRedundancyBlocks.java
 Key: HDFS-16185
 URL: https://issues.apache.org/jira/browse/HDFS-16185
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira Ajisaka


[https://github.com/apache/hadoop/blob/c8e58648389c7b0b476c3d0d47be86af2966842f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LowRedundancyBlocks.java#L249]

"can only afford one replica loss" is not correct there. Before HDFS-9857, the 
comment was "there is less than a third as many blocks as requested; this is 
considered very under-replicated", which seems correct.






[jira] [Resolved] (HDFS-16171) De-flake testDecommissionStatus

2021-08-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16171.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> De-flake testDecommissionStatus
> ---
>
> Key: HDFS-16171
> URL: https://issues.apache.org/jira/browse/HDFS-16171
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> testDecommissionStatus keeps failing intermittently.
> {code:java}
> [ERROR] 
> testDecommissionStatus(org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor)
>   Time elapsed: 3.299 s  <<< FAILURE!
> java.lang.AssertionError: Unexpected num under-replicated blocks expected:<4> 
> but was:<3>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:169)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor.testDecommissionStatus(TestDecommissioningStatusWithBackoffMonitor.java:136)
> {code}






Re: Hadoop-3.2.3 Release Update

2021-08-15 Thread Akira Ajisaka
Thanks Brahma for cutting branch-3.2.3.

In Apache Bigtop, there are some patches applied to Hadoop 3.2.2.
https://github.com/apache/bigtop/tree/master/bigtop-packages/src/common/hadoop

Of these patches, how about backporting the following issues to
branch-3.2 and branch-3.2.3?
- HADOOP-14922
- HADOOP-15939
- HADOOP-17569

In addition, there are some patches that don't have a JIRA issue ID.
Maybe we need to create JIRAs and fix those.
- 
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch2-exclude-spotbugs-annotations.diff
- 
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch3-fix-broken-dir-detection.diff
- 
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch5-fix-kms-shellprofile.diff
- 
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch6-fix-httpfs-sh.diff
- 
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/patch7-remove-phantomjs-in-yarn-ui.diff

Thanks and regards,
Akira

On Wed, Aug 11, 2021 at 12:31 PM Xiaoqiao He  wrote:
>
> Thanks Brahma for initiating this and making hadoop-3.2.3 release happen.
>
> I would like to validate the HBase project (both the latest release and
> trunk branch).
> Chao Sun will validate the Spark Project (Got in touch with Chao already).
> once RC is out.
>
> Thanks and Regards,
> - He Xiaoqiao
>
>
> On Tue, Aug 10, 2021 at 5:54 PM Brahma Reddy Battula 
> wrote:
>
> > Hi All,
> >
> > I cut branch-3.2.3 and it is ready for release. Please commit to
> > branch-3.2.3 if any critical/blocker issues need to go.
> >
> > *This time I am thinking of gathering downstream projects' and companies'
> > voices; let's see how this can go.*
> >
> >- Planning to check with downstream projects like Spark, HBase, and Hive
> >to see if they can help with validation (or run their UTs on this branch)
> >- Collecting info from companies that have already deployed and are using
> >branch-3.2
> >
> >
> > so that we can make a more stable release on 3.2 (and so that the impact of
> > features released on this branch can be known). Please let me know of anybody
> > from these communities who can help with this.
> >
> >
> > Planning to create RC by this month end. Any suggestions are welcome.
> >
> >
> >
> > --Brahma Reddy Battula
> >

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-15878) RBF: Flaky test TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in Trunk

2021-08-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HDFS-15878:
--

This test still fails.

> RBF: Flaky test 
> TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in 
> Trunk
> 
>
> Key: HDFS-15878
> URL: https://issues.apache.org/jira/browse/HDFS-15878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, rbf
>Reporter: Renukaprasad C
>Assignee: Fengnan Li
>Priority: Major
>
> ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 
> 24.627 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testSyncable(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.222 s  <<< ERROR!
> java.io.FileNotFoundException: File /test/testSyncable not found.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:576)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$900(WebHdfsFileSystem.java:146)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:892)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:858)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:652)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:690)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:686)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.getRedirectedUrl(WebHdfsFileSystem.java:2307)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.<init>(WebHdfsFileSystem.java:2296)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream.<init>(WebHdfsFileSystem.java:2176)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1610)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:975)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.validateSyncableSemantics(AbstractContractCreateTest.java:556)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testSyncable(AbstractContractCreateTest.java:459)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.

[jira] [Resolved] (HDFS-16172) TestRouterWebHDFSContractCreate fails

2021-08-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16172.
--
Resolution: Duplicate

> TestRouterWebHDFSContractCreate fails
> -
>
> Key: HDFS-16172
> URL: https://issues.apache.org/jira/browse/HDFS-16172
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>    Reporter: Akira Ajisaka
>Priority: Major
>
> {quote}
> [INFO] Running 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 
> 18.539 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testSyncable(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.51 s  <<< ERROR!
> java.io.FileNotFoundException: File /test/testSyncable not found.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:576)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$900(WebHdfsFileSystem.java:146)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:892)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:858)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:652)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:690)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:686)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.getRedirectedUrl(WebHdfsFileSystem.java:2307)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.<init>(WebHdfsFileSystem.java:2296)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream.<init>(WebHdfsFileSystem.java:2176)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1610)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:975)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.validateSyncableSemantics(AbstractContractCreateTest.java:556)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testSyncable(AbstractContractCreateTest.java:459)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.hadoop.i

[jira] [Created] (HDFS-16172) TestRouterWebHDFSContractCreate fails

2021-08-12 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16172:


 Summary: TestRouterWebHDFSContractCreate fails
 Key: HDFS-16172
 URL: https://issues.apache.org/jira/browse/HDFS-16172
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Akira Ajisaka


{quote}
[INFO] Running 
org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
[ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 18.539 
s <<< FAILURE! - in 
org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
[ERROR] 
testSyncable(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
  Time elapsed: 0.51 s  <<< ERROR!
java.io.FileNotFoundException: File /test/testSyncable not found.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:576)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$900(WebHdfsFileSystem.java:146)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:892)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:858)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:652)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:690)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:686)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.getRedirectedUrl(WebHdfsFileSystem.java:2307)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.<init>(WebHdfsFileSystem.java:2296)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream.<init>(WebHdfsFileSystem.java:2176)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1610)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:975)
at 
org.apache.hadoop.fs.contract.AbstractContractCreateTest.validateSyncableSemantics(AbstractContractCreateTest.java:556)
at 
org.apache.hadoop.fs.contract.AbstractContractCreateTest.testSyncable(AbstractContractCreateTest.java:459)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: 
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
/test/testSyncable not found.
at 
org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:90)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:537)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$300(WebHdfsFileSystem.java:146)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:738)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractR

[jira] [Resolved] (HDFS-16151) Improve the parameter comments related to ProtobufRpcEngine2#Server()

2021-08-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16151.
--
Fix Version/s: 3.3.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~jianghuazhu] for your 
contribution.

> Improve the parameter comments related to ProtobufRpcEngine2#Server()
> -
>
> Key: HDFS-16151
> URL: https://issues.apache.org/jira/browse/HDFS-16151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Some parameter comments related to ProtobufRpcEngine2#Server() are missing, 
> as follows:
> /**
>  * Construct an RPC server.
>  *
>  * @param protocolClass the class of protocol
>  * @param protocolImpl the protocolImpl whose methods will be called
>  * @param conf the configuration to use
>  * @param bindAddress the address to bind on to listen for connection
>  * @param port the port to listen for connections on
>  * @param numHandlers the number of method handler threads to run
>  * @param verbose whether each call should be logged
>  * @param portRangeConfig A config parameter that can be used to restrict
>  * the range of ports used when port is 0 (an ephemeral port)
>  * @param alignmentContext provides server state info on client responses
>  */
> public Server(Class<?> protocolClass, Object protocolImpl,
> Configuration conf, String bindAddress, int port, int numHandlers,
> int numReaders, int queueSizePerHandler, boolean verbose,
> SecretManager<? extends TokenIdentifier> secretManager,
> String portRangeConfig, AlignmentContext alignmentContext)
> throws IOException {
>   super(protocolClass, protocolImpl, conf, bindAddress, port, numHandlers,
>   numReaders, queueSizePerHandler, verbose, secretManager,
>   portRangeConfig, alignmentContext);
> }
> The description of numReaders, queueSizePerHandler, and secretManager is 
> missing here.
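A sketch of the missing parameter comments, in the style of the existing ones (the wording below is assumed for illustration, not taken from any committed patch):

```java
/**
 * @param numReaders the number of reader threads to run
 * @param queueSizePerHandler the maximum size of the call queue per handler
 * @param secretManager the secret manager used to validate client tokens;
 *                      may be null when token authentication is not in use
 */
```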






Re: [DISCUSS] Hadoop 3.2.3 release

2021-08-02 Thread Akira Ajisaka
Hi Steven,

Marked YARN-8990 and YARN-8992 as release-blocker. In addition, I
opened a PR to backport YARN-8990:
https://github.com/apache/hadoop/pull/3254

Thanks,
Akira

On Thu, Jul 29, 2021 at 10:36 AM Steven Rand  wrote:
>
> I think it would be helpful if we could include YARN-8990 and YARN-8992 in 
> the 3.2.3 release. Both are important fixes which were included in 3.2.0, but 
> never made their way to branch-3.2, so were omitted from both 3.2.1 and 3.2.2.
>
> Best,
> Steve
>
> On Wed, Jul 28, 2021 at 5:14 AM Xiaoqiao He  wrote:
>>
>> cc @dev mail-list.
>>
>> On Wed, Jul 28, 2021 at 5:11 PM Xiaoqiao He  wrote:
>>
>> > Hi Brahma,
>> >
>> > I just created version 3.2.4, and changed all unresolved issues (target
>> > version/s: 3.2.3) to 3.2.4 after checking both of them are not blocker
>> > issues. Dashboard[1] is clean now.
>> >
>> > Regards,
>> > - He Xiaoqiao
>> >
>> > [1]
>> > https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12336167
>> >
>> > On Sun, Jul 25, 2021 at 7:45 PM Brahma Reddy Battula 
>> > wrote:
>> >
>> >> Hi Xiaoqiao,
>> >>
>> >> Thanks for creating the Dashboard, we need to change the filters and
>> >> target versions in the jira.
>> >>
>> >> On Sun, Jul 25, 2021 at 2:05 PM Xiaoqiao He  wrote:
>> >>
>> >>> Thanks Brahma for volunteering and driving this release plan. I just
>> >>> created a dashboard for 3.2.3 release[1].
>> >>> I would like to support for this release line if need. (cc Brahma)
>> >>>
>> >>> Thanks. Regards,
>> >>> - He Xiaoqiao
>> >>>
>> >>> [1]
>> >>>
>> >>> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12336167
>> >>>
>> >>>
>> >>> On Sat, Jul 24, 2021 at 1:16 AM Akira Ajisaka 
>> >>> wrote:
>> >>>
>> >>> > Hi Brahma,
>> >>> >
>> >>> > Thank you for volunteering!
>> >>> >
>> >>> > -Akira
>> >>> >
>> >>> > On Fri, Jul 23, 2021 at 5:57 PM Brahma Reddy Battula <
>> >>> bra...@apache.org>
>> >>> > wrote:
>> >>> > >
>> >>> > > Hi Akira,
>> >>> > >
>> >>> > > Thanks for bringing this..
>> >>> > >
>> >>> > > I want to drive this if nobody already plan to do this..
>> >>> > >
>> >>> > >
>> >>> > > On Thu, 22 Jul 2021 at 8:48 AM, Akira Ajisaka 
>> >>> > wrote:
>> >>> > >
>> >>> > > > Hi all,
>> >>> > > >
>> >>> > > > Hadoop 3.2.2 was released half a year ago, and now, we have
>> >>> > > > accumulated more than 230 commits [1]. Therefore I want to start
>> >>> the
>> >>> > > > release work for 3.2.3.
>> >>> > > >
>> >>> > > > There is one blocker for 3.2.3 [2].
>> >>> > > > - https://issues.apache.org/jira/browse/HDFS-12920
>> >>> > > >
>> >>> > > > Is there anyone who would volunteer to be the 3.2.3 release
>> >>> manager?
>> >>> > > > Are there any other blockers? If any, please file an issue, raise
>> >>> the
>> >>> > > > blocker, and add the target version.
>> >>> > > >
>> >>> > > > [1]
>> >>> > > >
>> >>> >
>> >>> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20fixVersion%20%3D%203.2.3
>> >>> > > > [2]
>> >>> > > >
>> >>> >
>> >>> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20resolution%20%3D%20Unresolved%20AND%20cf%5B12310320%5D%20%3D%203.2.3
>> >>> > > >
>> >>> > > > Regards,
>> >>> > > > Akira
>> >>> > > >
>> >>> > > >
>> >>> -
>> >>> > > > To unsubscribe, e-mail:
>> >>> mapreduce-dev-unsubscr...@hadoop.apache.org
>> >>> > > > For additional commands, e-mail:
>> >>> mapreduce-dev-h...@hadoop.apache.org
>> >>> > > >
>> >>> > > > --
>> >>> > >
>> >>> > >
>> >>> > >
>> >>> > > --Brahma Reddy Battula
>> >>> >
>> >>> > -
>> >>> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> >>> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >>> >
>> >>> >
>> >>>
>> >>
>> >>
>> >> --
>> >>
>> >>
>> >>
>> >> --Brahma Reddy Battula
>> >>
>> >

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16143) TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky

2021-07-25 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16143:


 Summary: 
TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky
 Key: HDFS-16143
 URL: https://issues.apache.org/jira/browse/HDFS-16143
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Akira Ajisaka


https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3229/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
{quote}
[ERROR] 
testStandbyTriggersLogRollsWhenTailInProgressEdits[0](org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer)
  Time elapsed: 6.862 s  <<< FAILURE!
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:87)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertTrue(Assert.java:53)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRollsWhenTailInProgressEdits(TestEditLogTailer.java:444)
{quote}






[jira] [Resolved] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with Hadoop 3.x

2021-07-25 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-12920.
--
Fix Version/s: 3.3.2
   3.2.3
   3.4.0
   Resolution: Fixed

Merged the revert PR into trunk, branch-3.3, and branch-3.2.

Thank you [~brahmareddy] for your comment. Agreed with you, so I lowered the 
priority.

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with Hadoop 3.x
> 
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>    Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> After HADOOP-15059 got resolved, I tried to deploy the 2.9.0 tarball with 3.0.0 
> RC1 and ran the job with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling upgrade story, so it should be marked as a blocker.
> A quick workaround is to add the values to hdfs-site.xml with all time units 
> removed. But the right way may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).






[jira] [Created] (HDFS-16142) TestObservernode#testMkdirsRaceWithObserverRead is flaky

2021-07-25 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16142:


 Summary: TestObservernode#testMkdirsRaceWithObserverRead is flaky
 Key: HDFS-16142
 URL: https://issues.apache.org/jira/browse/HDFS-16142
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Akira Ajisaka


https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3227/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
{quote}
[ERROR] Tests run: 21, Failures: 1, Errors: 4, Skipped: 0, Time elapsed: 
741.856 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode
[ERROR] 
testMkdirsRaceWithObserverRead(org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode)
  Time elapsed: 2.697 s  <<< FAILURE!
java.lang.AssertionError: Client #2 lastSeenStateId=-9223372036854775808 activStateId=37 null
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode.testMkdirsRaceWithObserverRead(TestObserverNode.java:557)
{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16140) TestBootstrapAliasmap fails by BindException

2021-07-24 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16140:


 Summary: TestBootstrapAliasmap fails by BindException
 Key: HDFS-16140
 URL: https://issues.apache.org/jira/browse/HDFS-16140
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Akira Ajisaka


TestBootstrapAliasmap fails if 50200 port is already in use.
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3227/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
{quote}
[ERROR] 
testAliasmapBootstrap(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap)
  Time elapsed: 0.472 s  <<< ERROR!
java.net.BindException: Problem binding to [0.0.0.0:50200] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:914)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:810)
at org.apache.hadoop.ipc.Server.bind(Server.java:642)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1301)
at org.apache.hadoop.ipc.Server.<init>(Server.java:3199)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1062)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:464)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine2.getServer(ProtobufRpcEngine2.java:371)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:853)
at 
org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer.start(InMemoryLevelDBAliasMapServer.java:98)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startAliasMapServerIfNecessary(NameNode.java:801)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1378)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1147)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1020)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:952)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:576)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:518)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap.setup(TestBootstrapAliasmap.java:56)
{quote}
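A common way to avoid flaky BindExceptions like the one above is to stop hard-coding port 50200 and let the OS hand out an ephemeral port. A minimal, language-neutral sketch in Python (the real fix would feed the chosen port, or port 0 directly, into the MiniDFSCluster configuration; the helper name below is made up for illustration):

```python
import socket

def pick_free_port():
    """Bind to port 0 so the kernel picks any currently unused port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = pick_free_port()
print(port)
```

Binding tests to port 0 (or to a port obtained this way) removes the "Address already in use" race when several test runs share one machine.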






Re: [DISCUSS] Hadoop 3.2.3 release

2021-07-23 Thread Akira Ajisaka
Hi Brahma,

Thank you for volunteering!

-Akira

On Fri, Jul 23, 2021 at 5:57 PM Brahma Reddy Battula  wrote:
>
> Hi Akira,
>
> Thanks for bringing this..
>
> I want to drive this if nobody already plan to do this..
>
>
> On Thu, 22 Jul 2021 at 8:48 AM, Akira Ajisaka  wrote:
>
> > Hi all,
> >
> > Hadoop 3.2.2 was released half a year ago, and now, we have
> > accumulated more than 230 commits [1]. Therefore I want to start the
> > release work for 3.2.3.
> >
> > There is one blocker for 3.2.3 [2].
> > - https://issues.apache.org/jira/browse/HDFS-12920
> >
> > Is there anyone who would volunteer to be the 3.2.3 release manager?
> > Are there any other blockers? If any, please file an issue, raise the
> > blocker, and add the target version.
> >
> > [1]
> > https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20fixVersion%20%3D%203.2.3
> > [2]
> > https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20resolution%20%3D%20Unresolved%20AND%20cf%5B12310320%5D%20%3D%203.2.3
> >
> > Regards,
> > Akira
> >
> > -
> > To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
> >
> > --
>
>
>
> --Brahma Reddy Battula




[DISCUSS] Hadoop 3.2.3 release

2021-07-21 Thread Akira Ajisaka
Hi all,

Hadoop 3.2.2 was released half a year ago, and now, we have
accumulated more than 230 commits [1]. Therefore I want to start the
release work for 3.2.3.

There is one blocker for 3.2.3 [2].
- https://issues.apache.org/jira/browse/HDFS-12920

Is there anyone who would volunteer to be the 3.2.3 release manager?
Are there any other blockers? If any, please file an issue, raise the
blocker, and add the target version.

[1] 
https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20fixVersion%20%3D%203.2.3
[2] 
https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20resolution%20%3D%20Unresolved%20AND%20cf%5B12310320%5D%20%3D%203.2.3

Regards,
Akira




Re: [DISCUSS] Change project style guidelines to allow line length 100

2021-07-21 Thread Akira Ajisaka
Based on the positive feedback, filed
https://issues.apache.org/jira/browse/HADOOP-17813 to update the
checkstyle rule.
I don't think it requires a vote thread.

Thanks and regards,
Akira

On Sun, Jun 13, 2021 at 1:14 AM Steve Loughran
 wrote:
>
> +1
>
> if you look closely the hadoop-azure module went to 100 lines a while back
> and all is good
>
> On Wed, 19 May 2021 at 22:13, Sean Busbey  wrote:
>
> > Hello!
> >
> > What do folks think about changing our line length guidelines to allow for
> > 100 character width?
> >
> > Currently, we tell folks to follow the sun style guide with some exception
> > unrelated to line length. That guide says width of 80 is the standard and
> > our current check style rules act as enforcement.
> >
> > Looking at the current trunk codebase our nightly build shows a total of
> > ~15k line length violations; it’s about 18% of identified checkstyle issues.
> >
> > The vast majority of those line length violations are <= 100 characters
> > long. 100 characters happens to be the length for the Google Java Style
> > Guide, another commonly adopted style guide for java projects, so I suspect
> > these longer lines leaking past the checkstyle precommit warning might be a
> > reflection of committers working across multiple java codebases.
> >
> > I don’t feel strongly about lines being longer, but I would like to move
> > towards more consistent style enforcement as a project. Updating our
> > project guidance to allow for 100 character lines would reduce the
> > likelihood that folks bringing in new contributions need a precommit test
> > cycle to get the formatting correct.
> >
> > Does anyone feel strongly about keeping the line length limit at 80
> > characters?
> >
> > Does anyone feel strongly about contributions coming in that clear up line
> > length violations?
> >
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >
> >
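For reference, the rule change filed in HADOOP-17813 presumably boils down to a small Checkstyle configuration tweak along these lines (module and property names follow Checkstyle's LineLength check; the values actually committed may differ):

```xml
<module name="LineLength">
  <!-- was 80 under the Sun style guide; 100 matches the Google Java Style Guide -->
  <property name="max" value="100"/>
</module>
```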




Re: [DISCUSS] Tips for improving productivity, workflow in the Hadoop project?

2021-07-13 Thread Akira Ajisaka
Thank you Wei-Chiu for starting the discussion,

> 3. JIRA security
I'm +1 to use private JIRA issues to handle vulnerabilities.

> 5. Doc update
+1, I build the document daily and it helps me fix documents:
https://aajisaka.github.io/hadoop-document/ It would be great if the latest
document were built and published by the Apache Hadoop community.

My idea related to GitHub PR:
1. Disable the precommit jobs for JIRA, always use GitHub PR. It saves
costs to configure and debug the precommit jobs.
https://issues.apache.org/jira/browse/HADOOP-17798
2. Improve the pull request template for the contributors
https://issues.apache.org/jira/browse/HADOOP-17799

Regards,
Akira

On Tue, Jul 13, 2021 at 12:35 PM Wei-Chiu Chuang  wrote:
>
> I work on multiple projects and learned a bunch from those projects. There
> are nice add-ons that help with productivity. There are things we can do to
> help us manage the project better.
>
> 1. Add new issue types.
> We can add an "Epic" jira type to organize a set of related jiras. This could
> be easier to manage than using a regular JIRA and calling it an "umbrella".
>
> 2. GitHub Actions
> I am seeing more projects moving to GitHub Actions for precommits. We don't
> necessarily need to migrate off Jenkins, but there are nice add-ons that
> can perform static analysis, catching potential issues. For example, Ozone
> adds SonarQube to post-commit, and exports the report to SonarCloud. Other
> add-ons are available to scan for docker images, vulnerabilities scans.
>
> 3. JIRA security
> It is possible to set up security level (public/private) in JIRA. This can
> be used to track vulnerability issues and be made only visible to
> committers. Example: INFRA-15258
> 
>
> 4. New JIRA fields
> It's possible to add new fields. For example, we can add a "Reviewer"
> field, which could help improve the attention to issues.
>
> 5. Doc update
> It is possible to set up automation such that the doc on the Hadoop website
> is refreshed for every commit, providing the latest doc to the public.
>
> 6. Webhook
> It's possible to set up webhook such that every commit in GitHub sends a
> notification to the ASF slack. It can be used for other kinds of
> automation. Sky's the limit.
>
> Thoughts? What else can we do?




[jira] [Resolved] (HDFS-16109) Fix some flaky unit tests since they often time out

2021-07-04 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16109.
--
Fix Version/s: 3.3.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~tomscut] for your contribution.

> Fix some flaky unit tests since they often time out
> --
>
> Key: HDFS-16109
> URL: https://issues.apache.org/jira/browse/HDFS-16109
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Increase timeout for TestBootstrapStandby, TestFsVolumeList and 
> TestDecommissionWithBackoffMonitor since they often time out.
>  
> TestBootstrapStandby:
> {code:java}
> [ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 159.474 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby[ERROR] Tests 
> run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 159.474 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby[ERROR] 
> testRateThrottling(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby)
>   Time elapsed: 31.262 s  <<< 
> ERROR!org.junit.runners.model.TestTimedOutException: test timed out after 
> 30000 milliseconds at java.io.RandomAccessFile.writeBytes(Native Method) at 
> java.io.RandomAccessFile.write(RandomAccessFile.java:512) at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:947)
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:910)
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:699)
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:642)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:387)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:243)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1224)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:795)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:673)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:760) 
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014) 
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989) 
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2261)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2231)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby.testRateThrottling(TestBootstrapStandby.java:297)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> TestFsVolumeList:
> {code:java}
> [ERROR] Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 190.294 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList[ERROR] 
> Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 190.294 s 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList[ERROR] 
> testAddRplicaProcessorF

[jira] [Resolved] (HDFS-15653) dfshealth.html#tab-overview is not working

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-15653.
--
Resolution: Not A Problem

> dfshealth.html#tab-overview is not working
> --
>
> Key: HDFS-15653
> URL: https://issues.apache.org/jira/browse/HDFS-15653
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.3.0
> Environment: CentOS 7.4
> HDFS 3.3.0
>Reporter: Yakir Gibraltar
>Priority: Major
>  Labels: web-console, web-dashboard
> Attachments: image-2020-10-26-20-10-29-419.png, 
> image-2020-10-26-20-10-35-947.png
>
>
> Hi, in version 3.3.0, the URL of 
> http://:/dfshealth.html#tab-overview is broken.
>  The error in "Developer tools":
> {code:java}
> dfs-dust.js:121 Uncaught TypeError: $.get(...).error is not a function
> at Object.<anonymous> (dfs-dust.js:121)
> at Function.each (jquery-3.4.1.min.js:2)
> at load_json (dfs-dust.js:111)
> at load_overview (dfshealth.js:99)
> at load_page (dfshealth.js:452)
> at dfshealth.js:459
> at dfshealth.js:464
> (anonymous) @ dfs-dust.js:121
> each @ jquery-3.4.1.min.js:2
> load_json @ dfs-dust.js:111
> load_overview @ dfshealth.js:99
> load_page @ dfshealth.js:452
> (anonymous) @ dfshealth.js:459
> (anonymous) @ dfshealth.js:464
> {code}
> !image-2020-10-26-20-10-35-947.png!
>  
> Thank you, Yakir Gibraltar.






[jira] [Reopened] (HDFS-15653) dfshealth.html#tab-overview is not working

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HDFS-15653:
--

> dfshealth.html#tab-overview is not working
> --
>
> Key: HDFS-15653
> URL: https://issues.apache.org/jira/browse/HDFS-15653
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.3.0
> Environment: CentOS 7.4
> HDFS 3.3.0
>Reporter: Yakir Gibraltar
>Priority: Major
>  Labels: web-console, web-dashboard
> Attachments: image-2020-10-26-20-10-29-419.png, 
> image-2020-10-26-20-10-35-947.png
>
>
> Hi, in version 3.3.0, the URL of 
> http://:/dfshealth.html#tab-overview is broken.
>  The error in "Developer tools":
> {code:java}
> dfs-dust.js:121 Uncaught TypeError: $.get(...).error is not a function
> at Object.<anonymous> (dfs-dust.js:121)
> at Function.each (jquery-3.4.1.min.js:2)
> at load_json (dfs-dust.js:111)
> at load_overview (dfshealth.js:99)
> at load_page (dfshealth.js:452)
> at dfshealth.js:459
> at dfshealth.js:464
> (anonymous) @ dfs-dust.js:121
> each @ jquery-3.4.1.min.js:2
> load_json @ dfs-dust.js:111
> load_overview @ dfshealth.js:99
> load_page @ dfshealth.js:452
> (anonymous) @ dfshealth.js:459
> (anonymous) @ dfshealth.js:464
> {code}
> !image-2020-10-26-20-10-35-947.png!
>  
> Thank you, Yakir Gibraltar.






[jira] [Reopened] (HDFS-15272) Backport HDFS-12862 to branch-3.1

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HDFS-15272:
--

> Backport HDFS-12862 to branch-3.1
> -
>
> Key: HDFS-15272
> URL: https://issues.apache.org/jira/browse/HDFS-15272
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.4
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15272.branch-3.1.001.patch
>
>
> Backport HDFS-12862 CacheDirective becomes invalid when NN restart or 
> failover to branch-3.1.4






[jira] [Resolved] (HDFS-15272) Backport HDFS-12862 to branch-3.1

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-15272.
--
Resolution: Won't Fix

branch-3.1 is EoL. Closing as won't fix.

> Backport HDFS-12862 to branch-3.1
> -
>
> Key: HDFS-15272
> URL: https://issues.apache.org/jira/browse/HDFS-15272
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.4
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15272.branch-3.1.001.patch
>
>
> Backport HDFS-12862 CacheDirective becomes invalid when NN restart or 
> failover to branch-3.1.4






Re: [VOTE] Hadoop 3.1.x EOL

2021-06-10 Thread Akira Ajisaka
This vote has passed with 18 binding +1. I'll update the JIRA and the wiki.

Thanks all for your participation.

On Tue, Jun 8, 2021 at 3:03 AM Steve Loughran  wrote:
>
>
>
> On Thu, 3 Jun 2021 at 07:14, Akira Ajisaka  wrote:
>>
>> Dear Hadoop developers,
>>
>> Given the feedback from the discussion thread [1], I'd like to start
>> an official vote
>> thread for the community to vote and start the 3.1 EOL process.
>>
>> What this entails:
>>
>> (1) an official announcement that no further regular Hadoop 3.1.x releases
>> will be made after 3.1.4.
>> (2) resolve JIRAs that specifically target 3.1.5 as won't fix.
>>
>> This vote will run for 7 days and conclude by June 10th, 16:00 JST [2].
>>
>> Committers are eligible to cast binding votes. Non-committers are welcomed
>> to cast non-binding votes.
>>
>> Here is my vote, +1
>
>
>
> +1 (binding)
>>
>>




[jira] [Created] (HDFS-16059) dfsadmin -listOpenFiles -blockingDecommission can miss some files

2021-06-09 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16059:


 Summary: dfsadmin -listOpenFiles -blockingDecommission can miss 
some files
 Key: HDFS-16059
 URL: https://issues.apache.org/jira/browse/HDFS-16059
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsadmin
Reporter: Akira Ajisaka


While reviewing HDFS-13671, I found "dfsadmin -listOpenFiles 
-blockingDecommission" can drop some files.

[https://github.com/apache/hadoop/pull/3065#discussion_r647396463]
{quote}If the DataNodes have the following open files and we want to list all 
the open files:

DN1: [1001, 1002, 1003, ... , 2000]
 DN2: [1, 2, 3, ... , 1000]

At first getFilesBlockingDecom(0, "/") is called and it returns [1001, 1002, 
... , 2000] because it reached max size (=1000), and next 
getFilesBlockingDecom(2000, "/") is called because the last inode Id of the 
previous result is 2000. That way the open files of DN2 are missed
{quote}
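The missed-files scenario quoted above can be reproduced with a toy model of the RPC. The sketch below is a hypothetical re-implementation for illustration only (the function name and batch size mirror the report, not the actual NameNode code):

```python
# Two DataNodes' open-file inode ids, as in the report.
dn1 = list(range(1001, 2001))   # DN1: [1001 .. 2000]
dn2 = list(range(1, 1001))      # DN2: [1 .. 1000]
BATCH = 1000                    # server-side max batch size

def get_files_blocking_decom(prev_id):
    """Walk DataNodes in order, returning at most BATCH open files whose
    inode id is greater than prev_id."""
    batch = []
    for dn in (dn1, dn2):
        for inode_id in dn:
            if inode_id > prev_id:
                batch.append(inode_id)
                if len(batch) == BATCH:
                    return batch
    return batch

# Client-side pagination: the cursor is the last inode id of the previous batch.
seen, prev = [], 0
while True:
    batch = get_files_blocking_decom(prev)
    if not batch:
        break
    seen.extend(batch)
    prev = batch[-1]

missed = (set(dn1) | set(dn2)) - set(seen)
print(len(missed))  # 1000 -- every open file on DN2 is skipped
```

The first call fills the batch entirely from DN1, so the cursor jumps to 2000 and all of DN2's smaller inode ids fall behind it; sorting ids globally or paginating per DataNode would avoid the gap.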






[VOTE] Hadoop 3.1.x EOL

2021-06-03 Thread Akira Ajisaka
Dear Hadoop developers,

Given the feedback from the discussion thread [1], I'd like to start
an official vote
thread for the community to vote and start the 3.1 EOL process.

What this entails:

(1) an official announcement that no further regular Hadoop 3.1.x releases
will be made after 3.1.4.
(2) resolve JIRAs that specifically target 3.1.5 as won't fix.

This vote will run for 7 days and conclude by June 10th, 16:00 JST [2].

Committers are eligible to cast binding votes. Non-committers are welcomed
to cast non-binding votes.

Here is my vote, +1

[1] https://s.apache.org/w9ilb
[2] 
https://www.timeanddate.com/worldclock/fixedtime.html?msg=4=20210610T16=248

Regards,
Akira




Re: [DISCUSS] which release lines should we still consider actively maintained?

2021-06-03 Thread Akira Ajisaka
Thank you for your comments. I'll create a vote thread to mark 3.1.x EOL.

-Akira

On Tue, May 25, 2021 at 12:46 AM Ayush Saxena  wrote:
>
> +1, to mark 3.1.x EOL.
> Apache Hive does depend on 3.1.0 as of now, but due to the guava upgrade on 
> branch-3.1, the attempt to migrate to the latest 3.1.x didn't work for me, at least 
> a couple of months back. So, mostly 3.3.1 would be the only option replacing 
> 3.1.0 there or at worst 3.3.2 in a couple of months.
>
>
> -Ayush
>
> > On 24-May-2021, at 8:43 PM, Arpit Agarwal  
> > wrote:
> >
> > +1 to EOL 3.1.x at least.
> >
> >
> >> On May 23, 2021, at 9:51 PM, Wei-Chiu Chuang 
> >>  wrote:
> >>
> >> Sean,
> >>
> >> For reasons I don't understand, I never received emails from your new
> >> address in the mailing list. Only Akira's response.
> >>
> >> I was just able to start a thread like this.
> >>
> >> I am +1 to EOL 3.1.5.
> >> Reason? Spark is already on Hadoop 3.2. Hive and Tez are actively working
> >> to support Hadoop 3.3. HBase supports Hadoop 3.3 already. They are the most
> >> common Hadoop applications, so I think 3.1 isn't necessarily that
> >> important.
> >>
> >> With Hadoop 3.3.1, we have a number of improvements to support a better
> >> HDFS upgrade experience, so upgrading from Hadoop 3.1 should be relatively
> >> easy. Application upgrade takes some effort though (commons-lang ->
> >> commons-lang3 migration for example)
> >> I've been maintaining the HDFS code in branch-3.1, so from a
> >> HDFS perspective the branch is always in a ready to release state.
> >>
> >> The Hadoop 3.1 line is more than 3 years old. Maintaining this branch is
> >> getting trickier. I am +100 to reduce the number of actively maintained
> >> release lines. IMO, 2 Hadoop 3 lines + 1 Hadoop 2 line is a good idea.
> >>
> >>
> >>
> >> For Hadoop 3.3 line: If no one beats me, I plan to make a 3.3.2 in 2-3
> >> months. And another one in another 2-3 months.
> >> The Hadoop 3.3.1 has nearly 700 commits not in 3.3.0. It is very difficult
> >> to make/validate a maint release with such a big divergence in the code.
> >>
> >>
> >>> On Mon, May 24, 2021 at 12:06 PM Akira Ajisaka  wrote:
> >>>
> >>> Hi Sean,
> >>>
> >>> Thank you for starting the discussion.
> >>>
> >>> I think branch-2.10, branch-3.1, branch-3.2, branch-3.3, and trunk
> >>> (3.4.x) are actively maintained.
> >>>
> >>> The next releases will be:
> >>> - 3.4.0
> >>> - 3.3.1 (Thanks, Wei-Chiu!)
> >>> - 3.2.3
> >>> - 3.1.5
> >>> - 2.10.2
> >>>
> >>>> Are there folks willing to go through being release managers to get more
> >>> of these release lines on a steady cadence?
> >>>
> >>> Now I'm interested in becoming a release manager of 3.1.5.
> >>>
> >>>> If I were to take up maintenance release for one of them which should it
> >>> be?
> >>>
> >>> 3.2.3 or 2.10.2 seems to be a good choice.
> >>>
> >>>> Should we declare to our downstream users that some of these lines
> >>> aren’t going to get more releases?
> >>>
> >>> Now I think we don't need to declare that. I believe 3.3.1, 3.2.3,
> >>> 3.1.5, and 2.10.2 will be released in the near future.
> >>> There are some earlier discussions of 3.1.x EoL, so 3.1.5 may be a
> >>> final release of the 3.1.x release line.
> >>>
> >>>> Is there downstream facing documentation somewhere that I missed for
> >>> setting expectations about our release cadence and actively maintained
> >>> branches?
> >>>
> >>> As you commented, the confluence wiki pages for Hadoop releases were
> >>> out of date. Updated [1].
> >>>
> >>>> Do we have a backlog of work written up that could make the release
> >>> process easier for our release managers?
> >>>
> >>> The release process is documented and maintained:
> >>> https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease
> >>> Also, there are some backlogs [1], [2].
> >>>
> >>> [1]:
> >>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Active+Release+Lines
> >>> [2]: h

[jira] [Created] (HDFS-16050) Some dynamometer tests fail

2021-05-31 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16050:


 Summary: Some dynamometer tests fail
 Key: HDFS-16050
 URL: https://issues.apache.org/jira/browse/HDFS-16050
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


The following tests failed:
{quote}
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator
   hadoop.tools.dynamometer.TestDynamometerInfra
   hadoop.tools.dynamometer.blockgenerator.TestBlockGen
   hadoop.tools.dynamometer.TestDynamometerInfra
   hadoop.tools.dynamometer.blockgenerator.TestBlockGen
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator
{quote}
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/523/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer.txt
{quote}
[ERROR] 
testAuditWorkloadDirectParserWithOutput(org.apache.hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator)
  Time elapsed: 1.353 s  <<< ERROR!
java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2618)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2632)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1498)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:977)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:576)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:518)
{quote}
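A NoClassDefFoundError for org/mockito/stubbing/Answer at test time usually means Mockito is absent from the failing module's test classpath. A plausible fix, sketched here on the assumption that the version is managed by the Hadoop parent POM, is to declare the dependency in the dynamometer module's pom.xml:

```xml
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-core</artifactId>
  <scope>test</scope>
</dependency>
```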






[jira] [Resolved] (HDFS-16046) TestBalanceProcedureScheduler and TestDistCpProcedure timeout

2021-05-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16046.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> TestBalanceProcedureScheduler and TestDistCpProcedure timeout
> -
>
> Key: HDFS-16046
> URL: https://issues.apache.org/jira/browse/HDFS-16046
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf, test
>    Reporter: Akira Ajisaka
>    Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: image-2021-05-28-11-41-16-733.png, screenshot-1.png, 
> screenshot-2.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following two tests timed out frequently in the qbt job.
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/520/testReport/org.apache.hadoop.tools.fedbalance.procedure/TestBalanceProcedureScheduler/testSchedulerDownAndRecoverJob/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 60000 milliseconds
>  at java.lang.Object.wait(Native Method)
>  at java.lang.Object.wait(Object.java:502)
>  at 
> org.apache.hadoop.tools.fedbalance.procedure.BalanceJob.waitJobDone(BalanceJob.java:220)
>  at 
> org.apache.hadoop.tools.fedbalance.procedure.BalanceProcedureScheduler.waitUntilDone(BalanceProcedureScheduler.java:189)
>  at 
> org.apache.hadoop.tools.fedbalance.procedure.TestBalanceProcedureScheduler.testSchedulerDownAndRecoverJob(TestBalanceProcedureScheduler.java:331)
> {quote}
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/520/testReport/org.apache.hadoop.tools.fedbalance/TestDistCpProcedure/testSuccessfulDistCpProcedure/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 30000 milliseconds
>  at java.lang.Object.wait(Native Method)
>  at java.lang.Object.wait(Object.java:502)
>  at 
> org.apache.hadoop.tools.fedbalance.procedure.BalanceJob.waitJobDone(BalanceJob.java:220)
>  at 
> org.apache.hadoop.tools.fedbalance.procedure.BalanceProcedureScheduler.waitUntilDone(BalanceProcedureScheduler.java:189)
>  at 
> org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.testSuccessfulDistCpProcedure(TestDistCpProcedure.java:121)
> {quote}






Re: [VOTE] Release Apache Hadoop Thirdparty 1.1.1 RC0

2021-05-27 Thread Akira Ajisaka
+1

- Verified checksums and signatures
- Built from source with -Psrc profile
- Checked the documents
- Compiled Hadoop trunk and branch-3.3 with Hadoop third-party 1.1.1.

-Akira

On Wed, May 26, 2021 at 5:29 PM Wei-Chiu Chuang  wrote:
>
> Hi folks,
>
> I have put together a release candidate (RC0) for Hadoop Thirdparty
> 1.1.1 which will be consumed by Hadoop 3.3.1 RC2.
>
>
> The RC is available at:
> https://people.apache.org/~weichiu/hadoop-thirdparty-1.1.1-RC0/
>
>
> The RC tag in github is here:
> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.1.1-RC0
>
> The maven artifacts are staged at
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1316/
>
>
> Compared to 1.1.0, there are two additional fixes:
>
> HADOOP-17707. Remove jaeger document from site index.
> 
>
> HADOOP-17730. Add back error_prone
> 
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
>
> Please try the release and vote. The vote will run for 5 days.
>
> Thanks
> Weichiu




[jira] [Created] (HDFS-16046) TestBalanceProcedureScheduler timeouts

2021-05-27 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16046:


 Summary: TestBalanceProcedureScheduler timeouts
 Key: HDFS-16046
 URL: https://issues.apache.org/jira/browse/HDFS-16046
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf, test
Reporter: Akira Ajisaka


https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/520/testReport/org.apache.hadoop.tools.fedbalance.procedure/TestBalanceProcedureScheduler/testSchedulerDownAndRecoverJob/
{quote}
org.junit.runners.model.TestTimedOutException: test timed out after 60000 
milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.hadoop.tools.fedbalance.procedure.BalanceJob.waitJobDone(BalanceJob.java:220)
at 
org.apache.hadoop.tools.fedbalance.procedure.BalanceProcedureScheduler.waitUntilDone(BalanceProcedureScheduler.java:189)
at 
org.apache.hadoop.tools.fedbalance.procedure.TestBalanceProcedureScheduler.testSchedulerDownAndRecoverJob(TestBalanceProcedureScheduler.java:331)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{quote}






[jira] [Resolved] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap

2021-05-25 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16031.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk.

> Possible Resource Leak in 
> org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
> -
>
> Key: HDFS-16031
> URL: https://issues.apache.org/jira/browse/HDFS-16031
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Narges Shadab
>Assignee: Narges Shadab
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We noticed a possible resource leak in 
> [getCompressedAliasMap|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java#L320].
>  If {{finish()}} at line 334 throws an IOException, then {{tOut, gzOut}} and 
> {{bOut}} remain open since the exception isn't caught locally, and there is 
> no way for any caller to close them.
> I've submitted a pull request to fix it.
>  
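The usual fix for this kind of leak in Java is try-with-resources, which closes every opened stream even when a later call such as {{finish()}} throws. The same idea can be sketched in Python with `ExitStack` (the stream names echo the issue description; this is an illustrative analogue, not the actual Hadoop patch):

```python
import gzip
import io
from contextlib import ExitStack

def compress(data: bytes) -> bytes:
    """Gzip-compress data, guaranteeing nested streams are closed on error."""
    with ExitStack() as stack:
        # ExitStack closes b_out and gz_out (in reverse order) even if a
        # later call raises -- the role try-with-resources plays in Java.
        b_out = stack.enter_context(io.BytesIO())
        gz_out = stack.enter_context(gzip.GzipFile(fileobj=b_out, mode="wb"))
        gz_out.write(data)   # may raise; streams still get closed
        gz_out.close()       # flush the gzip trailer before reading the buffer
        return b_out.getvalue()

assert gzip.decompress(compress(b"hello")) == b"hello"
```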






Re: [DISCUSS] which release lines should we still consider actively maintained?

2021-05-23 Thread Akira Ajisaka
Hi Sean,

Thank you for starting the discussion.

I think branch-2.10, branch-3.1, branch-3.2, branch-3.3, and trunk
(3.4.x) are actively maintained.

The next releases will be:
- 3.4.0
- 3.3.1 (Thanks, Wei-Chiu!)
- 3.2.3
- 3.1.5
- 2.10.2

> Are there folks willing to go through being release managers to get more of 
> these release lines on a steady cadence?

Now I'm interested in becoming a release manager of 3.1.5.

> If I were to take up maintenance release for one of them which should it be?

3.2.3 or 2.10.2 seems to be a good choice.

> Should we declare to our downstream users that some of these lines aren’t 
> going to get more releases?

I don't think we need to declare that now. I believe 3.3.1, 3.2.3,
3.1.5, and 2.10.2 will be released in the near future.
There have been some earlier discussions of 3.1.x EoL, so 3.1.5 may be
the final release of the 3.1.x line.

> Is there downstream facing documentation somewhere that I missed for setting 
> expectations about our release cadence and actively maintained branches?

As you commented, the confluence wiki pages for Hadoop releases were
out of date. I have updated them [1].

> Do we have a backlog of work written up that could make the release process 
> easier for our release managers?

The release process is documented and maintained:
https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease
Also, there are some backlogs [1], [2].

[1]: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Active+Release+Lines
[2]: https://cwiki.apache.org/confluence/display/HADOOP/Roadmap

Thanks,
Akira

On Fri, May 21, 2021 at 7:12 AM Sean Busbey  wrote:
>
>
> Hi folks!
>
> Which release lines do we as a community still consider actively maintained?
>
> I found an earlier discussion[1] where we had consensus to consider branches 
> that don’t get maintenance releases on a regular basis end-of-life for 
> practical purposes. The result of that discussion was written up in our wiki 
> docs in the “EOL Release Branches” page, summarized here
>
> >  If no volunteer to do a maintenance release in a short to mid-term (like 3 
> > months to 1 or 1.5 year).
>
> Looking at release lines that are still on our download page[3]:
>
> * Hadoop 2.10.z - last release 8 months ago
> * Hadoop 3.1.z - last release 9.5 months ago
> * Hadoop 3.2.z - last release 4.5 months ago
> * Hadoop 3.3.z - last release 10 months ago
>
> And then trunk holds 3.4 which hasn’t had a release since the branch-3.3 fork 
> ~14 months ago.
>
> I can see that Wei-Chiu has been actively working on getting the 3.3.1 
> release out[4] (thanks Wei-Chiu!) but I do not see anything similar for the 
> other release lines.
>
> We also have pages on the wiki for our project roadmap of release[5], but it 
> seems out of date since it lists in progress releases that have happened or 
> branches we have announced as end of life, i.e. 2.8.
>
> We also have a group of pages (sorry, I’m not sure what the confluence jargon 
> is for this) for “hadoop active release lines”[6] but this list has 2.8, 2.9, 
> 3.0, 3.1, and 3.3. So several declared end of life lines and no 2.10 or 3.2 
> despite those being our release lines with the most recent releases.
>
> Are there folks willing to go through being release managers to get more of 
> these release lines on a steady cadence?
>
> If I were to take up maintenance release for one of them which should it be?
>
> Should we declare to our downstream users that some of these lines aren’t 
> going to get more releases?
>
> Is there downstream facing documentation somewhere that I missed for setting 
> expectations about our release cadence and actively maintained branches?
>
> Do we have a backlog of work written up that could make the release process 
> easier for our release managers?
>
>
> [1]: https://s.apache.org/7c8jt
> [2]: https://s.apache.org/4no96
> [3]: https://hadoop.apache.org/releases.html
> [4]: https://s.apache.org/1bvwe
> [5]: https://cwiki.apache.org/confluence/display/HADOOP/Roadmap
> [6]: 
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Active+Release+Lines
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>




Re: [DISCUSS] Change project style guidelines to allow line length 100

2021-05-19 Thread Akira Ajisaka
I'm +1 to allow <= 100 chars.

FYI: There were some discussions long before:
- 
https://lists.apache.org/thread.html/7813c2f8a49b1d1e7655dad180f2d915a280b2f4d562cfe981e1dd4e%401406489966%40%3Ccommon-dev.hadoop.apache.org%3E
- 
https://lists.apache.org/thread.html/3e1785cbbe14dcab9bb970fa0f534811cfe00795a8cd1100580f27dc%401430849118%40%3Ccommon-dev.hadoop.apache.org%3E

Thanks,
Akira

On Thu, May 20, 2021 at 6:36 AM Sean Busbey  wrote:
>
> Hello!
>
> What do folks think about changing our line length guidelines to allow for 
> 100 character width?
>
> Currently, we tell folks to follow the Sun style guide with some exceptions 
> unrelated to line length. That guide says a width of 80 is the standard, and our 
> current checkstyle rules act as enforcement.
>
> Looking at the current trunk codebase our nightly build shows a total of ~15k 
> line length violations; it’s about 18% of identified checkstyle issues.
>
> The vast majority of those line length violations are <= 100 characters long. 
> 100 characters happens to be the length for the Google Java Style Guide, 
> another commonly adopted style guide for java projects, so I suspect these 
> longer lines leaking past the checkstyle precommit warning might be a 
> reflection of committers working across multiple java codebases.
>
> I don’t feel strongly about lines being longer, but I would like to move 
> towards more consistent style enforcement as a project. Updating our project 
> guidance to allow for 100 character lines would reduce the likelihood that 
> folks bringing in new contributions need a precommit test cycle to get the 
> formatting correct.
>
> Does anyone feel strongly about keeping the line length limit at 80 
> characters?
>
> Does anyone feel strongly about contributions coming in that clear up line 
> length violations?
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
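For illustration, the kind of check the line-length rule performs can be sketched in a few lines (a toy sketch of the idea, not the actual checkstyle LineLength module):

```python
def over_length(lines, limit=100):
    """Return the 1-based numbers of lines whose length exceeds the limit."""
    return [i for i, line in enumerate(lines, 1)
            if len(line.rstrip("\n")) > limit]

# An 81-character line passes under a 100-char limit but fails under 80,
# which is the gap the proposal above is about.
sample = ["short line\n", "x" * 120 + "\n", "y" * 81 + "\n"]
print(over_length(sample, limit=100))  # -> [2]
print(over_length(sample, limit=80))   # -> [2, 3]
```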




Re: [VOTE] hadoop-thirdparty 1.1.0-RC0

2021-05-13 Thread Akira Ajisaka
+1

- Verified checksums and signatures
- Built from source with -Psrc profile
- Checked the documents

-Akira

On Thu, May 13, 2021 at 8:55 PM Wei-Chiu Chuang  wrote:
>
> Hello my fellow Hadoop developers,
>
> I am putting together the first release candidate (RC0) for
> Hadoop-thirdparty 1.1.0. This is going to be consumed by the upcoming
> Hadoop 3.3.1 release.
>
> The RC is available at:
> https://people.apache.org/~weichiu/hadoop-thirdparty-1.1.0-RC0/
> The RC tag in github is here:
> https://github.com/apache/hadoop-thirdparty/tree/release-1.1.0-RC0
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1309/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS or
> https://people.apache.org/keys/committer/weichiu.asc
>
>
> Please try the release and vote. The vote will run for 5 days until
> 2021/05/19 at 00:00 CST.
>
> Note: Our post commit automation builds the code, and pushes the SNAPSHOT
> artifacts to central Maven, which is consumed by Hadoop trunk and
> branch-3.3, so it is a good validation that things are working properly in
> hadoop-thirdparty.
>
> Thanks,
> Wei-Chiu




[jira] [Created] (HDFS-16006) TestRouterFederationRename is flaky

2021-05-03 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16006:


 Summary: TestRouterFederationRename is flaky
 Key: HDFS-16006
 URL: https://issues.apache.org/jira/browse/HDFS-16006
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Akira Ajisaka


{quote}
[ERROR] Errors: 
[ERROR]   
TestRouterFederationRename.testCounter:440->Object.wait:502->Object.wait:-2 ? 
TestTimedOut
[ERROR]   TestRouterFederationRename.testSetup:145 ? Remote The directory /src 
cannot be...
[ERROR]   TestRouterFederationRename.testSetup:145 ? Remote The directory /src 
cannot be...
{quote}
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2970/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt





