Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2021-07-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/204/

[Jul 21, 2021 7:31:44 AM] (821684824) YARN-10860. Make max container per 
heartbeat configs refreshable. Contributed by Eric Badger.
[Jul 22, 2021 8:15:00 AM] (noreply) HADOOP-17796. Upgrade jetty version to 
9.4.43 (#3208)
[Jul 22, 2021 12:30:43 PM] (821684824) YARN-10657. We should make max 
application per queue to support node label. Contributed by Andras Gyori.
[Jul 22, 2021 6:45:49 PM] (Sean Busbey) HADOOP-17813. Checkstyle - Allow line 
length: 100
[Jul 23, 2021 4:38:55 AM] (noreply) HADOOP-17808. ipc.Client to set interrupt 
flag after catching InterruptedException (#3219)




-1 overall


The following subsystems voted -1:
blanks mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in
   org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
   Redundant null check at DataStorage.java:[line 695]
   Redundant nullcheck of metaChannel, which is known to be non-null in
   org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String)
   Redundant null check at MappableBlockLoader.java:[line 138]
   Redundant nullcheck of blockChannel, which is known to be non-null in
   org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId)
   Redundant null check at MemoryMappableBlockLoader.java:[line 75]
   Redundant nullcheck of blockChannel, which is known to be non-null in
   org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId)
   Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
   Redundant nullcheck of metaChannel, which is known to be non-null in
   org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String)
   Redundant null check at NativePmemMappableBlockLoader.java:[line 130]
   org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
   doesn't override java.util.ArrayList.equals(Object)
   At RollingWindowManager.java:[line 1]
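
For readers unfamiliar with the SpotBugs categories above, here is a minimal,
purely illustrative Java sketch (simplified stand-ins, not the Hadoop classes
named in the warnings) of the two patterns being flagged: a null check on a
variable the analysis already knows is non-null, and an ArrayList subclass
that adds state without overriding equals(Object).

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;

public class SpotBugsPatterns {

  // Redundant nullcheck: 'in' cannot be null inside the try-with-resources
  // body, so the null check is dead code.
  static long size(String path) throws IOException {
    try (FileInputStream in = new FileInputStream(path)) {
      if (in != null) {          // redundant: 'in' is known to be non-null here
        return in.getChannel().size();
      }
      return -1;
    }
  }

  // Doesn't override equals: adds a field but inherits ArrayList.equals(),
  // so two instances with different 'owner' values may still compare equal.
  static class UserCounts extends ArrayList<Long> {
    String owner;
  }
}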

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-07-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/

[Jul 22, 2021 8:15:00 AM] (noreply) HADOOP-17796. Upgrade jetty version to 
9.4.43 (#3208)
[Jul 22, 2021 12:30:43 PM] (821684824) YARN-10657. We should make max 
application per queue to support node label. Contributed by Andras Gyori.
[Jul 22, 2021 6:45:49 PM] (Sean Busbey) HADOOP-17813. Checkstyle - Allow line 
length: 100




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-tools/hadoop-azure 
   Inconsistent synchronization of
   org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in;
   locked 81% of time. Unsynchronized access at NativeAzureFileSystem.java:[line 938]

spotbugs :

   module:hadoop-tools 
   Inconsistent synchronization of
   org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in;
   locked 81% of time. Unsynchronized access at NativeAzureFileSystem.java:[line 938]

spotbugs :

   module:root 
   Inconsistent synchronization of
   org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in;
   locked 81% of time. Unsynchronized access at NativeAzureFileSystem.java:[line 938]
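
The "inconsistent synchronization" warnings above generally mean a field is
guarded by the object lock on some code paths but accessed without it on
others. A minimal illustrative sketch of that shape (not the
NativeAzureFsInputStream code):

import java.io.IOException;
import java.io.InputStream;

// Illustrative only: 'in' is written under the lock but read without it,
// which is the shape of an inconsistent-synchronization SpotBugs warning.
class WrappedStream {
  private InputStream in;

  synchronized void reopen(InputStream newStream) throws IOException {
    if (in != null) {
      in.close();
    }
    in = newStream;            // guarded access
  }

  int read() throws IOException {
    return in.read();          // unguarded access: inconsistent synchronization
  }
}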

Failed junit tests :

   hadoop.yarn.csi.client.TestCsiClient 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/results-compile-javac-root.txt
 [364K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/results-javadoc-javadoc-root.txt
 [408K]

   spotbugs:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure-warnings.html
 [8.0K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/branch-spotbugs-hadoop-tools-warnings.html
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/branch-spotbugs-root-warnings.html
 [20K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt
 [20K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/577/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt
 [8.0K]
  

[jira] [Resolved] (HDFS-16130) [FGL] Implement Create File with FGL

2021-07-23 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-16130.

Hadoop Flags: Reviewed
  Resolution: Fixed

I just committed this. Fixed a few checkstyle warnings.
Thank you [~prasad-acit].

> [FGL] Implement Create File with FGL
> 
>
> Key: HDFS-16130
> URL: https://issues.apache.org/jira/browse/HDFS-16130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: Fine-Grained Locking
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Implement FGL for Create File.
> The create API acquires the global lock at multiple stages. Acquire the
> respective partitioned lock instead and continue the create operation.
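
As a rough, hypothetical illustration of the approach the description outlines
(not the HDFS-16130 patch itself), a create path under fine-grained locking
might take only the lock of the partition that covers the parent directory,
rather than the global namespace lock. Every name below (PartitionedLockManager,
partitionFor, createFile) is invented for the sketch.

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of fine-grained locking for a create() call.
// The lock layout and naming are assumptions, not the FGL branch code.
class PartitionedLockManager {
  private final ReentrantReadWriteLock[] partitions;

  PartitionedLockManager(int numPartitions) {
    partitions = new ReentrantReadWriteLock[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      partitions[i] = new ReentrantReadWriteLock();
    }
  }

  private ReentrantReadWriteLock partitionFor(String parentPath) {
    return partitions[Math.floorMod(parentPath.hashCode(), partitions.length)];
  }

  void createFile(String parentPath, String fileName) {
    ReentrantReadWriteLock lock = partitionFor(parentPath);
    lock.writeLock().lock();           // lock only the affected partition
    try {
      // ... allocate the inode and link it under the parent directory ...
    } finally {
      lock.writeLock().unlock();
    }
  }
}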






[jira] [Resolved] (HDFS-16128) [FGL] Add support for saving/loading an FS Image for PartitionedGSet

2021-07-23 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-16128.

Fix Version/s: Fine-Grained Locking
 Hadoop Flags: Reviewed
   Resolution: Fixed

I just committed this. Thank you [~xinglin].

> [FGL] Add support for saving/loading an FS Image for PartitionedGSet
> 
>
> Key: HDFS-16128
> URL: https://issues.apache.org/jira/browse/HDFS-16128
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, namenode
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
> Fix For: Fine-Grained Locking
>
>
> Add support to save Inodes stored in PartitionedGSet when saving an FS image 
> and load Inodes into PartitionedGSet from a saved FS image.
> h1. Saving FSImage
> *Original HDFS design*: iterate every inode in inodeMap and save them into 
> the FSImage file. 
> *FGL*: no change is needed here, since PartitionedGSet also provides an 
> iterator interface, to iterate over inodes stored in partitions. 
> h1. Loading an FSImage
> *Original HDFS design*: HDFS first loads the FSImage files and then applies
> the edit logs for recent changes. FSImage files contain different sections,
> including INodeSections and INodeDirectorySections. An INodeSection contains
> serialized INode objects, and the INodeDirectorySection records the parent
> inode for each INode. When loading an FSImage, the system first loads the
> INodeSections and then the INodeDirectorySections, which set the parent inode
> for each inode. After the FSImage files are loaded, the edit logs are applied.
> Edit logs contain recent changes to the filesystem, including inode creation
> and deletion. For a newly created INode, the parent inode is set before it is
> added to the inodeMap.
> *FGL*: when adding an INode to the PartitionedGSet, we need its parent inode
> in order to determine which partition stores it (with NAMESPACE_KEY_DEPTH = 2).
> Thus, in FGL, when loading FSImage files, we use a temporary LightweightGSet
> (inodeMapTemp) to store inodes. When loadFSImage is done, the parent inode of
> every inode in the FSImage files has been set, so we can move the inodes into
> the PartitionedGSet. Loading edit logs works as usual, since the parent inode
> of an inode is set before it is added to the inodeMap.
> In theory, PartitionedGSet could store inodes whose parent inodes are not yet
> set; all such inodes would land in the 0th partition. However, we decided to
> use a temporary LightweightGSet (inodeMapTemp) to store these inodes, to make
> this case more transparent.
>  
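
A rough sketch of the two-phase load described above, using simplified
stand-in types (Inode, PartitionedMap, and the method names are placeholders,
not the LightweightGSet/PartitionedGSet APIs): inodes are first collected in a
flat temporary map while the sections are read, and moved into partitions only
once every parent is known.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the two-phase FSImage load described in HDFS-16128.
// All types and method names here are placeholders, not the Hadoop GSet APIs.
class FsImageLoadSketch {

  static class Inode {
    final long id;
    Long parentId;        // unknown until the INodeDirectorySection is read

    Inode(long id) {
      this.id = id;
    }
  }

  /** Simplified stand-in for PartitionedGSet: partition chosen by parent inode. */
  static class PartitionedMap {
    final List<Map<Long, Inode>> partitions = new ArrayList<>();

    PartitionedMap(int n) {
      for (int i = 0; i < n; i++) {
        partitions.add(new HashMap<>());
      }
    }

    void put(Inode inode) {
      // Mirrors the NAMESPACE_KEY_DEPTH = 2 idea: the parent decides the partition.
      long parent = inode.parentId == null ? 0L : inode.parentId;
      int p = (int) Math.floorMod(parent, (long) partitions.size());
      partitions.get(p).put(inode.id, inode);
    }
  }

  static PartitionedMap load(List<Inode> inodeSection, Map<Long, Long> dirSection) {
    // Phase 1: read the INodeSection into a flat temporary map (inodeMapTemp),
    // because parents are not known yet and partitions cannot be chosen.
    Map<Long, Inode> inodeMapTemp = new HashMap<>();
    for (Inode inode : inodeSection) {
      inodeMapTemp.put(inode.id, inode);
    }
    // Phase 2: apply the INodeDirectorySection to set each inode's parent,
    // then move everything into the partitioned structure.
    for (Map.Entry<Long, Long> e : dirSection.entrySet()) {
      Inode child = inodeMapTemp.get(e.getKey());
      if (child != null) {
        child.parentId = e.getValue();
      }
    }
    PartitionedMap partitioned = new PartitionedMap(16);
    for (Inode inode : inodeMapTemp.values()) {
      partitioned.put(inode);
    }
    return partitioned;
  }
}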






[jira] [Created] (HDFS-16138) BlockReportProcessingThread exit doesn't print the actual stack

2021-07-23 Thread Renukaprasad C (Jira)
Renukaprasad C created HDFS-16138:
-

 Summary: BlockReportProcessingThread exit doesn't print the actual
stack
 Key: HDFS-16138
 URL: https://issues.apache.org/jira/browse/HDFS-16138
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Renukaprasad C
Assignee: Renukaprasad C


The BlockReportProcessingThread may exit for multiple reasons, but the current
logging prints only the exception message along with a different stack, which
makes the issue difficult to debug.

 

Existing logging:

2021-07-20 10:20:23,104 [Block report processor] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status 1: Block report processor 
encountered fatal exception: java.lang.AssertionError

2021-07-20 10:20:23,104 [Block report processor] ERROR util.ExitUtil 
(ExitUtil.java:terminate(213)) - Terminate called

1: Block report processor encountered fatal exception: java.lang.AssertionError

    at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:304)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5315)

Exception in thread "Block report processor" 1: Block report processor 
encountered fatal exception: java.lang.AssertionError

    at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:304)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5315)

 

Actual issue found at:

2021-07-20 10:20:23,101 [Block report processor] ERROR 
blockmanagement.BlockManager (BlockManager.java:run(5314)) - 
java.lang.AssertionError

java.lang.AssertionError

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlock(BlockManager.java:3480)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processAndHandleReportedBlock(BlockManager.java:4280)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addBlock(BlockManager.java:4202)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processIncrementalBlockReport(BlockManager.java:4338)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processIncrementalBlockReport(BlockManager.java:4305)

    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processIncrementalBlockReport(FSNamesystem.java:4853)

    at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$2.run(NameNodeRpcServer.java:1657)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5334)

    at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5312)

 

This issue was found while working on the FGL branch, but the same issue can
happen on trunk in any error scenario.
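
A minimal sketch of the kind of change this report suggests (assuming an
slf4j-style logger; not the actual patch): log the caught throwable itself
before terminating, so the original stack from the failure point is preserved
instead of only the terminate() call site.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: log the original throwable's stack before exiting,
// instead of only the exception message.
class BlockReportWorkerSketch implements Runnable {
  private static final Logger LOG =
      LoggerFactory.getLogger(BlockReportWorkerSketch.class);

  @Override
  public void run() {
    try {
      processQueue();
    } catch (Throwable t) {
      // Passing 't' as the last argument makes the logger print its full
      // stack trace, which is the information the report says is missing.
      LOG.error("Block report processor encountered fatal exception", t);
      Runtime.getRuntime().halt(1);   // stand-in for the actual exit path
    }
  }

  private void processQueue() {
    // ... drain and apply queued block reports ...
  }
}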

 

[~hemanthboyina] [~hexiaoqiao]






Re: [DISCUSS] Hadoop 3.2.3 release

2021-07-23 Thread Akira Ajisaka
Hi Brahma,

Thank you for volunteering!

-Akira

On Fri, Jul 23, 2021 at 5:57 PM Brahma Reddy Battula  wrote:
>
> Hi Akira,
>
> Thanks for bringing this up.
>
> I want to drive this if nobody else is already planning to do it.
>
>
> On Thu, 22 Jul 2021 at 8:48 AM, Akira Ajisaka  wrote:
>
> > Hi all,
> >
> > Hadoop 3.2.2 was released half a year ago, and now, we have
> > accumulated more than 230 commits [1]. Therefore I want to start the
> > release work for 3.2.3.
> >
> > There is one blocker for 3.2.3 [2].
> > - https://issues.apache.org/jira/browse/HDFS-12920
> >
> > Is there anyone who would volunteer to be the 3.2.3 release manager?
> > Are there any other blockers? If any, please file an issue, raise the
> > blocker, and add the target version.
> >
> > [1]
> > https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20fixVersion%20%3D%203.2.3
> > [2]
> > https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20resolution%20%3D%20Unresolved%20AND%20cf%5B12310320%5D%20%3D%203.2.3
> >
> > Regards,
> > Akira
> >
> > -
> > To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
> >
> > --
>
>
>
> --Brahma Reddy Battula




Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-07-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/

[Jul 22, 2021 8:33:09 PM] (Sean Busbey) HADOOP-17813. Checkstyle - Allow line 
length: 100




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/diff-compile-javac-root.txt
  [496K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-mvnsite-root.txt
  [616K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/diff-patch-shelldocs.txt
  [48K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-javadoc-root.txt
  [32K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [232K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [432K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [40K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [120K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/368/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [96K]
   

[jira] [Created] (HDFS-16137) Improve the comments related to FairCallQueue#queues

2021-07-23 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-16137:
---

 Summary: Improve the comments related to FairCallQueue#queues
 Key: HDFS-16137
 URL: https://issues.apache.org/jira/browse/HDFS-16137
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: JiangHua Zhu


The comment on FairCallQueue#queues is too terse:
   /* The queues */
   private final ArrayList<BlockingQueue<E>> queues;
The purpose of FairCallQueue#queues cannot be seen at a glance.
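
As a hypothetical example of the kind of comment the issue asks for (the
field's generic type is reconstructed from FairCallQueue, while the wrapper
class and the exact wording are placeholders, not the eventual patch):

import java.util.ArrayList;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a more descriptive comment on the 'queues' field.
// The surrounding class is a stand-in, not the actual FairCallQueue source.
class FairCallQueueCommentSketch<E> {
  /**
   * Per-priority sub-queues: index 0 holds the highest-priority calls and the
   * last index the lowest. The multiplexer chooses which sub-queue to poll
   * next, which is how calls from heavy users are de-prioritized.
   */
  private final ArrayList<BlockingQueue<E>> queues = new ArrayList<>();
}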






Re: [DISCUSS] Hadoop 3.2.3 release

2021-07-23 Thread Brahma Reddy Battula
Hi Akira,

Thanks for bringing this up.

I want to drive this if nobody else is already planning to do it.


On Thu, 22 Jul 2021 at 8:48 AM, Akira Ajisaka  wrote:

> Hi all,
>
> Hadoop 3.2.2 was released half a year ago, and now, we have
> accumulated more than 230 commits [1]. Therefore I want to start the
> release work for 3.2.3.
>
> There is one blocker for 3.2.3 [2].
> - https://issues.apache.org/jira/browse/HDFS-12920
>
> Is there anyone who would volunteer to be the 3.2.3 release manager?
> Are there any other blockers? If any, please file an issue, raise the
> blocker, and add the target version.
>
> [1]
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20fixVersion%20%3D%203.2.3
> [2]
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20resolution%20%3D%20Unresolved%20AND%20cf%5B12310320%5D%20%3D%203.2.3
>
> Regards,
> Akira
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
> --



--Brahma Reddy Battula