Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/

[Apr 22, 2020 4:39:48 AM] (github) Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on
[Apr 22, 2020 7:36:33 PM] (liuml07) HADOOP-17001. The suffix name of the unified compression class.
[Apr 22, 2020 8:31:02 PM] (liuml07) HDFS-15276. Concat on INodeRefernce fails with illegal state exception.

-1 overall

The following subsystems voted -1:
    asflicense compile findbugs mvnsite pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

        Parsing Error(s):
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    FindBugs :

        module:hadoop-hdfs-project/hadoop-hdfs
            Possible null pointer dereference of effectiveDirective in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheDirective(CacheDirectiveInfo, EnumSet, boolean) Dereferenced at FSNamesystem.java:[line 7444]
            Possible null pointer dereference of ret in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, boolean) Dereferenced at FSNamesystem.java:[line 3213]
            Possible null pointer dereference of res in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, boolean, Options$Rename[]) Dereferenced at FSNamesystem.java:[line 3248]

        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
            org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should be package protected At WebServiceClient.java:[line 42]

        module:hadoop-cloud-storage-project/hadoop-cos
            org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]

    Failed junit tests :
        hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
        hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.yarn.sls.TestSLSRunner
        hadoop.yarn.sls.appmaster.TestAMSimulator

    compile: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-compile-root.txt [720K]
    cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-compile-root.txt [720K]
    javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-compile-root.txt [720K]
    checkstyle: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-checkstyle-root.txt [16M]
    mvnsite: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-mvnsite-root.txt [284K]
    pathlen: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/pathlen.txt [12K]
    pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-patch-shellcheck.txt [16K]
    shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-patch-shelldocs.txt [96K]
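One FindBugs finding above, "may expose internal representation by returning CosNInputStream$ReadBuffer.buffer", flags a getter that hands callers a reference to a mutable internal array. A minimal sketch of the problem and the usual defensive-copy fix — class and method names here are illustrative, not the actual CosNInputStream code:

```java
import java.util.Arrays;

// Hypothetical buffer holder illustrating FindBugs' "expose internal
// representation" (EI_EXPOSE_REP) warning and its defensive-copy fix.
class ReadBuffer {
    private final byte[] buffer;

    ReadBuffer(byte[] data) {
        // Copy on the way in, so the caller's array cannot alias our state.
        this.buffer = Arrays.copyOf(data, data.length);
    }

    // Flagged pattern: returning the array itself lets callers mutate
    // this object's internal state.
    byte[] getBufferUnsafe() {
        return buffer;
    }

    // Fixed pattern: return a copy (or a read-only ByteBuffer wrapper).
    byte[] getBufferCopy() {
        return Arrays.copyOf(buffer, buffer.length);
    }
}

public class ExposeRepDemo {
    public static void main(String[] args) {
        ReadBuffer rb = new ReadBuffer(new byte[] {1, 2, 3});

        byte[] leaked = rb.getBufferUnsafe();
        leaked[0] = 99; // silently corrupts ReadBuffer's internal state

        System.out.println(rb.getBufferCopy()[0]); // prints 99

        byte[] copy = rb.getBufferCopy();
        copy[0] = 42; // mutating the copy does not touch internal state
        System.out.println(rb.getBufferCopy()[0]); // still 99
    }
}
```

The same reasoning applies to the `sslFactory should be package protected` finding: both are about limiting which code can reach mutable internals.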
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/664/

[Apr 22, 2020 9:53:15 PM] (liuml07) HDFS-15276. Concat on INodeRefernce fails with illegal state exception.

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

        Parsing Error(s):
            hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
            hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

        module:hadoop-common-project/hadoop-minikdc
            Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 515]

        module:hadoop-common-project/hadoop-auth
            org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192]

        module:hadoop-common-project/hadoop-common
            org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:[line 44]
            org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67]
            Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118]
            Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 383]
            Useless condition: lazyPersist == true at this point At CommandWithDestination.java:[line 502]
            org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java:[line 78]
            org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97]
            org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java:[line 71]
            org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89]
            Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 389]
            Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398]
            org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:[line 49]
            org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode At DefaultMetricsSystem.java:[line 92]
            Useless object stored in variable seqOs of method org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
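The hadoop-auth finding, "makes inefficient use of keySet iterator instead of entrySet iterator", is FindBugs' WMI_WRONG_MAP_ITERATOR pattern: looping over `keySet()` and calling `get(key)` does a redundant lookup per entry, while `entrySet()` yields key and value in one pass. A minimal sketch — the map contents and method names are illustrative, not the actual MultiSchemeAuthenticationHandler code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapIterationDemo {

    // Flagged pattern: one extra map lookup per entry.
    static int totalLengthViaKeySet(Map<String, String> handlers) {
        int total = 0;
        for (String scheme : handlers.keySet()) {
            total += handlers.get(scheme).length(); // redundant get()
        }
        return total;
    }

    // Preferred pattern: key and value come from the same entry.
    static int totalLengthViaEntrySet(Map<String, String> handlers) {
        int total = 0;
        for (Map.Entry<String, String> e : handlers.entrySet()) {
            total += e.getValue().length();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, String> handlers = new LinkedHashMap<>();
        handlers.put("negotiate", "KerberosHandler");
        handlers.put("basic", "LdapHandler");
        // Both compute the same result; entrySet just avoids the lookups.
        System.out.println(totalLengthViaKeySet(handlers));   // prints 26
        System.out.println(totalLengthViaEntrySet(handlers)); // prints 26
    }
}
```

The behavior is identical either way, which is why FindBugs reports it as a performance warning rather than a correctness bug.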
Re: [Hadoop-3.3 Release update]- branch-3.3 has created
> Since blockers are not closed, I didn't cut the branch because multiple
> branches might confuse or somebody might miss to commit.

The current situation is already confusing. The 3.3.1 version already exists in JIRA, so some committers wrongly commit non-critical issues to branch-3.3 and set the fix version to 3.3.1. I think we should now cut branch-3.3.0 and freeze the source code except for the blockers.

-Akira

On Tue, Apr 21, 2020 at 3:05 PM Brahma Reddy Battula wrote:

> Sure, I will do that.
>
> Since blockers are not closed, I didn't cut the branch because multiple
> branches might confuse or somebody might miss to commit. Shall I wait till
> this weekend to create it?
>
> On Mon, Apr 20, 2020 at 11:57 AM Akira Ajisaka wrote:
>
>> Hi Brahma,
>>
>> Thank you for preparing the release.
>> Could you cut branch-3.3.0? I would like to backport some fixes for 3.3.1
>> and not for 3.3.0.
>>
>> Thanks and regards,
>> Akira
>>
>> On Fri, Apr 17, 2020 at 11:11 AM Brahma Reddy Battula wrote:
>>
>>> Hi All,
>>>
>>> We are down to two blocker issues now (YARN-10194 and YARN-9848), which
>>> are in patch-available state. Hopefully we can get the RC out soon.
>>>
>>> Thanks to @Prabhu Joseph, @masakate, @akira, @Wei-Chiu Chuang, and
>>> others for helping resolve the blockers.
>>>
>>> On Tue, Apr 14, 2020 at 10:49 PM Brahma Reddy Battula wrote:
>>>
>>>> @Prabhu Joseph
>>>> > Have committed the YARN blocker YARN-10219 to trunk and cherry-picked
>>>> > to branch-3.3. Right now, there are two blocker Jiras - YARN-10233 and
>>>> > HADOOP-16982 which I will help to review and commit. Thanks.
>>>> Looks like you committed YARN-10219. Noted YARN-10233 and HADOOP-16982
>>>> as blockers. (We have shipped many releases with YARN-10233; it is not
>>>> newly introduced.)
>>>>
>>>> Thanks @Vinod Kumar Vavilapalli and @adam Antal; I noted YARN-9848 as a
>>>> blocker as you mentioned above.
>>>>
>>>> @All, currently the following four blockers are pending for the 3.3.0
>>>> RC: HADOOP-16963, YARN-10233, HADOOP-16982, and YARN-9848.
On Tue, Apr 14, 2020 at 8:11 PM Vinod Kumar Vavilapalli <vino...@apache.org> wrote:

> Looks like a really bad bug to me.
>
> +1 for revert and +1 for making that a 3.3.0 blocker. I think we should
> also revert it in a 3.2 maintenance release too.
>
> Thanks
> +Vinod
>
> > On Apr 14, 2020, at 5:03 PM, Adam Antal wrote:
> >
> > Hi everyone,
> >
> > Sorry for coming a bit late with this, but there's also one jira that
> > can have potential impact on clusters and we should talk about it.
> >
> > Steven Rand found this problem earlier and commented on
> > https://issues.apache.org/jira/browse/YARN-4946.
> > The bug has impact on the RM state store: the RM does not delete apps -
> > see more details in his comment here:
> > https://issues.apache.org/jira/browse/YARN-4946?focusedCommentId=16898599=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16898599
> > (FYI he also created https://issues.apache.org/jira/browse/YARN-9848
> > with the revert task.)
> >
> > It might not be an actual blocker, but since there wasn't any consensus
> > about a follow-up action, I thought we should decide how to proceed
> > before release 3.3.0.
> >
> > Regards,
> > Adam
> >
> > On Tue, Apr 14, 2020 at 9:35 AM Prabhu Joseph <prabhujose.ga...@gmail.com> wrote:
> >
> >> Thanks Brahma for the update.
> >>
> >> Have committed the YARN blocker YARN-10219 to trunk and cherry-picked
> >> to branch-3.3. Right now, there are two blocker Jiras - YARN-10233 and
> >> HADOOP-16982 which I will help to review and commit. Thanks.
> >>
> >> [image: Screen Shot 2020-04-14 at 1.01.51 PM.png]
> >>
> >> project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
> >> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.3.0
> >> ORDER BY priority DESC
> >>
> >> On Sun, Apr 12, 2020 at 12:19 AM Brahma Reddy Battula <bra...@apache.org> wrote:
> >>
> >>> *Pending for 3.3.0 Release:*
> >>>
> >>> One blocker (HADOOP-16963) needs confirmation, and the following jiras
> >>> are open because they need to be merged to other branches (I am
> >>> tracking them; ideally they can be closed, with separate jiras raised
> >>> to track the backports).
> >>>
> >>> 1-4 of 4 results:
> >>> https://issues.apache.org/jira/issues/?jql=project%20in%20(%22Hadoop%20HDFS%22)%20AND%20resolution%20%3D%20Unresolved%20AND%20(cf%5B12310320%5D%20%3D%203.3.0%20OR%20fixVersion%20%3D%203.3.0)%20ORDER%20BY%20priority%20DESC#