[jira] [Created] (HADOOP-13669) KMS Server should log exceptions before throwing
Xiao Chen created HADOOP-13669:
----------------------------------

             Summary: KMS Server should log exceptions before throwing
                 Key: HADOOP-13669
                 URL: https://issues.apache.org/jira/browse/HADOOP-13669
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Xiao Chen
            Assignee: Suraj Acharya

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
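The improvement the JIRA asks for is a server-side pattern: log the exception before rethrowing it, so the stack trace survives in the server log even when the exception is translated into a sanitized client response. A minimal sketch with made-up class and method names (this is not the actual KMS code):

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative only: LogBeforeThrow and rollKey are hypothetical stand-ins
// for a KMS server-side handler, not code from hadoop-kms.
public class LogBeforeThrow {
    private static final Logger LOG = Logger.getLogger("KMS");

    static String rollKey(String name) throws IOException {
        try {
            if (name == null || name.isEmpty()) {
                throw new IllegalArgumentException("key name is empty");
            }
            return name + "@v2";
        } catch (IllegalArgumentException e) {
            // Log first: without this, the stack trace never reaches the
            // server log once the exception is wrapped for the client.
            LOG.log(Level.SEVERE, "Error rolling key '" + name + "'", e);
            throw new IOException(e);
        }
    }
}
```

The point of the pattern is that the `LOG.log(..., e)` call keeps the full cause chain server-side while the client still receives only the wrapped `IOException`.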
Re: Is anyone seeing this during trunk build?
It looks like *org.apache.hadoop.hdfs.StripeReader* uses a Java 8 lambda expression, which commons-bcel still cannot parse. As per https://issues.apache.org/jira/browse/BCEL-173 this should be fixed in bcel release 6.0. Or maybe we replace it with bcel-findbugs, as suggested by:
https://github.com/RichardWarburton/lambda-behave/issues/31#issuecomment-86052095

On Thu, Sep 29, 2016 at 2:01 PM, Kihwal Lee wrote:
> This also shows up in the precommit builds. This is not failing the build,
> so it might scroll over quickly before you realize.
> Search for ClassFormatException
> https://builds.apache.org/job/PreCommit-HDFS-Build/16928/artifact/patchprocess/branch-mvninstall-root.txt
>
> From: Ted Yu
> To: Kihwal Lee
> Cc: Hdfs-dev ; Hadoop Common <common-dev@hadoop.apache.org>
> Sent: Wednesday, September 28, 2016 7:16 PM
> Subject: Re: Is anyone seeing this during trunk build?
>
> I used the same command but didn't see the error you saw.
>
> Here is my environment:
>
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=512M; support was removed in 8.0
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> 2015-11-10T08:41:47-08:00)
> Maven home: /Users/tyu/apache-maven-3.3.9
> Java version: 1.8.0_91, vendor: Oracle Corporation
> Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"
>
> FYI
>
> On Wed, Sep 28, 2016 at 3:54 PM, Kihwal Lee wrote:
>
> > I just noticed this during a trunk build. I was doing "mvn clean install
> > -DskipTests". The build succeeds.
> > Is anyone seeing this? I am using openjdk8u102.
> >
> > ===
> > [WARNING] Unable to process class org/apache/hadoop/hdfs/StripeReader.class
> > in JarAnalyzer File /home1/kihwal/devel/apache/hadoop/hadoop-hdfs-project/
> > hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
> > org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in
> > constant pool: 18
> >     at org.apache.bcel.classfile.Constant.readConstant(Constant.java:146)
> >     at org.apache.bcel.classfile.ConstantPool.<init>(ConstantPool.java:67)
> >     at org.apache.bcel.classfile.ClassParser.readConstantPool(ClassParser.java:222)
> >     at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:136)
> >     at org.apache.maven.shared.jar.classes.JarClassesAnalysis.analyze(JarClassesAnalysis.java:92)
> >     at org.apache.maven.report.projectinfo.dependencies.Dependencies.getJarDependencyDetails(Dependencies.java:255)
> >     at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.hasSealed(DependenciesRenderer.java:1454)
> >     at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderSectionDependencyFileDetails(DependenciesRenderer.java:536)
> >     at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderBody(DependenciesRenderer.java:263)
> >     at org.apache.maven.reporting.AbstractMavenReportRenderer.render(AbstractMavenReportRenderer.java:79)
> >     at org.apache.maven.report.projectinfo.DependenciesReport.executeReport(DependenciesReport.java:186)
> >     at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:190)
> >     at org.apache.maven.report.projectinfo.AbstractProjectInfoReport.execute(AbstractProjectInfoReport.java:202)
> >     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
> >     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
> >     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> >     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> >     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> >     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> >     at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> >     at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> >     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
> >     at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
> >     at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
> >     at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
> >     at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
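For context on the trace above: byte tag 18 is the CONSTANT_InvokeDynamic constant-pool entry that javac emits for every lambda or method reference, and BCEL releases before 6.0 do not recognize it. Any class containing a lambda reproduces the failure; StripeReader simply happens to contain one. A minimal illustrative class (not the StripeReader code) that puts tag 18 into a class file:

```java
import java.util.function.IntUnaryOperator;

// Compiling this class produces an invokedynamic call site backed by
// LambdaMetafactory, i.e. a CONSTANT_InvokeDynamic (tag 18) entry in the
// constant pool -- exactly what pre-6.0 BCEL chokes on in JarAnalyzer.
public class LambdaExample {
    static int applyTwice(IntUnaryOperator f, int x) {
        return f.applyAsInt(f.applyAsInt(x));
    }

    public static void main(String[] args) {
        // The lambda below is what triggers the invokedynamic machinery.
        IntUnaryOperator inc = n -> n + 1;
        System.out.println(applyTwice(inc, 40)); // prints 42
    }
}
```

Running `javap -v LambdaExample` on the compiled class should show the InvokeDynamic constant and a BootstrapMethods attribute.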
[jira] [Reopened] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth reopened HADOOP-13081:
------------------------------------

I have reverted this from trunk, branch-2 and branch-2.8. [~daryn], can you please comment on whether the plan stated above looks good to you?

> add the ability to create multiple UGIs/subjects from one kerberos login
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-13081
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13081
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: security
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>             Fix For: 2.8.0, 3.0.0-alpha1
>
>         Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch,
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch,
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some
> tasks, but also want to add tokens to the resulting UGI that would be
> specific to each task. We don't want to authenticate with kerberos for every
> task.
> I am not sure how this can be accomplished with the existing UGI interface.
> Perhaps some clone method would be helpful, similar to createProxyUser minus
> the proxy stuff; or it could just relogin anew from the ticket cache.
> getUGIFromTicketCache seems like the best option in the existing code, but
> there doesn't appear to be a consistent way of handling the ticket cache
> location - the above method, which I only see called in tests, uses a config
> setting that is not used anywhere else, and the env variable for the location
> that is used in the main ticket-cache-related methods is not set uniformly on
> all paths - therefore, trying to find the correct ticket cache and passing it
> via the config setting to getUGIFromTicketCache seems even hackier than doing
> the clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user
> parameter on the main path - it logs a warning for multiple principals and
> then logs in with the first available.
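The "clone" idea in the HADOOP-13081 description can be illustrated with plain JAAS from the JDK. This is a hypothetical sketch of the semantics only, not the actual UserGroupInformation patch: after one login, build a second Subject that shares the login's principals but owns its credential sets, so per-task tokens added to the clone never leak into the original.

```java
import java.security.Principal;
import javax.security.auth.Subject;

// Illustrative sketch: Subject's copying constructor populates the new
// Subject with copies of the given sets, so later mutations of the clone's
// sets do not affect the original Subject (and vice versa).
public class SubjectClone {
    static Subject cloneWithOwnCredentials(Subject original) {
        return new Subject(false, // false = the clone stays writable
                original.getPrincipals(),
                original.getPublicCredentials(),
                original.getPrivateCredentials());
    }
}
```

A per-task token (in the real scenario, a Hadoop delegation token) would be added to the clone's private credentials, leaving the Kerberos login Subject untouched.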
Re: Is anyone seeing this during trunk build?
This also shows up in the precommit builds. This is not failing the build, so it might scroll over quickly before you realize.
Search for ClassFormatException
https://builds.apache.org/job/PreCommit-HDFS-Build/16928/artifact/patchprocess/branch-mvninstall-root.txt

From: Ted Yu
To: Kihwal Lee
Cc: Hdfs-dev ; Hadoop Common
Sent: Wednesday, September 28, 2016 7:16 PM
Subject: Re: Is anyone seeing this during trunk build?

I used the same command but didn't see the error you saw.

Here is my environment:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)
Maven home: /Users/tyu/apache-maven-3.3.9
Java version: 1.8.0_91, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"

FYI

On Wed, Sep 28, 2016 at 3:54 PM, Kihwal Lee wrote:
> I just noticed this during a trunk build. I was doing "mvn clean install
> -DskipTests". The build succeeds.
> Is anyone seeing this? I am using openjdk8u102.
>
>
> ===
> [WARNING] Unable to process class org/apache/hadoop/hdfs/StripeReader.class
> in JarAnalyzer File /home1/kihwal/devel/apache/hadoop/hadoop-hdfs-project/
> hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
> org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in
> constant pool: 18
>     at org.apache.bcel.classfile.Constant.readConstant(Constant.java:146)
>     at org.apache.bcel.classfile.ConstantPool.<init>(ConstantPool.java:67)
>     at org.apache.bcel.classfile.ClassParser.readConstantPool(ClassParser.java:222)
>     at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:136)
>     at org.apache.maven.shared.jar.classes.JarClassesAnalysis.analyze(JarClassesAnalysis.java:92)
>     at org.apache.maven.report.projectinfo.dependencies.Dependencies.getJarDependencyDetails(Dependencies.java:255)
>     at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.hasSealed(DependenciesRenderer.java:1454)
>     at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderSectionDependencyFileDetails(DependenciesRenderer.java:536)
>     at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderBody(DependenciesRenderer.java:263)
>     at org.apache.maven.reporting.AbstractMavenReportRenderer.render(AbstractMavenReportRenderer.java:79)
>     at org.apache.maven.report.projectinfo.DependenciesReport.executeReport(DependenciesReport.java:186)
>     at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:190)
>     at org.apache.maven.report.projectinfo.AbstractProjectInfoReport.execute(AbstractProjectInfoReport.java:202)
>     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
>     at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
>     at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
>     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
>     at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
>     at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
>     at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
>     at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:414)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:357)
> ===
>
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/109/

[Sep 28, 2016 1:24:21 PM] (kihwal) HADOOP-11780. Prevent IPC reader thread death. Contributed by Daryn
[Sep 28, 2016 6:47:37 PM] (arp) HDFS-10824. MiniDFSCluster#storageCapacities has no effects on real
[Sep 28, 2016 9:56:41 PM] (rkanter) YARN-5400. Light cleanup in ZKRMStateStore (templedf via rkanter)
[Sep 28, 2016 10:41:40 PM] (rkanter) MAPREDUCE-6718. add progress log to JHS during startup (haibochen via
[Sep 28, 2016 10:57:23 PM] (kihwal) HDFS-10779. Rename does not need to re-solve destination. Contributed by
[Sep 28, 2016 11:01:03 PM] (wang) HDFS-10914. Move remnants of oah.hdfs.client to hadoop-hdfs-client.
[Sep 28, 2016 11:03:51 PM] (liuml07) HDFS-10892. Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'.
[Sep 28, 2016 11:19:32 PM] (cnauroth) HADOOP-13599. s3a close() to be non-synchronized, so avoid risk of
[Sep 29, 2016 10:35:00 AM] (stevel) HADOOP-13663 Index out of range in SysInfoWindows. Contributed by Inigo

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.hdfs.qjournal.TestNNWithQJM
        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.qjournal.TestSecureNNWithQJM
        hadoop.hdfs.TestFileAppend3
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.server.namenode.ha.TestHAAppend
        hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
        hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
        hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
        hadoop.yarn.server.timelineservice.storage.common.TestSeparator
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
        hadoop.yarn.server.resourcemanager.TestResourceTrackerService
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.api.impl.TestNMClient
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
        hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
        hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
        hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers

    Timed out junit tests:
        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
        org.apache.hadoop.mapred.TestMRIntermediateDataEncryption
        org.apache.hadoop.mapred.TestMROpportunisticMaps

    compile:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/109/artifact/out/patch-compile-root.txt [308K]
    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/109/artifact/out/patch-compile-root.txt [308K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/109/artifact/out/patch-compile-root.txt [308K]
    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/109/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/109/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [52K]
[jira] [Resolved] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang resolved HADOOP-11090.
----------------------------------
    Resolution: Fixed

I think we're safe to resolve this JIRA. CDH blacklists a few JDK versions, but is certified with JDK8 (based on a heavily modified 2.6):
https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#pcm_jdk

HDP seems similar (2.7-based):
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Installing_HDP_AMB/content/_jdk_requirements.html

We also bumped the required JDK version to JDK8 for 3.0.0-alpha1. If there are additional JDK8 issues, let's follow up with separate JIRAs. Thanks all.

> [Umbrella] Support Java 8 in Hadoop
> -----------------------------------
>
>                 Key: HADOOP-11090
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11090
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Mohammad Kamrul Islam
>            Assignee: Mohammad Kamrul Islam
>
> Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly
> works with Java 8 is important for the Apache community.
> This JIRA is to track the issues/experiences encountered during the Java 8
> migration. If you find a potential bug, please create a separate JIRA either
> as a sub-task or linked to this JIRA.
> If you find a Hadoop or JVM configuration tuning, you can create a JIRA as
> well. Or you can add a comment here.
Re: [VOTE] Release Apache Hadoop 2.6.5 (RC0)
+1

Thanks Sangjin!

1. Verified md5 checksums and signature on src and the release tar.gz.
2. Built from source.
3. Started up a pseudo-distributed cluster.
4. Successfully ran a Pi job.
5. Ran the balancer.
6. Inspected the UI for RM, NN, and JobHistory.

On Tue, Sep 27, 2016 at 4:11 PM, Lei Xu wrote:
> +1
>
> The steps I've done:
>
> * Downloaded release tar and source tar, verified MD5.
> * Ran an HDFS cluster, and copied files between the local filesystem and HDFS.
>
> On Tue, Sep 27, 2016 at 1:28 PM, Sangjin Lee wrote:
> > Hi folks,
> >
> > I have created a release candidate RC0 for the Apache Hadoop 2.6.5 release
> > (the next maintenance release in the 2.6.x release line). Below are the
> > details of this release candidate:
> >
> > The RC is available for validation at:
> > http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.
> >
> > The RC tag in git is release-2.6.5-RC0 and its git commit is
> > 6939fc935fba5651fdb33386d88aeb8e875cf27a.
> >
> > The maven artifacts are staged via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1048/.
> >
> > You can find my public key at
> > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
> >
> > Please try the release and vote. The vote will run for the usual 5 days.
> > Huge thanks to Chris Trezzo for spearheading the release management and
> > doing all the work!
> >
> > Thanks,
> > Sangjin
>
> --
> Lei (Eddy) Xu
> Software Engineer, Cloudera
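Step 1 of the checklist above (verifying the md5 checksum) amounts to computing a digest of the downloaded artifact and comparing it to the published value; release verification is usually done with `md5sum -c` and `gpg --verify` on the command line. As a hedged sketch of what the checksum comparison does (class and method names here are made up):

```java
import java.security.MessageDigest;

// Illustrative only: computes an MD5 digest with the JDK's MessageDigest
// and compares it, case-insensitively, to a published checksum string.
public class Md5Check {
    static String md5Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b)); // two lowercase hex digits per byte
        }
        return sb.toString();
    }

    static boolean matches(byte[] data, String published) throws Exception {
        return md5Hex(data).equalsIgnoreCase(published.trim());
    }
}
```

In practice the `data` would be the bytes of the release tarball, and `published` the contents of the corresponding `.md5` file from the RC staging area.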
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/

[Sep 28, 2016 9:48:18 AM] (vvasudev) YARN-5662. Provide an option to enable ContainerMonitor. Contributed by
[Sep 28, 2016 10:40:10 AM] (varunsaxena) YARN-5599. Publish AM launch command to ATS (Rohith Sharma K S via Varun
[Sep 28, 2016 1:24:21 PM] (kihwal) HADOOP-11780. Prevent IPC reader thread death. Contributed by Daryn
[Sep 28, 2016 6:47:37 PM] (arp) HDFS-10824. MiniDFSCluster#storageCapacities has no effects on real
[Sep 28, 2016 9:56:41 PM] (rkanter) YARN-5400. Light cleanup in ZKRMStateStore (templedf via rkanter)
[Sep 28, 2016 10:41:40 PM] (rkanter) MAPREDUCE-6718. add progress log to JHS during startup (haibochen via
[Sep 28, 2016 10:57:23 PM] (kihwal) HDFS-10779. Rename does not need to re-solve destination. Contributed by
[Sep 28, 2016 11:01:03 PM] (wang) HDFS-10914. Move remnants of oah.hdfs.client to hadoop-hdfs-client.
[Sep 28, 2016 11:03:51 PM] (liuml07) HDFS-10892. Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'.
[Sep 28, 2016 11:19:32 PM] (cnauroth) HADOOP-13599. s3a close() to be non-synchronized, so avoid risk of

-1 overall

The following subsystems voted -1:
    asflicense unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.net.TestDNS
        hadoop.hdfs.TestDFSShell
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/diff-compile-javac-root.txt [168K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/diff-checkstyle-root.txt [16M]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/diff-patch-pylint.txt [16K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/whitespace-eol.txt [11M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/whitespace-tabs.txt [1.3M]
    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]
    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [120K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [144K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [120K]
    asflicense:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/179/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org