[jira] [Created] (HADOOP-13668) Make InstrumentedLock require ReentrantLock
Arpit Agarwal created HADOOP-13668:

Summary: Make InstrumentedLock require ReentrantLock
Key: HADOOP-13668
URL: https://issues.apache.org/jira/browse/HADOOP-13668
Project: Hadoop Common
Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal

Make InstrumentedLock use ReentrantLock instead of Lock, so nested acquire/release calls can be instrumented correctly.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
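For context on why the concrete type matters: the plain `Lock` interface gives a wrapper no way to tell an outermost acquire from a nested (reentrant) one, so hold-time instrumentation fires on every release. `ReentrantLock.getHoldCount()` makes the distinction observable. A minimal sketch of the idea — the class and field names below are illustrative, not the actual Hadoop `InstrumentedLock` API:

```java
import java.util.concurrent.locks.ReentrantLock;

/**
 * Illustrative sketch: requiring ReentrantLock (rather than the Lock
 * interface) lets the wrapper use getHoldCount() to instrument only the
 * outermost acquire/release of a nested sequence.
 */
public class InstrumentedLockSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile long acquiredAtNanos;

    public void lock() {
        lock.lock();
        if (lock.getHoldCount() == 1) {   // outermost acquire only
            acquiredAtNanos = System.nanoTime();
        }
    }

    public void unlock() {
        if (lock.getHoldCount() == 1) {   // outermost release only
            long heldNanos = System.nanoTime() - acquiredAtNanos;
            // report heldNanos to metrics, or warn if above a threshold
        }
        lock.unlock();
    }

    public int holdCount() {
        return lock.getHoldCount();
    }

    public static void main(String[] args) {
        InstrumentedLockSketch l = new InstrumentedLockSketch();
        l.lock();
        l.lock();     // nested acquire: no new timestamp is taken
        l.unlock();
        l.unlock();   // outermost release: the held duration covers both
        System.out.println("final hold count: " + l.holdCount());
        // prints: final hold count: 0
    }
}
```

With the `Lock` interface alone, the `getHoldCount() == 1` checks above are impossible, which is the motivation stated in the issue.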
Re: Is anyone seeing this during trunk build?
I used the same command but didn't see the error you saw. Here is my environment:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)
Maven home: /Users/tyu/apache-maven-3.3.9
Java version: 1.8.0_91, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"

FYI

On Wed, Sep 28, 2016 at 3:54 PM, Kihwal Lee wrote:
> I just noticed this during a trunk build. I was doing "mvn clean install
> -DskipTests". The build succeeds.
> Is anyone seeing this? I am using openjdk8u102.
>
> ===
> [WARNING] Unable to process class org/apache/hadoop/hdfs/StripeReader.class in JarAnalyzer
> File /home1/kihwal/devel/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
> org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 18
> at org.apache.bcel.classfile.Constant.readConstant(Constant.java:146)
> [...]
> ===
Is anyone seeing this during trunk build?
I just noticed this during a trunk build. I was doing "mvn clean install -DskipTests". The build succeeds. Is anyone seeing this? I am using openjdk8u102.

===
[WARNING] Unable to process class org/apache/hadoop/hdfs/StripeReader.class in JarAnalyzer
File /home1/kihwal/devel/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 18
    at org.apache.bcel.classfile.Constant.readConstant(Constant.java:146)
    at org.apache.bcel.classfile.ConstantPool.<init>(ConstantPool.java:67)
    at org.apache.bcel.classfile.ClassParser.readConstantPool(ClassParser.java:222)
    at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:136)
    at org.apache.maven.shared.jar.classes.JarClassesAnalysis.analyze(JarClassesAnalysis.java:92)
    at org.apache.maven.report.projectinfo.dependencies.Dependencies.getJarDependencyDetails(Dependencies.java:255)
    at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.hasSealed(DependenciesRenderer.java:1454)
    at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderSectionDependencyFileDetails(DependenciesRenderer.java:536)
    at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderBody(DependenciesRenderer.java:263)
    at org.apache.maven.reporting.AbstractMavenReportRenderer.render(AbstractMavenReportRenderer.java:79)
    at org.apache.maven.report.projectinfo.DependenciesReport.executeReport(DependenciesReport.java:186)
    at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:190)
    at org.apache.maven.report.projectinfo.AbstractProjectInfoReport.execute(AbstractProjectInfoReport.java:202)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:414)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:357)
===
Re: Upgrading Hadoop dependencies and catching potential incompatibilities for HBase
Great suggestion!

> On Sep 28, 2016, at 11:55 AM, Enis Söztutar wrote:
>
> Can Hadoop please shade ALL of the dependencies (including PB) in Hadoop-3
> so that we do not have this mess going forward.
>
> Enis
>
>> On Wed, Sep 28, 2016 at 11:19 AM, Jonathan Hsieh wrote:
>>
>> You can build hbase using the profile by adding this -D setting.
>>
>> mvn clean test install -Dhadoop.profile=3.0
>>
>> At the moment hbase's hadoop 3 profile fails at the mvn compile,
>> install, and test phases. There are some patch-available issues
>> addressing these in HBase currently.
>>
>> compile: https://issues.apache.org/jira/browse/HBASE-16711
>> test: https://issues.apache.org/jira/browse/HBASE-6581
>> install: https://issues.apache.org/jira/browse/HBASE-16712
>>
>> IIRC, hbase's hadoop3 profile currently tracks the hadoop-3.0
>> SNAPSHOT artifacts. Sean, would these dependency changes be targeted for
>> this branch and land in the next hadoop 3 alpha/beta?
>> Jon.
>>
>> On Wed, Sep 28, 2016 at 9:56 AM, Sean Mackrory wrote:
>>
>>> HBase folks,
>>>
>>> I'd like to work on upgrading some of the dependencies in Hadoop that are
>>> starting to lag behind:
>>>
>>> jackson2 (https://issues.apache.org/jira/browse/HADOOP-12705)
>>> jaxb-api (https://issues.apache.org/jira/browse/HADOOP-13659)
>>> commons-configuration (https://issues.apache.org/jira/browse/HADOOP-13660)
>>> htrace (https://issues.apache.org/jira/browse/HADOOP-13661)
>>>
>>> Some of these (chiefly jackson) are known to be high-risk upgrades and have
>>> caused problems for HBase in the past. I wanted to give you a heads up
>>> about this work and coordinate with you to make sure I do all the testing
>>> you think is necessary to discover any potential problems well in advance
>>> of the breakages.
>>>
>>> I was going to look at just running HBase's unit tests when built against
>>> Hadoop's trunk with my changes (Should I expect any problems here? Is
>>> there a branch already focused on Hadoop 3 compatibility or anything?)
>>> and Bigtop's integration tests too. Anything else you would add to this?
>>
>> --
>> // Jonathan Hsieh (shay)
>> // HBase Tech Lead, Software Engineer, Cloudera
>> // j...@cloudera.com // @jmhsieh
Re: Upgrading Hadoop dependencies and catching potential incompatibilities for HBase
> I'm working on this under HADOOP-11804, using HBase as my test application.

Cool, glad to see progress on this.

Enis

On Wed, Sep 28, 2016 at 12:15 PM, Sean Busbey wrote:
> On Wed, Sep 28, 2016 at 1:55 PM, Enis Söztutar wrote:
> > Can Hadoop please shade ALL of the dependencies (including PB) in Hadoop-3
> > so that we do not have this mess going forward.
> >
> > Enis
>
> I'm working on this under HADOOP-11804, using HBase as my test application.
>
> --
> busbey
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/

[Sep 27, 2016 2:03:10 PM] (brahma) HDFS-10889. Remove outdated Fault Injection Framework documentaion.
[Sep 27, 2016 4:29:24 PM] (iwasakims) HDFS-10426. TestPendingInvalidateBlock failed in trunk. Contributed by
[Sep 27, 2016 5:02:15 PM] (arp) HDFS-10828. Fix usage of FsDatasetImpl object lock in ReplicaMap. (Arpit
[Sep 27, 2016 6:26:45 PM] (wangda) HADOOP-13544. JDiff reports unncessarily show unannotated APIs and cause
[Sep 27, 2016 6:54:55 PM] (wangda) YARN-3142. Improve locks in AppSchedulingInfo. (Varun Saxena via wangda)
[Sep 27, 2016 9:55:28 PM] (yzhang) HDFS-10376. Enhance setOwner testing. (John Zhuge via Yongjun Zhang)
[Sep 28, 2016 12:36:53 AM] (liuml07) HADOOP-13658. Replace config key literal strings with names I: hadoop
[Sep 29, 2016 1:18:27 AM] (kai.zheng) Revert "HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back" This
[Sep 28, 2016 2:28:41 AM] (aengineer) HDFS-10900. DiskBalancer: Complete the documents for the report command.
[Sep 28, 2016 3:40:17 AM] (liuml07) HDFS-10915. Fix time measurement bug in TestDatanodeRestart. Contributed
[Sep 28, 2016 4:35:06 AM] (aengineer) HDFS-9850. DiskBalancer: Explore removing references to FsVolumeSpi.
[Sep 28, 2016 9:48:18 AM] (vvasudev) YARN-5662. Provide an option to enable ContainerMonitor. Contributed by
[Sep 28, 2016 10:40:10 AM] (varunsaxena) YARN-5599. Publish AM launch command to ATS (Rohith Sharma K S via Varun

-1 overall

The following subsystems voted -1: compile unit
The following subsystems voted -1 but were configured to be filtered/ignored: cc javac
The following subsystems are considered long running (runtime bigger than 1h 0m 0s): unit

Specific tests:

Failed junit tests:
  hadoop.hdfs.TestFileChecksum
  hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
  hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
  hadoop.hdfs.tools.TestDFSAdminWithHA
  hadoop.hdfs.web.TestWebHdfsTimeouts
  hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
  hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
  hadoop.yarn.server.timeline.TestRollingLevelDB
  hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
  hadoop.yarn.server.timeline.TestTimelineDataManager
  hadoop.yarn.server.timeline.TestLeveldbTimelineStore
  hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
  hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
  hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
  hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
  hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
  hadoop.yarn.server.timelineservice.storage.common.TestSeparator
  hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
  hadoop.yarn.server.resourcemanager.TestRMRestart
  hadoop.yarn.server.resourcemanager.TestResourceTrackerService
  hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
  hadoop.yarn.server.TestContainerManagerSecurity
  hadoop.yarn.client.api.impl.TestNMClient
  hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
  hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
  hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
  hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage
  hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
  hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
  hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
  hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
  hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
  hadoop.yarn.applications.distributedshell.TestDistributedShell
  hadoop.mapred.TestShuffleHandler
  hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
  hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers

Timed out junit tests:
  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
  org.apache.hadoop.mapred.TestMRIntermediateDataEncryption
  org.apache.hadoop.mapred.TestMROpportunisticMaps

compile: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/artifact/out/patch-compile-root.txt [308K]
cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/artifact/out/patch-compile-root.txt [308K]
javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/artifact/out/patch-compile-root.txt [308K]
unit:
Re: Upgrading Hadoop dependencies and catching potential incompatibilities for HBase
On Wed, Sep 28, 2016 at 1:55 PM, Enis Söztutar wrote:
> Can Hadoop please shade ALL of the dependencies (including PB) in Hadoop-3
> so that we do not have this mess going forward.
>
> Enis

I'm working on this under HADOOP-11804, using HBase as my test application.

--
busbey
Re: Upgrading Hadoop dependencies and catching potential incompatibilities for HBase
Can Hadoop please shade ALL of the dependencies (including PB) in Hadoop-3 so that we do not have this mess going forward.

Enis

On Wed, Sep 28, 2016 at 11:19 AM, Jonathan Hsieh wrote:
> You can build hbase using the profile by adding this -D setting.
>
> mvn clean test install -Dhadoop.profile=3.0
>
> At the moment hbase's hadoop 3 profile fails at the mvn compile,
> install, and test phases. There are some patch-available issues
> addressing these in HBase currently.
>
> compile: https://issues.apache.org/jira/browse/HBASE-16711
> test: https://issues.apache.org/jira/browse/HBASE-6581
> install: https://issues.apache.org/jira/browse/HBASE-16712
>
> IIRC, hbase's hadoop3 profile currently tracks the hadoop-3.0
> SNAPSHOT artifacts. Sean, would these dependency changes be targeted for
> this branch and land in the next hadoop 3 alpha/beta?
> Jon.
>
> On Wed, Sep 28, 2016 at 9:56 AM, Sean Mackrory wrote:
> > HBase folks,
> >
> > I'd like to work on upgrading some of the dependencies in Hadoop that are
> > starting to lag behind:
> >
> > jackson2 (https://issues.apache.org/jira/browse/HADOOP-12705)
> > jaxb-api (https://issues.apache.org/jira/browse/HADOOP-13659)
> > commons-configuration (https://issues.apache.org/jira/browse/HADOOP-13660)
> > htrace (https://issues.apache.org/jira/browse/HADOOP-13661)
> >
> > Some of these (chiefly jackson) are known to be high-risk upgrades and have
> > caused problems for HBase in the past. I wanted to give you a heads up
> > about this work and coordinate with you to make sure I do all the testing
> > you think is necessary to discover any potential problems well in advance
> > of the breakages.
> >
> > I was going to look at just running HBase's unit tests when built against
> > Hadoop's trunk with my changes (Should I expect any problems here? Is
> > there a branch already focused on Hadoop 3 compatibility or anything?)
> > and Bigtop's integration tests too. Anything else you would add to this?
>
> --
> // Jonathan Hsieh (shay)
> // HBase Tech Lead, Software Engineer, Cloudera
> // j...@cloudera.com // @jmhsieh
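For readers unfamiliar with what "shading" means in this thread: the build copies a dependency's classes into the Hadoop artifact and rewrites their package names, so downstream projects like HBase can depend on a different version of the same library without conflict. An illustrative maven-shade-plugin fragment showing the relocation mechanism — the `shadedPattern` prefix here is hypothetical, and the actual approach taken under HADOOP-11804 (separate shaded client artifacts) differs in its details:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Rewrites com.google.protobuf.* class references inside the
               shaded jar so they cannot clash with HBase's own protobuf. -->
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The tradeoff is larger artifacts and harder debugging of shaded stack traces, which is part of why the thread discusses it as a deliberate project decision rather than a quick fix.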
Re: Upgrading Hadoop dependencies and catching potential incompatibilities for HBase
> On Sep 28, 2016, at 9:56 AM, Sean Mackrory wrote:
> and Bigtop's integration tests too. Anything else you would add to this?

Be aware that Bigtop does things to the jar layout that will break parts of hadoop 3.
Upgrading Hadoop dependencies and catching potential incompatibilities for HBase
HBase folks,

I'd like to work on upgrading some of the dependencies in Hadoop that are starting to lag behind:

jackson2 (https://issues.apache.org/jira/browse/HADOOP-12705)
jaxb-api (https://issues.apache.org/jira/browse/HADOOP-13659)
commons-configuration (https://issues.apache.org/jira/browse/HADOOP-13660)
htrace (https://issues.apache.org/jira/browse/HADOOP-13661)

Some of these (chiefly jackson) are known to be high-risk upgrades and have caused problems for HBase in the past. I wanted to give you a heads up about this work and coordinate with you to make sure I do all the testing you think is necessary to discover any potential problems well in advance of the breakages.

I was going to look at just running HBase's unit tests when built against Hadoop's trunk with my changes (Should I expect any problems here? Is there a branch already focused on Hadoop 3 compatibility or anything?) and Bigtop's integration tests too. Anything else you would add to this?
[jira] [Resolved] (HADOOP-13657) IPC Reader thread could silently die and leave NameNode unresponsive
[ https://issues.apache.org/jira/browse/HADOOP-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee resolved HADOOP-13657.
Resolution: Duplicate

> IPC Reader thread could silently die and leave NameNode unresponsive
>
> Key: HADOOP-13657
> URL: https://issues.apache.org/jira/browse/HADOOP-13657
> Project: Hadoop Common
> Issue Type: Bug
> Components: ipc
> Reporter: Zhe Zhang
> Priority: Critical
>
> For each listening port, IPC {{Server#Listener#Reader}} is a single thread in
> charge of moving {{Connection}} items from {{pendingConnections}} (capacity
> 100) to the {{callQueue}}.
> We have experienced an incident where the {{Reader}} thread for an HDFS NameNode
> died from a runtime exception. Then the {{pendingConnections}} queue became
> full and the NameNode port became inaccessible.
> In our particular case, what killed {{Reader}} was an NPE caused by
> https://bugs.openjdk.java.net/browse/JDK-8024883. But in general, other types
> of runtime exceptions could cause this issue as well.
> We should add logic to either make the {{Reader}} more robust in case of
> runtime exceptions, or at least treat it as a FATAL exception so that the
> NameNode can fail over to standby, and admins get alerted of the real issue.
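For context on the failure mode described above: any runtime exception that escapes a thread's `run()` method kills that thread silently, so a queue-draining worker that dies leaves its bounded queue to fill up and block producers. A generic, self-contained sketch of the first mitigation the report suggests (catch and keep running) — the class below is illustrative and is not Hadoop's actual `Server.Listener.Reader` code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Illustrative sketch of a queue-draining worker that survives unexpected
 * runtime exceptions instead of dying silently and letting its bounded
 * queue fill up.
 */
public class RobustReader implements Runnable {
    // Small bounded queue, mirroring the capacity-100 pendingConnections
    // queue described in the issue.
    private final BlockingQueue<Runnable> pending = new LinkedBlockingQueue<>(100);
    private volatile boolean running = true;

    public boolean offer(Runnable task) { return pending.offer(task); }
    public void stop() { running = false; }

    @Override
    public void run() {
        while (running) {
            try {
                Runnable task = pending.take();
                task.run();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return;  // normal shutdown path
            } catch (Throwable t) {
                // Option 1: log and keep draining so the queue never fills
                // up and blocks new work.
                System.err.println("Reader caught unexpected error: " + t);
                // Option 2 (alternative, per the report): treat this as
                // FATAL and terminate so a standby can take over.
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RobustReader reader = new RobustReader();
        Thread t = new Thread(reader, "reader");
        t.start();
        reader.offer(() -> { throw new NullPointerException("simulated"); });
        reader.offer(() -> System.out.println("still draining after the NPE"));
        Thread.sleep(200);
        reader.stop();
        t.interrupt();
    }
}
```

The key detail is catching `Throwable` (not just `Exception`) inside the loop; an uncaught `Error` or `RuntimeException` at any other point still kills the thread, which is exactly the trap the issue describes.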
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/

[Sep 27, 2016 11:07:59 AM] (naganarasimha_gr) YARN-5660. Wrong audit constants are used in Get/Put of priority in
[Sep 27, 2016 2:03:10 PM] (brahma) HDFS-10889. Remove outdated Fault Injection Framework documentaion.
[Sep 27, 2016 4:29:24 PM] (iwasakims) HDFS-10426. TestPendingInvalidateBlock failed in trunk. Contributed by
[Sep 27, 2016 5:02:15 PM] (arp) HDFS-10828. Fix usage of FsDatasetImpl object lock in ReplicaMap. (Arpit
[Sep 27, 2016 6:26:45 PM] (wangda) HADOOP-13544. JDiff reports unncessarily show unannotated APIs and cause
[Sep 27, 2016 6:54:55 PM] (wangda) YARN-3142. Improve locks in AppSchedulingInfo. (Varun Saxena via wangda)
[Sep 27, 2016 9:55:28 PM] (yzhang) HDFS-10376. Enhance setOwner testing. (John Zhuge via Yongjun Zhang)
[Sep 28, 2016 12:36:53 AM] (liuml07) HADOOP-13658. Replace config key literal strings with names I: hadoop
[Sep 29, 2016 1:18:27 AM] (kai.zheng) Revert "HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back" This
[Sep 28, 2016 2:28:41 AM] (aengineer) HDFS-10900. DiskBalancer: Complete the documents for the report command.
[Sep 28, 2016 3:40:17 AM] (liuml07) HDFS-10915. Fix time measurement bug in TestDatanodeRestart. Contributed
[Sep 28, 2016 4:35:06 AM] (aengineer) HDFS-9850. DiskBalancer: Explore removing references to FsVolumeSpi.

-1 overall

The following subsystems voted -1: asflicense unit
The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running (runtime bigger than 1h 0m 0s): unit

Specific tests:

Failed junit tests:
  hadoop.hdfs.TestDFSShell
  hadoop.hdfs.TestRenameWhileOpen
  hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
  hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
  hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
  hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
  hadoop.yarn.server.TestContainerManagerSecurity

cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-compile-cc-root.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-compile-javac-root.txt [168K]
checkstyle: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-checkstyle-root.txt [16M]
pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-patch-pylint.txt [16K]
shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-patch-shellcheck.txt [20K]
shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-patch-shelldocs.txt [16K]
whitespace: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/whitespace-eol.txt [11M]
  https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/whitespace-tabs.txt [1.3M]
javadoc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]
unit: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [148K]
  https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
  https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [56K]
  https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
  https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [124K]
asflicense: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT
http://yetus.apache.org
[jira] [Resolved] (HADOOP-13567) S3AFileSystem to override getStorageStatistics() and so serve up its statistics
[ https://issues.apache.org/jira/browse/HADOOP-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-13567.
Resolution: Duplicate
Fix Version/s: 2.9.0

This was fixed during HADOOP-13560 while trying to use the storage stats in tests; resolving as duplicate.

> S3AFileSystem to override getStorageStatistics() and so serve up its statistics
>
> Key: HADOOP-13567
> URL: https://issues.apache.org/jira/browse/HADOOP-13567
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Fix For: 2.9.0
>
> Although S3AFileSystem collects lots of statistics, these aren't available
> programmatically as {{getStorageStatistics()}} isn't overridden.
> It must be overridden and serve up the local FS stats.
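For readers outside the S3A code: the fix pattern here is simply that a `FileSystem` subclass which collects its own counters must override the base class's `getStorageStatistics()` so callers can see them. A self-contained model of that pattern — the stand-in types below are simplified sketches, not the real `org.apache.hadoop.fs.FileSystem` and `StorageStatistics` classes:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Hadoop's abstract StorageStatistics type.
abstract class StorageStatistics {
    public abstract String getScheme();
    public abstract Long getLong(String key);
}

// Simplified stand-in for the FileSystem base class: by default it
// exposes no per-filesystem statistics.
class BaseFileSystem {
    public StorageStatistics getStorageStatistics() {
        return null;
    }
}

// Per-scheme counters, modeled on what an s3a filesystem collects.
class S3AStorageStatistics extends StorageStatistics {
    private final Map<String, Long> counters = new HashMap<>();
    void increment(String key) { counters.merge(key, 1L, Long::sum); }
    @Override public String getScheme() { return "s3a"; }
    @Override public Long getLong(String key) { return counters.getOrDefault(key, 0L); }
}

class S3AFileSystemSketch extends BaseFileSystem {
    private final S3AStorageStatistics stats = new S3AStorageStatistics();

    void open(String path) {
        stats.increment("stream_opened");  // instrument an operation
    }

    // The essence of the fix: override so the collected statistics
    // become visible to callers of the base-class API.
    @Override
    public StorageStatistics getStorageStatistics() {
        return stats;
    }

    public static void main(String[] args) {
        S3AFileSystemSketch fs = new S3AFileSystemSketch();
        fs.open("s3a://bucket/a");
        fs.open("s3a://bucket/b");
        System.out.println("stream_opened = "
            + fs.getStorageStatistics().getLong("stream_opened"));
        // prints: stream_opened = 2
    }
}
```

Without the override, the counters exist but callers going through the base-class reference never see them, which is exactly the gap the issue describes.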
RE: [VOTE] HADOOP-12756 - Aliyun OSS Support branch merge
That will be greatly helpful for developers in China whose applications are based on Aliyun OSS.

+1 (non-binding)

Hao

-----Original Message-----
From: Zheng, Kai [mailto:kai.zh...@intel.com]
Sent: Wednesday, September 28, 2016 10:35 AM
To: common-dev@hadoop.apache.org
Subject: [VOTE] HADOOP-12756 - Aliyun OSS Support branch merge

Hi all,

I would like to propose a merge vote for the HADOOP-12756 branch to trunk. This branch develops support for Aliyun OSS (another cloud storage) in Hadoop. The voting starts now and will run for 7 days, till Oct 5, 2016 07:00 PM PDT.

Aliyun OSS is widely used among China's cloud users, and currently it is not easy to access data in Aliyun OSS from Hadoop. The branch develops a new module, hadoop-aliyun, which provides support for accessing data in Aliyun OSS cloud storage. This will enable more use cases and bring a better user experience for Hadoop users. Like the existing s3a support, AliyunOSSFileSystem, a new implementation of FileSystem backed by Aliyun OSS, is provided. During the implementation, the contributors referred to the s3a support, keeping the same coding style and project structure.

- The updated architecture document is here: https://issues.apache.org/jira/secure/attachment/12829541/Aliyun-OSS-integration-v2.pdf
- The merge patch, a diff against trunk, is posted here; it builds cleanly, with manual testing results posted in HADOOP-13584: https://issues.apache.org/jira/secure/attachment/12829738/HADOOP-13584.004.patch
- The user documentation is also provided as part of the module.

HADOOP-12756 has a set of sub-tasks, and they are ordered in the same sequence as they were committed to HADOOP-12756. Hopefully this will make reviewing easier.

What I want to emphasize is: this is a fundamental implementation aiming at guaranteeing functionality and stability. The major functionality has been running in production environments for some while. There are definitely performance optimizations we can do, as the community has done for the existing s3a and azure supports. Merging this to trunk would serve as a very good beginning for the following optimizations, aligning with the related efforts.

The new hadoop-aliyun module is made possible owing to many people. Thanks to the contributors Mingfei Shi, Genmao Yu and Ling Zhou; thanks to Cheng Hao, Steve Loughran, Chris Nauroth, Yi Liu, Lei (Eddy) Xu, Uma Maheswara Rao G and Allen Wittenauer for their kind reviewing and guidance. Also thanks to Arpit Agarwal, Andrew Wang and Anu Engineer for the great process discussions to bring this up.

Please kindly vote. Thanks in advance!

Regards,
Kai
RE: desc error on official site http://hadoop.apache.org/
Thank you for the catch. This should go to the common-dev mailing list. Would you file an issue to fix this?

Regards,
Kai

-----Original Message-----
From: 444...@qq.com [mailto:444...@qq.com]
Sent: Wednesday, September 28, 2016 9:10 AM
To: general
Subject: desc error on official site http://hadoop.apache.org/

It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
===>
It is designed to scale up from single server to thousands of machines, each offering local computation and storage.