Re: [VOTE] Release Apache Hadoop 2.7.3 RC0
Both sound like real problems to me, and I think it's appropriate to file
JIRAs to track them.

Jason

From: Andrew Wang
To: Karthik Kambatla
Cc: larry mccay; Vinod Kumar Vavilapalli; "common-...@hadoop.apache.org";
    "hdfs-dev@hadoop.apache.org"; "yarn-...@hadoop.apache.org";
    "mapreduce-...@hadoop.apache.org"
Sent: Thursday, August 4, 2016 5:56 PM
Subject: Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

Could a YARN person please comment on these two issues, one of which Vinay
also hit? If someone already triaged or filed JIRAs, I missed it.

On Mon, Jul 25, 2016 at 11:52 AM, Andrew Wang wrote:
> I'll also add that, as a YARN newbie, I did hit two usability issues.
> These are very unlikely to be regressions, and I can file JIRAs if they
> seem fixable.
>
> * I didn't have SSH to localhost set up (new laptop), and when I tried to
>   run the Pi job, it would exit my window manager session. I feel there
>   must be a more developer-friendly solution here.
> * If you start the NodeManager and not the RM, the NM has a handler for
>   SIGTERM and SIGINT that blocked my Ctrl-C and kill attempts during
>   startup. I had to kill -9 it.
>
> On Mon, Jul 25, 2016 at 11:44 AM, Andrew Wang wrote:
>
>> I got asked this off-list, so as a reminder, only PMC votes are binding
>> on releases. Everyone is encouraged to vote on releases though!
>>
>> +1 (binding)
>>
>> * Downloaded source, built
>> * Started up HDFS and YARN
>> * Ran Pi job, which as usual returned 4, and a little teragen
>>
>> On Mon, Jul 25, 2016 at 11:08 AM, Karthik Kambatla wrote:
>>
>>> +1 (binding)
>>>
>>> * Downloaded and built from source
>>> * Checked LICENSE and NOTICE
>>> * Pseudo-distributed cluster with FairScheduler
>>> * Ran MR and HDFS tests
>>> * Verified basic UI
>>>
>>> On Sun, Jul 24, 2016 at 1:07 PM, larry mccay wrote:
>>>
>>> > +1 (binding)
>>> >
>>> > * downloaded and built from source
>>> > * checked LICENSE and NOTICE files
>>> > * verified signatures
>>> > * ran standalone tests
>>> > * installed pseudo-distributed instance on my mac
>>> > * ran through HDFS and mapreduce tests
>>> > * tested credential command
>>> > * tested webhdfs access through Apache Knox
>>> >
>>> > On Fri, Jul 22, 2016 at 10:15 PM, Vinod Kumar Vavilapalli
>>> > <vino...@apache.org> wrote:
>>> >
>>> > > Hi all,
>>> > >
>>> > > I've created a release candidate RC0 for Apache Hadoop 2.7.3.
>>> > >
>>> > > As discussed before, this is the next maintenance release to
>>> > > follow up 2.7.2.
>>> > >
>>> > > The RC is available for validation at:
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/
>>> > >
>>> > > The RC tag in git is: release-2.7.3-RC0
>>> > >
>>> > > The Maven artifacts are available via repository.apache.org at
>>> > > https://repository.apache.org/content/repositories/orgapachehadoop-1040/
>>> > >
>>> > > The release notes are inside the tarballs at
>>> > > hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html.
>>> > > I hosted this at
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/releasenotes.html
>>> > > for your quick perusal.
>>> > >
>>> > > As you may have noted, a very long fix cycle for the License & Notice
>>> > > issues (HADOOP-12893) caused 2.7.3 (along with every other Hadoop
>>> > > release) to slip by quite a bit. This release's related discussion
>>> > > thread is linked below: [1].
>>> > >
>>> > > Please try the release and vote; the vote will run for the usual
>>> > > 5 days.
>>> > >
>>> > > Thanks,
>>> > > Vinod
>>> > >
>>> > > [1]: 2.7.3 release plan:
>>> > > https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/

[Aug 4, 2016 7:57:34 AM] (xiao) HADOOP-13443. KMS should check the type of underlying keyprovider of
[Aug 4, 2016 8:38:34 AM] (vvasudev) YARN-5459. Add support for docker rm. Contributed by Shane Kumpf.
[Aug 4, 2016 11:13:11 AM] (aajisaka) MAPREDUCE-6730. Use StandardCharsets instead of String overload in
[Aug 4, 2016 2:07:34 PM] (kihwal) HDFS-10662. Optimize UTF8 string/byte conversions. Contributed by Daryn
[Aug 4, 2016 3:45:55 PM] (kihwal) HADOOP-13442. Optimize UGI group lookups. Contributed by Daryn Sharp.
[Aug 4, 2016 4:45:40 PM] (szetszwo) In Balancer, the target task should be removed when its size < 0.
[Aug 4, 2016 4:53:44 PM] (kihwal) HDFS-10722. Fix race condition in
[Aug 4, 2016 5:07:53 PM] (arp) HADOOP-13467. Shell#getSignalKillCommand should use the bash builtin on
[Aug 4, 2016 7:25:39 PM] (arp) HADOOP-13466. Add an AutoCloseableLock class. (Chen Liang)
[Aug 4, 2016 7:55:21 PM] (kihwal) HDFS-10343. BlockManager#createLocatedBlocks may return blocks on failed
[Aug 4, 2016 8:22:48 PM] (kai.zheng) HDFS-10718. Prefer direct ByteBuffer in native RS encoder and decoder.
[Aug 4, 2016 9:14:13 PM] (kihwal) HDFS-10673. Optimize FSPermissionChecker's internal path usage.
[Aug 5, 2016 2:40:33 AM] (weichiu) HDFS-10588. False alarm in datanode log - ERROR - Disk Balancer is not

-1 overall

The following subsystems voted -1:
    asflicense mvnsite unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
        hadoop.tracing.TestTracing
        hadoop.security.TestRefreshUserMappings
        hadoop.yarn.logaggregation.TestAggregatedLogFormat
        hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.api.impl.TestYarnClient
        hadoop.mapreduce.v2.hs.server.TestHSAdminServer
        hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-compile-javac-root.txt [172K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-checkstyle-root.txt [16M]
    mvnsite:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-mvnsite-root.txt [112K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-patch-pylint.txt [16K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/whitespace-eol.txt [12M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/whitespace-tabs.txt [1.3M]
    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-javadoc-javadoc-root.txt [2.3M]
    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [316K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [24K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [36K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [16K]
[jira] [Resolved] (HDFS-10715) NPE when applying AvailableSpaceBlockPlacementPolicy
[ https://issues.apache.org/jira/browse/HDFS-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved HDFS-10715.
----------------------------------
    Resolution: Fixed
    Fix Version/s: 2.8.0

Committed this to trunk, branch-2, and branch-2.8. Thanks [~zhuguangbin86] for the contribution!

> NPE when applying AvailableSpaceBlockPlacementPolicy
> ----------------------------------------------------
>
>                 Key: HDFS-10715
>                 URL: https://issues.apache.org/jira/browse/HDFS-10715
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.8.0
>         Environment: cdh5.8.0
>            Reporter: Guangbin Zhu
>            Assignee: Guangbin Zhu
>             Fix For: 2.8.0
>
>         Attachments: HDFS-10715.001.patch, HDFS-10715.002.patch, HDFS-10715.003.patch, HDFS-10715.004.patch
>
> HDFS-8131 introduced AvailableSpaceBlockPlacementPolicy, but in some cases it causes an NPE.
> Here are my namenode daemon logs:
> 2016-08-02 13:05:03,271 WARN org.apache.hadoop.ipc.Server: IPC Server handler 13 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.132.89.79:14001 Call#56 Retry#0
> java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy.compareDataNode(AvailableSpaceBlockPlacementPolicy.java:95)
>         at org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy.chooseDataNode(AvailableSpaceBlockPlacementPolicy.java:80)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:665)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:572)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:457)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:367)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:242)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:114)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:130)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1606)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3315)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:679)
>         at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:489)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> I reviewed the source code and found the bug in method chooseDataNode: clusterMap.chooseRandom may return null, which cannot be compared using the a.equals(b) method.
> Though this exception can be caught and the call retried, I think this bug should be fixed.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
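[Editor's note: the null guard described in the report can be sketched as below. This is an illustrative reconstruction, not the actual HDFS-10715 patch; the `compareDataNode` name comes from the stack trace, but the `String` stand-ins for `DatanodeDescriptor` and the "free space" arguments are simplified assumptions.]

```java
import java.util.Objects;

// Sketch of the fix described above: guard the comparison so that a null
// result from chooseRandom no longer triggers an NPE. Strings stand in
// for DatanodeDescriptor; the long arguments stand in for free space.
public class NullSafeCompareSketch {

    // Before the fix, code like a.equals(b) threw NullPointerException
    // when clusterMap.chooseRandom returned null. Objects.equals and an
    // explicit null check tolerate nulls.
    static int compareDataNode(String a, long freeSpaceA,
                               String b, long freeSpaceB) {
        if (a == null || b == null || Objects.equals(a, b)) {
            return 0; // null or identical nodes: no preference, no NPE
        }
        // Prefer the datanode with more available space.
        return Long.compare(freeSpaceB, freeSpaceA);
    }

    public static void main(String[] args) {
        System.out.println(compareDataNode(null, 0, "dn1", 100));   // no NPE
        System.out.println(compareDataNode("dn1", 50, "dn2", 100)); // dn2 wins
    }
}
```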
[jira] [Created] (HDFS-10727) Block move is failed: opReplaceBlock - Mover
Senthilkumar created HDFS-10727:
-----------------------------------
             Summary: Block move is failed: opReplaceBlock - Mover
                 Key: HDFS-10727
                 URL: https://issues.apache.org/jira/browse/HDFS-10727
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: balancer & mover
            Reporter: Senthilkumar
            Priority: Minor

java.io.IOException: block move is failed: opReplaceBlock BP-1267816582-10.114.118.11-1392844883031:blk_-1614331364683562986_359655649 received exception java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.server.mover.PendingBlockMove.receiveResponse(PendingBlockMove.java:279)
        at org.apache.hadoop.hdfs.server.mover.PendingBlockMove.dispatch(PendingBlockMove.java:207)
        at org.apache.hadoop.hdfs.server.mover.PendingBlockMove.access$000(PendingBlockMove.java:36)
        at org.apache.hadoop.hdfs.server.mover.PendingBlockMove$1.run(PendingBlockMove.java:297)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
[jira] [Created] (HDFS-10726) org.apache.hadoop.http.TestHttpServerLifecycle times out
Mingliang Liu created HDFS-10726:
------------------------------------
             Summary: org.apache.hadoop.http.TestHttpServerLifecycle times out
                 Key: HDFS-10726
                 URL: https://issues.apache.org/jira/browse/HDFS-10726
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.0.0-alpha2
            Reporter: Mingliang Liu

{{org.apache.hadoop.http.TestHttpServerLifecycle}} times out in a few recent builds, e.g.
# https://issues.apache.org/jira/browse/HADOOP-13470?focusedCommentId=15408969&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15408969
# https://issues.apache.org/jira/browse/HADOOP-13466?focusedCommentId=15408342&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15408342
# https://issues.apache.org/jira/browse/HADOOP-13444?focusedCommentId=15400162&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15400162

The failure cannot be reproduced locally, so it is intermittent.
[jira] [Created] (HDFS-10725) Caller context should always be constructed by a builder
Mingliang Liu created HDFS-10725:
------------------------------------
             Summary: Caller context should always be constructed by a builder
                 Key: HDFS-10725
                 URL: https://issues.apache.org/jira/browse/HDFS-10725
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ipc
    Affects Versions: 2.8.0
            Reporter: Mingliang Liu
            Assignee: Mingliang Liu
            Priority: Trivial

Currently {{CallerContext}} is constructed by a builder. In this pattern, the constructor should be private so that a caller context can only be constructed via the builder.
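[Editor's note: the builder-with-private-constructor pattern the issue asks for can be sketched as below. The class and field names are illustrative, not Hadoop's actual CallerContext API.]

```java
// Minimal sketch of a builder whose product has a private constructor:
// callers cannot write "new CallerContextSketch(...)" and must go
// through the Builder, which is what HDFS-10725 proposes.
public final class CallerContextSketch {
    private final String context;

    // Private constructor: only the nested Builder can call it.
    private CallerContextSketch(Builder builder) {
        this.context = builder.context;
    }

    public String getContext() {
        return context;
    }

    public static final class Builder {
        private final String context;

        public Builder(String context) {
            this.context = context;
        }

        public CallerContextSketch build() {
            return new CallerContextSketch(this);
        }
    }

    public static void main(String[] args) {
        CallerContextSketch ctx =
            new CallerContextSketch.Builder("mr_job_42").build();
        System.out.println(ctx.getContext());
    }
}
```

Making the constructor private keeps the builder as the single construction path, so invariants enforced in `build()` cannot be bypassed.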