[jira] [Created] (HDDS-155) Implement KeyValueContainer
Bharat Viswanadham created HDDS-155: --- Summary: Implement KeyValueContainer Key: HDDS-155 URL: https://issues.apache.org/jira/browse/HDDS-155 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham This Jira is to add the following: # Implement the Container interface. # Use the new directory layout proposed in the design document. a. Data location (chunks) b. Metadata location (DB and .container files). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/489/ [Jun 4, 2018 2:03:05 PM] (weichiu) HDFS-13155. BlockPlacementPolicyDefault.chooseTargetInOrder Not Checking [Jun 4, 2018 2:19:03 PM] (shahrs87) HDFS-13281 Namenode#createFile should be /.reserved/raw/ aware.. [Jun 4, 2018 3:55:01 PM] (xyao) HDDS-126. Fix findbugs warning in MetadataKeyFilters.java. Contributed [Jun 4, 2018 4:15:23 PM] (haibochen) YARN-8390. Fix API incompatible changes in FairScheduler's [Jun 4, 2018 4:28:09 PM] (inigoiri) YARN-8389. Improve the description of machine-list property in [Jun 4, 2018 4:51:04 PM] (xyao) HDDS-145. Freon times out because of because of wrong ratis port number [Jun 4, 2018 5:41:10 PM] (miklos.szegedi) YARN-8382. cgroup file leak in NM. Contributed by Hu Ziqian. [Jun 4, 2018 7:55:54 PM] (inigoiri) MAPREDUCE-7105. Fix TestNativeCollectorOnlyHandler.testOnCall on Windows [Jun 4, 2018 9:23:08 PM] (haibochen) YARN-8388. TestCGroupElasticMemoryController.testNormalExit() hangs on [Jun 4, 2018 10:32:03 PM] (rkanter) YARN-4677. RMNodeResourceUpdateEvent update from scheduler can lead to [Jun 4, 2018 11:06:28 PM] (eyang) YARN-8365. Set DNS query type according to client request. [Jun 4, 2018 11:37:07 PM] (Bharat) HADOOP-15137. ClassNotFoundException: [Jun 5, 2018 1:12:43 AM] (inigoiri) HDFS-13652. Randomize baseDir for MiniDFSCluster in TestBlockScanner. [Jun 5, 2018 1:21:38 AM] (inigoiri) HDFS-13649. Randomize baseDir for MiniDFSCluster in [Jun 5, 2018 1:28:11 AM] (inigoiri) HDFS-13650. Randomize baseDir for MiniDFSCluster in [Jun 5, 2018 4:13:47 AM] (xiao) HADOOP-15507. Add MapReduce counters about EC bytes read. [Jun 5, 2018 8:53:24 AM] (aajisaka) HDFS-13545. 
"guarded" is misspelled as "gaurded" in

-1 overall The following subsystems voted -1: compile mvninstall pathlen unit The following subsystems voted -1 but were configured to be filtered/ignored: cc javac The following subsystems are considered long running: (runtime bigger than 1h 00m 00s) unit Specific tests: Failed junit tests : hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec hadoop.fs.contract.rawlocal.TestRawlocalContractAppend hadoop.fs.TestFsShellCopy hadoop.fs.TestFsShellList hadoop.fs.TestLocalFileSystem hadoop.http.TestHttpServer hadoop.http.TestHttpServerLogs hadoop.io.compress.TestCodec hadoop.io.nativeio.TestNativeIO hadoop.ipc.TestIPC hadoop.ipc.TestSocketFactory hadoop.metrics2.impl.TestStatsDMetrics hadoop.security.TestSecurityUtil hadoop.security.TestShellBasedUnixGroupsMapping hadoop.security.token.TestDtUtilShell hadoop.util.TestDiskCheckerWithDiskIo hadoop.util.TestNativeCodeLoader hadoop.util.TestNodeHealthScriptRunner hadoop.util.TestWinUtils hadoop.hdfs.qjournal.server.TestJournalNode hadoop.hdfs.qjournal.server.TestJournalNodeSync hadoop.hdfs.server.balancer.TestBalancerRPCDelay hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage hadoop.hdfs.server.datanode.TestBlockScanner hadoop.hdfs.server.datanode.TestDataNodeFaultInjector hadoop.hdfs.server.datanode.TestDataNodeUUID hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure hadoop.hdfs.server.datanode.TestDirectoryScanner hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC hadoop.hdfs.server.mover.TestMover hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA hadoop.hdfs.server.namenode.ha.TestHAAppend hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
hadoop.hdfs.server.namenode.TestEditLogRace hadoop.hdfs.server.namenode.TestStartup hadoop.hdfs.TestDatanodeReport hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs hadoop.hdfs.TestDFSShell hadoop.hdfs.TestDFSStripedInputStream hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy hadoop.hdfs.TestDFSStripedOutputStreamWithFailure hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy hadoop.hdfs.TestDFSUpgradeFromImage hadoop.hdfs.TestFetchImage hadoop.hdfs.TestHDFSFileSystemContract hadoop.hdfs.TestPread hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData hadoop.hdfs.TestReconstructStripedFile hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy hadoop.hdfs.TestSecureEncryptionZoneWithKMS hadoop.hdfs.TestTrashWithSecureEncryptionZones hadoop.hdfs.tools.TestDFSAdmin
RE: [VOTE] Release Apache Hadoop 3.0.3 (RC0)
Thanks for driving this release, Yongjun! - Verified checksums - Succeeded native package build on CentOS 7 - Started a cluster with 1 master and 5 slaves - Verified Web UI (NN, RM, JobHistory, Timeline) - Verified Teragen/Terasort jobs - Verified some operations for erasure coding Regards, Takanobu > -Original Message- > From: Gabor Bota [mailto:gabor.b...@cloudera.com] > Sent: Tuesday, June 05, 2018 9:18 PM > To: nvadiv...@hortonworks.com > Cc: Yongjun Zhang ; sbaner...@hortonworks.com; > Hadoop Common ; Hdfs-dev > ; mapreduce-...@hadoop.apache.org; > yarn-...@hadoop.apache.org > Subject: Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0) > > Thanks for the work Yongjun! > > +1 (non-binding) > >- checked out git tag release-3.0.3-RC0. Thanks for adding this Yongjun, >it worked. >- S3A integration (mvn verify) test run were successful on eu-west-1 >besides of one test issue reported in HADOOP-14927. >- built from source on Mac OS X 10.13.4, java version 8.0.171-oracle >- deployed on a 3 node cluster (HDFS HA, Non-HA YARN) >- verified pi job (yarn), teragen, terasort and teravalidate > > Regards, > Gabor Bota > > On Mon, Jun 4, 2018 at 7:38 PM Nandakumar Vadivelu < > nvadiv...@hortonworks.com> wrote: > > > Thanks Yongjun for all the hard work on this release. 
> > > > - Verified signatures and checksums > > - Tested both the binary and also built from source on a pseudo > > distributed cluster > > - Verified filesystem shell commands > > - Verified admin commands > > - Tested snapshot feature > > - Sanity check of NN and DN UI > > - Verified site documentation > > > > The following links in site documentation are broken > > - Changelog and Release Notes > > - Unix Shell API > > - core-default.xml > > - hdfs-default.xml > > - hdfs-rbf-default.xml > > - mapred-default.xml > > - yarn-default.xml > > > > Site documentation was generated using the below steps > > - mvn site:site > > - mkdir -p /tmp/site && mvn site:stage -DstagingDirectory=/tmp/site > > - Browse to file:///tmp/site/hadoop-project/index.html. > > > > Thanks, > > Nanda > > > > On 6/3/18, 8:55 AM, "Yongjun Zhang" wrote: > > > > Hi Gabor, > > > > I got the git tag in, it's release-3.0.3-RC0. Would you please > > give it a > > try? > > > > It should correspond to > > > > commit 37fd7d752db73d984dc31e0cdfd590d252f5e075 > > Author: Yongjun Zhang > > Date: Wed May 30 00:07:33 2018 -0700 > > > > Update version to 3.0.3 to prepare for 3.0.3 release > > > > > > Thanks, > > > > --Yongjun > > > > On Fri, Jun 1, 2018 at 4:17 AM, Gabor Bota > > > > wrote: > > > > > Hi Yongjun, > > > > > > Thank you for working on this release. Is there a git tag in the > > upstream > > > repo which can be checked out? I'd like to build the release > > from source. > > > > > > Regards, > > > Gabor > > > > > > On Fri, Jun 1, 2018 at 7:57 AM Shashikant Banerjee < > > > sbaner...@hortonworks.com> wrote: > > > > > >> Looks like the link with the filter seems to be private. I > > can't see the > > >> blocker list. > > >> https://issues.apache.org/jira/issues/?filter=12343997 > > >> > > >> Meanwhile , I will be working on testing the release. 
> > >> > > >> Thanks > > >> Shashi > > >> On 6/1/18, 11:18 AM, "Yongjun Zhang" wrote: > > >> > > >> Greetings all, > > >> > > >> I've created the first release candidate (RC0) for Apache > Hadoop > > >> 3.0.3. This is our next maintenance release to follow up 3.0.2. > > It > > >> includes > > >> about 249 > > >> important fixes and improvements, among which there are 8 > > blockers. > > >> See > > >> https://issues.apache.org/jira/issues/?filter=12343997 > > >> > > >> The RC artifacts are available at: > > >> https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/ > > >> > > >> The maven artifacts are available via > > >> https://repository.apache.org/content/repositories/ > > >> orgapachehadoop-1126 > > >> > > >> Please try the release and vote; the vote will run for the > > usual 5 > > >> working > > >> days, ending on 06/07/2018 PST time. Would really appreciate > > your > > >> participation here. > > >> > > >> I bumped into quite some issues along the way, many thanks > to > > quite a > > >> few > > >> people who helped, especially Sammi Chen, Andrew Wang, Junping > > Du, > > >> Eddy Xu. > > >> > > >> Thanks, > > >> > > >> --Yongjun > > >> > > >> > > >> > > > > > >
[jira] [Created] (HDDS-154) Ozone documentation updates
Arpit Agarwal created HDDS-154: -- Summary: Ozone documentation updates Key: HDDS-154 URL: https://issues.apache.org/jira/browse/HDDS-154 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Arpit Agarwal Follow-up updates to the Ozone documentation from HDDS-147: # Describe how to start the Ozone module separately without HDFS datanodes (see HDDS-94) # Update command docs to describe the different formats in which the service address can be specified on the command line.
[jira] [Created] (HDDS-153) Add HA-aware proxy for OM client
Xiaoyu Yao created HDDS-153: --- Summary: Add HA-aware proxy for OM client Key: HDDS-153 URL: https://issues.apache.org/jira/browse/HDDS-153 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Xiaoyu Yao This allows the client to talk to the OMs in the RATIS ring when a failover (leader change) happens.
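The sub-task above describes client-side failover across OMs in a RATIS ring. As a rough illustration only (the class, method, and exception handling below are hypothetical sketches, not the actual HDDS-153 code), an HA-aware proxy can cycle through the configured OM addresses and retry when the contacted node is no longer the leader:

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of an HA-aware client proxy. On a "not leader"
// failure it rotates to the next OM address and retries, trying each
// OM in the ring at most once.
public class FailoverProxy {
    private final List<String> omAddresses;  // all OMs in the RATIS ring
    private int current = 0;                 // index of the presumed leader

    public FailoverProxy(List<String> omAddresses) {
        this.omAddresses = omAddresses;
    }

    /** Invoke 'call' against the current OM, failing over on leader change. */
    public <T> T invoke(Function<String, T> call) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < omAddresses.size(); attempt++) {
            try {
                return call.apply(omAddresses.get(current));
            } catch (RuntimeException notLeader) {
                last = notLeader;
                current = (current + 1) % omAddresses.size(); // try next OM
            }
        }
        throw last;  // every OM refused: surface the last failure
    }

    public static void main(String[] args) {
        FailoverProxy proxy = new FailoverProxy(List.of("om1", "om2", "om3"));
        // Simulate: om1 stepped down, om2 is the new leader.
        String leader = proxy.invoke(addr ->
                addr.equals("om2") ? addr : failNotLeader(addr));
        System.out.println("served by " + leader);  // served by om2
    }

    private static String failNotLeader(String addr) {
        throw new RuntimeException(addr + " is not the leader");
    }
}
```

A real implementation would track the leader hint returned by the server and bound retries with backoff rather than making a single pass.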
[jira] [Created] (HDDS-152) Support HA for Ozone Manager
Xiaoyu Yao created HDDS-152: --- Summary: Support HA for Ozone Manager Key: HDDS-152 URL: https://issues.apache.org/jira/browse/HDDS-152 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Xiaoyu Yao Ozone Manager (OM) provides the name services on top of HDDS (SCM). This ticket is opened to add HA support for OM.
[jira] [Created] (HDDS-151) Add HA support for Ozone
Xiaoyu Yao created HDDS-151: --- Summary: Add HA support for Ozone Key: HDDS-151 URL: https://issues.apache.org/jira/browse/HDDS-151 Project: Hadoop Distributed Data Store Issue Type: New Feature Reporter: Xiaoyu Yao This includes HA for OM and SCM and their clients. For OM and SCM, our initial proposal is to use RATIS to ensure consistent/reliable replication of metadata. We will post a design doc and create a separate branch for the feature development. cc: [~anu], [~jnpandey], [~msingh]
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/ [Jun 4, 2018 5:33:34 AM] (xiao) HDFS-13339. Volume reference can't be released and may lead to deadlock [Jun 4, 2018 2:03:05 PM] (weichiu) HDFS-13155. BlockPlacementPolicyDefault.chooseTargetInOrder Not Checking [Jun 4, 2018 2:19:03 PM] (shahrs87) HDFS-13281 Namenode#createFile should be /.reserved/raw/ aware.. [Jun 4, 2018 3:55:01 PM] (xyao) HDDS-126. Fix findbugs warning in MetadataKeyFilters.java. Contributed [Jun 4, 2018 4:15:23 PM] (haibochen) YARN-8390. Fix API incompatible changes in FairScheduler's [Jun 4, 2018 4:28:09 PM] (inigoiri) YARN-8389. Improve the description of machine-list property in [Jun 4, 2018 4:51:04 PM] (xyao) HDDS-145. Freon times out because of because of wrong ratis port number [Jun 4, 2018 5:41:10 PM] (miklos.szegedi) YARN-8382. cgroup file leak in NM. Contributed by Hu Ziqian. [Jun 4, 2018 7:55:54 PM] (inigoiri) MAPREDUCE-7105. Fix TestNativeCollectorOnlyHandler.testOnCall on Windows [Jun 4, 2018 9:23:08 PM] (haibochen) YARN-8388. TestCGroupElasticMemoryController.testNormalExit() hangs on [Jun 4, 2018 10:32:03 PM] (rkanter) YARN-4677. RMNodeResourceUpdateEvent update from scheduler can lead to [Jun 4, 2018 11:06:28 PM] (eyang) YARN-8365. Set DNS query type according to client request. [Jun 4, 2018 11:37:07 PM] (Bharat) HADOOP-15137. ClassNotFoundException: [Jun 5, 2018 1:12:43 AM] (inigoiri) HDFS-13652. Randomize baseDir for MiniDFSCluster in TestBlockScanner. [Jun 5, 2018 1:21:38 AM] (inigoiri) HDFS-13649. Randomize baseDir for MiniDFSCluster in [Jun 5, 2018 1:28:11 AM] (inigoiri) HDFS-13650. 
Randomize baseDir for MiniDFSCluster in

-1 overall The following subsystems voted -1: asflicense findbugs pathlen unit xml The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener; locked 75% of time Unsynchronized access at AllocationFileLoaderService.java:75% of time Unsynchronized access at AllocationFileLoaderService.java:[line 117] Failed junit tests : hadoop.hdfs.client.impl.TestBlockReaderLocal hadoop.hdfs.web.TestWebHdfsTimeouts hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA hadoop.hdfs.qjournal.server.TestJournalNodeSync hadoop.yarn.server.nodemanager.containermanager.TestContainerManager hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/diff-compile-cc-root.txt [4.0K] javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/diff-compile-javac-root.txt [336K] checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/diff-checkstyle-root.txt [17M] pathlen: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/pathlen.txt [12K] pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/diff-patch-pylint.txt [24K] shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/diff-patch-shellcheck.txt [20K] shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/diff-patch-shelldocs.txt [16K] whitespace: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/whitespace-eol.txt [9.4M] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/whitespace-tabs.txt [1.1M] xml: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/802/artifact/out/xml.txt [4.0K] findbugs:
[jira] [Created] (HDFS-13657) INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading
Wang XL created HDFS-13657: -- Summary: INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading Key: HDFS-13657 URL: https://issues.apache.org/jira/browse/HDFS-13657 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Wang XL The comment of class INodeId is misleading. According to the comment, Ids 1 to 1000 are reserved for potential future use, but the code \{{public static final long LAST_RESERVED_ID = 2 << 14 - 1}} actually reserves Ids 1 to 16384. The reason is that the operator '-' has higher precedence than '<<', so \{{2 << 14 - 1}} is not equal to \{{(2 << 14) - 1}}.
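The precedence problem this report describes can be checked directly: in Java, '-' binds tighter than '<<', so the unparenthesized expression evaluates as 2 << (14 - 1):

```java
// Demonstrates the precedence issue from HDFS-13657:
// "2 << 14 - 1" parses as "2 << (14 - 1)", not "(2 << 14) - 1".
public class ShiftPrecedence {
    // Parsed as 2 << 13 = 16384, so Ids 1..16384 end up reserved.
    static final long AMBIGUOUS = 2 << 14 - 1;
    // Explicit parentheses give the other reading: 32768 - 1 = 32767.
    static final long PARENTHESIZED = (2 << 14) - 1;

    public static void main(String[] args) {
        System.out.println(AMBIGUOUS);      // 16384
        System.out.println(PARENTHESIZED);  // 32767
    }
}
```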
Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)
Thanks for the work Yongjun! +1 (non-binding) - checked out git tag release-3.0.3-RC0. Thanks for adding this Yongjun, it worked. - S3A integration (mvn verify) test runs were successful on eu-west-1, apart from one test issue reported in HADOOP-14927. - built from source on Mac OS X 10.13.4, java version 8.0.171-oracle - deployed on a 3 node cluster (HDFS HA, Non-HA YARN) - verified pi job (yarn), teragen, terasort and teravalidate Regards, Gabor Bota On Mon, Jun 4, 2018 at 7:38 PM Nandakumar Vadivelu < nvadiv...@hortonworks.com> wrote: > Thanks Yongjun for all the hard work on this release. > > - Verified signatures and checksums > - Tested both the binary and also built from source on a pseudo > distributed cluster > - Verified filesystem shell commands > - Verified admin commands > - Tested snapshot feature > - Sanity check of NN and DN UI > - Verified site documentation > > The following links in site documentation are broken > - Changelog and Release Notes > - Unix Shell API > - core-default.xml > - hdfs-default.xml > - hdfs-rbf-default.xml > - mapred-default.xml > - yarn-default.xml > > Site documentation was generated using the below steps > - mvn site:site > - mkdir -p /tmp/site && mvn site:stage -DstagingDirectory=/tmp/site > - Browse to file:///tmp/site/hadoop-project/index.html. > > Thanks, > Nanda > > On 6/3/18, 8:55 AM, "Yongjun Zhang" wrote: > > Hi Gabor, > > I got the git tag in, it's release-3.0.3-RC0. Would you please give it > a > try? > > It should correspond to > > commit 37fd7d752db73d984dc31e0cdfd590d252f5e075 > Author: Yongjun Zhang > Date: Wed May 30 00:07:33 2018 -0700 > > Update version to 3.0.3 to prepare for 3.0.3 release > > > Thanks, > > --Yongjun > > On Fri, Jun 1, 2018 at 4:17 AM, Gabor Bota > wrote: > > > Hi Yongjun, > > > > Thank you for working on this release. Is there a git tag in the > upstream > > repo which can be checked out? I'd like to build the release from > source.
> > > > Regards, > > Gabor > > > > On Fri, Jun 1, 2018 at 7:57 AM Shashikant Banerjee < > > sbaner...@hortonworks.com> wrote: > > > >> Looks like the link with the filter seems to be private. I can't > see the > >> blocker list. > >> https://issues.apache.org/jira/issues/?filter=12343997 > >> > >> Meanwhile , I will be working on testing the release. > >> > >> Thanks > >> Shashi > >> On 6/1/18, 11:18 AM, "Yongjun Zhang" wrote: > >> > >> Greetings all, > >> > >> I've created the first release candidate (RC0) for Apache Hadoop > >> 3.0.3. This is our next maintenance release to follow up 3.0.2. > It > >> includes > >> about 249 > >> important fixes and improvements, among which there are 8 > blockers. > >> See > >> https://issues.apache.org/jira/issues/?filter=12343997 > >> > >> The RC artifacts are available at: > >> https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/ > >> > >> The maven artifacts are available via > >> https://repository.apache.org/content/repositories/ > >> orgapachehadoop-1126 > >> > >> Please try the release and vote; the vote will run for the > usual 5 > >> working > >> days, ending on 06/07/2018 PST time. Would really appreciate > your > >> participation here. > >> > >> I bumped into quite some issues along the way, many thanks to > quite a > >> few > >> people who helped, especially Sammi Chen, Andrew Wang, Junping > Du, > >> Eddy Xu. > >> > >> Thanks, > >> > >> --Yongjun > >> > >> > >> > > >
[jira] [Resolved] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility
[ https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] DENG FEI resolved HDFS-9513. Resolution: Workaround > DataNodeManager#getDataNodeStorageInfos not backward compatibility > -- > > Key: HDFS-9513 > URL: https://issues.apache.org/jira/browse/HDFS-9513 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client, namenode >Affects Versions: 2.2.0, 2.7.1 > Environment: 2.2.0 HDFS Client &2.7.1 HDFS Cluster >Reporter: DENG FEI >Assignee: DENG FEI >Priority: Blocker > Attachments: HDFS-9513-20160621.patch, patch.HDFS-9513.20151207, > patch.HDFS-9513.20151216-2.7.2 > > > We upgraded our new HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS cluster). The compatibility issue occurs when the DataStreamer does pipeline recovery: the NN needs the DNs' storageInfo to update the pipeline, and the storageIds are paired with the pipeline's DNs. However, HDFS has supported the storage type feature only since 2.3.0 ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]), so older clients do not send a storageId. Although the protobuf serialization keeps the protocol wire-compatible, the client will receive a remote exception wrapping an ArrayIndexOutOfBoundsException.
> > the exception stack is below: > {noformat} > 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: > Failed to close file XXX > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): > 0 > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) > at org.apache.hadoop.ipc.Client.call(Client.java:1347) > at org.apache.hadoop.ipc.Client.call(Client.java:1300) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801) > at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475) > {noformat}
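The failure mode described above can be sketched in a simplified form. The code below is a hypothetical illustration, not the actual DatanodeManager code: the unguarded lookup indexes the storageIDs array that an old (pre-HDFS-2832) client leaves empty, while a guarded variant tolerates the missing entries.

```java
import java.util.Arrays;

// Simplified illustration of the HDFS-9513 failure mode: the server
// assumes one storageID per datanode, but an old client sends none.
public class StorageInfoLookup {
    // Unguarded: throws ArrayIndexOutOfBoundsException when storageIDs
    // is shorter than the datanode list.
    static String[] getStorageInfosUnguarded(String[] datanodes, String[] storageIDs) {
        String[] infos = new String[datanodes.length];
        for (int i = 0; i < datanodes.length; i++) {
            infos[i] = datanodes[i] + "/" + storageIDs[i]; // AIOOBE if empty
        }
        return infos;
    }

    // Guarded: falls back to a default storage id for legacy clients.
    static String[] getStorageInfosGuarded(String[] datanodes, String[] storageIDs) {
        String[] infos = new String[datanodes.length];
        for (int i = 0; i < datanodes.length; i++) {
            String sid = i < storageIDs.length ? storageIDs[i] : "DEFAULT";
            infos[i] = datanodes[i] + "/" + sid;
        }
        return infos;
    }

    public static void main(String[] args) {
        String[] dns = {"dn1", "dn2"};
        String[] none = {};  // what a pre-HDFS-2832 client effectively sends
        System.out.println(Arrays.toString(getStorageInfosGuarded(dns, none)));
        // [dn1/DEFAULT, dn2/DEFAULT]
        try {
            getStorageInfosUnguarded(dns, none);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("unguarded lookup failed: " + e.getMessage());
        }
    }
}
```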
[jira] [Resolved] (HDFS-13370) Ozone: ContainerID package name does not correspond to the file path
[ https://issues.apache.org/jira/browse/HDFS-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] DENG FEI resolved HDFS-13370. - Resolution: Invalid > Ozone: ContainerID package name does not correspond to the file path > - > > Key: HDFS-13370 > URL: https://issues.apache.org/jira/browse/HDFS-13370 > Project: Hadoop HDFS > Issue Type: Task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: DENG FEI >Priority: Minor > Attachments: HDFS-13370-HDFS-7240-000.patch > >
[jira] [Created] (HDDS-150) add LocatedFileStatus for OzoneFileSystem, to return datanode information
Mukul Kumar Singh created HDDS-150: -- Summary: add LocatedFileStatus for OzoneFileSystem, to return datanode information Key: HDDS-150 URL: https://issues.apache.org/jira/browse/HDDS-150 Project: Hadoop Distributed Data Store Issue Type: Task Components: Ozone Client Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh To optimize reads for OzoneFileSystem, LocatedFileStatus should be returned by the filesystem to applications and clients. This information can then be used to spawn new clients nearest to a particular Ozone datanode.
[jira] [Created] (HDDS-149) OzoneFileSystem should return data locality and optimize reads for data locality
Mukul Kumar Singh created HDDS-149: -- Summary: OzoneFileSystem should return data locality and optimize reads for data locality Key: HDDS-149 URL: https://issues.apache.org/jira/browse/HDDS-149 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh In order to optimize the reads and writes for OzoneFileSystem, the filesystem will need to propagate the data locality information to the client and to the applications. This jira will track all the tasks required for this optimization.
[jira] [Created] (HDFS-13656) Logging more info when client completes file error
Yiqun Lin created HDFS-13656: Summary: Logging more info when client completes file error Key: HDFS-13656 URL: https://issues.apache.org/jira/browse/HDFS-13656 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 3.0.2 Reporter: Yiqun Lin Attachments: error-hdfs.png We found this error when the DFS client completes a file. !error-hdfs.png! Currently the error log is too terse and does not provide enough info for debugging, and this error causes the write operation to fail. It would be good to print the retry count and file path info.
[jira] [Created] (HDDS-148) Remove ContainerReportManager and ContainerReportManagerImpl
Nanda kumar created HDDS-148: Summary: Remove ContainerReportManager and ContainerReportManagerImpl Key: HDDS-148 URL: https://issues.apache.org/jira/browse/HDDS-148 Project: Hadoop Distributed Data Store Issue Type: Sub-task Components: Ozone Datanode Reporter: Nanda kumar Assignee: Nanda kumar {{ContainerReportManager}} and {{ContainerReportManagerImpl}} are not used anywhere; these classes can be removed.
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/488/ [Jun 3, 2018 6:38:26 PM] (sunilg) YARN-8276. [UI2] After version field became mandatory, form-based [Jun 4, 2018 5:33:34 AM] (xiao) HDFS-13339. Volume reference can't be released and may lead to deadlock -1 overall The following subsystems voted -1: compile mvninstall pathlen unit The following subsystems voted -1 but were configured to be filtered/ignored: cc javac The following subsystems are considered long running: (runtime bigger than 1h 00m 00s) unit Specific tests: Failed junit tests : hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec hadoop.fs.contract.rawlocal.TestRawlocalContractAppend hadoop.fs.TestFsShellCopy hadoop.fs.TestFsShellList hadoop.fs.TestLocalFileSystem hadoop.http.TestHttpServer hadoop.http.TestHttpServerLogs hadoop.io.compress.TestCodec hadoop.io.nativeio.TestNativeIO hadoop.ipc.TestSocketFactory hadoop.metrics2.impl.TestStatsDMetrics hadoop.security.TestSecurityUtil hadoop.security.TestShellBasedUnixGroupsMapping hadoop.security.token.TestDtUtilShell hadoop.util.TestDiskCheckerWithDiskIo hadoop.util.TestNativeCodeLoader hadoop.hdfs.qjournal.server.TestJournalNode hadoop.hdfs.qjournal.server.TestJournalNodeSync hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage hadoop.hdfs.server.datanode.TestDataNodeFaultInjector hadoop.hdfs.server.datanode.TestDataNodeUUID hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure hadoop.hdfs.server.datanode.TestDirectoryScanner hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand hadoop.hdfs.server.diskbalancer.TestDiskBalancer hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
hadoop.hdfs.server.mover.TestMover hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA hadoop.hdfs.server.namenode.ha.TestHASafeMode hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics hadoop.hdfs.server.namenode.TestCacheDirectives hadoop.hdfs.server.namenode.TestEditLogRace hadoop.hdfs.server.namenode.TestNamenodeCapacityReport hadoop.hdfs.server.namenode.TestReencryption hadoop.hdfs.server.namenode.TestStartup hadoop.hdfs.TestDatanodeReport hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs hadoop.hdfs.TestDFSClientRetries hadoop.hdfs.TestDFSShell hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy hadoop.hdfs.TestDFSStripedOutputStreamWithFailure hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy hadoop.hdfs.TestDFSUpgradeFromImage hadoop.hdfs.TestErasureCodingMultipleRacks hadoop.hdfs.TestFetchImage hadoop.hdfs.TestFileConcurrentReader hadoop.hdfs.TestHDFSFileSystemContract hadoop.hdfs.TestLeaseRecovery2 hadoop.hdfs.TestLocalDFS hadoop.hdfs.TestMaintenanceState hadoop.hdfs.TestPersistBlocks hadoop.hdfs.TestPread hadoop.hdfs.TestReconstructStripedFile hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy hadoop.hdfs.TestSecureEncryptionZoneWithKMS hadoop.hdfs.TestTrashWithSecureEncryptionZones hadoop.hdfs.tools.TestDFSAdmin hadoop.hdfs.tools.TestDFSAdminWithHA hadoop.hdfs.web.TestWebHDFS hadoop.hdfs.web.TestWebHdfsUrl hadoop.fs.http.server.TestHttpFSServerWebServer hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch hadoop.yarn.server.nodemanager.containermanager.linux.privileged.TestPrivilegedOperationExecutor hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCGroupElasticMemoryController hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCGroupsHandlerImpl hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime 
hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestJavaSandboxLinuxContainerRuntime hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl hadoop.yarn.server.nodemanager.containermanager.TestAuxServices hadoop.yarn.server.nodemanager.containermanager.TestContainerManager hadoop.yarn.server.nodemanager.nodelabels.TestScriptBasedNodeLabelsProvider