[jira] [Created] (HDDS-325) Add event watcher for delete blocks command
Lokesh Jain created HDDS-325:
--------------------------------

             Summary: Add event watcher for delete blocks command
                 Key: HDDS-325
                 URL: https://issues.apache.org/jira/browse/HDDS-325
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Datanode, SCM
            Reporter: Lokesh Jain
            Assignee: Lokesh Jain

This Jira aims to add a watcher for the deleteBlocks command. It removes the RPC call currently required for a datanode to send the acknowledgement for deleteBlocks.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
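The change described above replaces a dedicated acknowledgement RPC with an event-watcher pattern: SCM records each deleteBlocks command it sends and marks it complete when a matching completion event arrives (e.g. via the datanode's regular heartbeat). The sketch below illustrates that pattern only; the class and method names are hypothetical and do not reflect the actual HDDS EventWatcher API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an event watcher for deleteBlocks commands.
// Pending commands are tracked in a map and retired by completion
// events instead of a separate acknowledgement RPC.
class DeleteBlocksCommandWatcher {
    // commandId -> UUID of the datanode the command was sent to
    private final Map<Long, String> pendingCommands = new ConcurrentHashMap<>();

    // Called when SCM dispatches a deleteBlocks command to a datanode.
    void onCommandSent(long commandId, String datanodeUuid) {
        pendingCommands.put(commandId, datanodeUuid);
    }

    // Called when a completion event for the command arrives; returns
    // true if the command was pending and is now retired.
    boolean onCompletionEvent(long commandId) {
        return pendingCommands.remove(commandId) != null;
    }

    // Commands still awaiting completion (candidates for retry/timeout).
    int pendingCount() {
        return pendingCommands.size();
    }
}
```

A watcher like this also gives a natural place to implement timeouts and retries for commands whose completion event never arrives.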
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/860/

No changes

-1 overall

The following subsystems voted -1:
   docker

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org
[jira] [Resolved] (HDDS-309) closePipelineIfNoOpenContainers should remove pipeline from activePipelines list.
[ https://issues.apache.org/jira/browse/HDDS-309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chencan resolved HDDS-309.
--------------------------
    Resolution: Not A Problem

> closePipelineIfNoOpenContainers should remove pipeline from activePipelines
> list.
> ---------------------------------------------------------------------------
>
>                 Key: HDDS-309
>                 URL: https://issues.apache.org/jira/browse/HDDS-309
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>            Reporter: chencan
>            Priority: Minor
>
> The closePipeline function removes the pipeline from pipelineMap and
> node2PipelineMap. If closePipeline is called by
> closePipelineIfNoOpenContainers, are we supposed to remove the pipeline
> from activePipelines as well?
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/549/

No changes

-1 overall

The following subsystems voted -1:
   compile mvninstall pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
   cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
   unit

Specific tests:

   Failed junit tests:
      hadoop.crypto.key.kms.server.TestKMS
      hadoop.cli.TestAclCLI
      hadoop.cli.TestAclCLIWithPosixAclInheritance
      hadoop.cli.TestCacheAdminCLI
      hadoop.cli.TestCryptoAdminCLI
      hadoop.cli.TestDeleteCLI
      hadoop.cli.TestErasureCodingCLI
      hadoop.cli.TestHDFSCLI
      hadoop.cli.TestXAttrCLI
      hadoop.fs.contract.hdfs.TestHDFSContractAppend
      hadoop.fs.contract.hdfs.TestHDFSContractConcat
      hadoop.fs.contract.hdfs.TestHDFSContractCreate
      hadoop.fs.contract.hdfs.TestHDFSContractDelete
      hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
      hadoop.fs.contract.hdfs.TestHDFSContractMkdir
      hadoop.fs.contract.hdfs.TestHDFSContractOpen
      hadoop.fs.contract.hdfs.TestHDFSContractPathHandle
      hadoop.fs.contract.hdfs.TestHDFSContractRename
      hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
      hadoop.fs.contract.hdfs.TestHDFSContractSeek
      hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
      hadoop.fs.loadGenerator.TestLoadGenerator
      hadoop.fs.permission.TestStickyBit
      hadoop.fs.shell.TestHdfsTextCommand
      hadoop.fs.TestEnhancedByteBufferAccess
      hadoop.fs.TestFcHdfsCreateMkdir
      hadoop.fs.TestFcHdfsPermission
      hadoop.fs.TestFcHdfsSetUMask
      hadoop.fs.TestGlobPaths
      hadoop.fs.TestHDFSFileContextMainOperations
      hadoop.fs.TestHDFSMultipartUploader
      hadoop.fs.TestHdfsNativeCodeLoader
      hadoop.fs.TestResolveHdfsSymlink
      hadoop.fs.TestSWebHdfsFileContextMainOperations
      hadoop.fs.TestSymlinkHdfsDisable
      hadoop.fs.TestSymlinkHdfsFileContext
      hadoop.fs.TestSymlinkHdfsFileSystem
      hadoop.fs.TestUnbuffer
      hadoop.fs.TestUrlStreamHandler
      hadoop.fs.TestWebHdfsFileContextMainOperations
      hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
      hadoop.fs.viewfs.TestViewFileSystemHdfs
      hadoop.fs.viewfs.TestViewFileSystemLinkFallback
      hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash
      hadoop.fs.viewfs.TestViewFileSystemWithAcls
      hadoop.fs.viewfs.TestViewFileSystemWithTruncate
      hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
      hadoop.fs.viewfs.TestViewFsAtHdfsRoot
      hadoop.fs.viewfs.TestViewFsDefaultValue
      hadoop.fs.viewfs.TestViewFsFileStatusHdfs
      hadoop.fs.viewfs.TestViewFsHdfs
      hadoop.fs.viewfs.TestViewFsWithAcls
      hadoop.fs.viewfs.TestViewFsWithXAttrs
      hadoop.hdfs.client.impl.TestBlockReaderLocal
      hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy
      hadoop.hdfs.client.impl.TestBlockReaderRemote
      hadoop.hdfs.client.impl.TestClientBlockVerification
      hadoop.hdfs.crypto.TestHdfsCryptoStreams
      hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
      hadoop.hdfs.qjournal.client.TestEpochsAreUnique
      hadoop.hdfs.qjournal.client.TestQJMWithFaults
      hadoop.hdfs.qjournal.client.TestQuorumJournalManager
      hadoop.hdfs.qjournal.server.TestJournal
      hadoop.hdfs.qjournal.server.TestJournalNode
      hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
      hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
      hadoop.hdfs.qjournal.server.TestJournalNodeSync
      hadoop.hdfs.qjournal.TestMiniJournalCluster
      hadoop.hdfs.qjournal.TestNNWithQJM
      hadoop.hdfs.qjournal.TestSecureNNWithQJM
      hadoop.hdfs.security.TestDelegationToken
      hadoop.hdfs.security.TestDelegationTokenForProxyUser
      hadoop.hdfs.security.token.block.TestBlockToken
      hadoop.hdfs.server.balancer.TestBalancer
      hadoop.hdfs.server.balancer.TestBalancerRPCDelay
      hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
      hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
      hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
      hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
      hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
      hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
      hadoop.hdfs.server.blockmanagement.TestBlockManager
      hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting
      hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
      hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
      hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
      hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
[jira] [Created] (HDDS-324) Use pipeline name as Ratis groupID to allow datanode to report pipeline info
Mukul Kumar Singh created HDDS-324:
--------------------------------------

             Summary: Use pipeline name as Ratis groupID to allow datanode to report pipeline info
                 Key: HDDS-324
                 URL: https://issues.apache.org/jira/browse/HDDS-324
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Datanode
    Affects Versions: 0.2.1
            Reporter: Mukul Kumar Singh
            Assignee: Mukul Kumar Singh
             Fix For: 0.2.1

Currently Ozone creates a random pipeline id for every pipeline, where a pipeline consists of 3 nodes in a Ratis ring. Ratis, on the other hand, uses the notion of a RaftGroupID, which is a unique id for the nodes in a Ratis ring. When a datanode sends information to SCM, the pipeline for the node is currently identified using dn2PipelineMap. With correct use of the RaftGroupID, we can eliminate the use of dn2PipelineMap.
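The idea above is that if the Ratis group id is derived from the pipeline name itself, a datanode's Ratis group report identifies the pipeline directly, and SCM no longer needs a dn2PipelineMap lookup. A minimal sketch of such a deterministic mapping, using a name-based UUID; the class and method names are illustrative, not Ozone's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

// Hypothetical sketch: derive a stable group id from the pipeline name,
// so the same pipeline always maps to the same Ratis group id and the
// reverse lookup table (dn2PipelineMap) becomes unnecessary.
class PipelineGroupId {
    static UUID groupIdFor(String pipelineName) {
        // Name-based (type 3) UUID: identical names always produce
        // identical UUIDs, unlike a randomly generated pipeline id.
        return UUID.nameUUIDFromBytes(
            pipelineName.getBytes(StandardCharsets.UTF_8));
    }
}
```

A UUID produced this way could then be used to construct the Ratis group identifier, since Ratis builds RaftGroupID values from a UUID; the exact construction in Ozone may differ.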