[jira] [Created] (HDFS-12537) Ozone: Reduce key creation overhead in Corona
Lokesh Jain created HDFS-12537: -- Summary: Ozone: Reduce key creation overhead in Corona Key: HDFS-12537 URL: https://issues.apache.org/jira/browse/HDFS-12537 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain Currently Corona generates a random value for every key, which creates a lot of overhead. An option should be provided to reuse a single key value. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
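The proposed option can be sketched as follows. This is a minimal illustration, not Corona's actual code: the class, method, and buffer size are hypothetical; only the idea (generate one value once and reuse it, instead of filling a fresh random buffer per key) comes from the issue description.

```java
import java.util.concurrent.ThreadLocalRandom;

public class KeyPayload {
    // Illustrative value size; Corona's actual default may differ.
    static final int VALUE_SIZE = 10240;
    private static byte[] fixedValue;

    // When reuse is true, the same buffer is handed out for every key,
    // skipping the per-key random generation that causes the overhead.
    static synchronized byte[] nextValue(boolean reuse) {
        if (reuse) {
            if (fixedValue == null) {
                fixedValue = new byte[VALUE_SIZE];
                ThreadLocalRandom.current().nextBytes(fixedValue);
            }
            return fixedValue;
        }
        byte[] buf = new byte[VALUE_SIZE];
        ThreadLocalRandom.current().nextBytes(buf);
        return buf;
    }
}
```

With reuse enabled the random generator runs once instead of once per key, which is where the saving comes from.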
[jira] [Resolved] (HDFS-12521) Ozone: SCM should read all Container info into memory when booting up
[ https://issues.apache.org/jira/browse/HDFS-12521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDFS-12521. Resolution: Duplicate
> Ozone: SCM should read all Container info into memory when booting up
>
> Key: HDFS-12521
> URL: https://issues.apache.org/jira/browse/HDFS-12521
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Anu Engineer
> Assignee: Lokesh Jain
> Labels: performance
>
> When SCM boots up it should read all containers into memory. This is a
> performance optimization that avoids lookup delays on the SCM side. This
> JIRA tracks that issue.
[jira] [Reopened] (HDFS-12521) Ozone: SCM should read all Container info into memory when booting up
[ https://issues.apache.org/jira/browse/HDFS-12521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain reopened HDFS-12521:
> Ozone: SCM should read all Container info into memory when booting up
>
> Key: HDFS-12521
> URL: https://issues.apache.org/jira/browse/HDFS-12521
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Anu Engineer
> Assignee: Lokesh Jain
> Labels: performance
>
> When SCM boots up it should read all containers into memory. This is a
> performance optimization that avoids lookup delays on the SCM side. This
> JIRA tracks that issue.
[jira] [Resolved] (HDFS-12616) Ozone: SCM: Open containers are not reused for block allocation after restart
[ https://issues.apache.org/jira/browse/HDFS-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDFS-12616. Resolution: Duplicate
> Ozone: SCM: Open containers are not reused for block allocation after restart
>
> Key: HDFS-12616
> URL: https://issues.apache.org/jira/browse/HDFS-12616
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Nandakumar
> Assignee: Lokesh Jain
>
> When SCM is restarted, previously opened containers are not loaded by
> {{ContainerStateManager}}. This causes creation of a new container for each
> {{BlockManagerImpl#allocateBlock}} call.
[jira] [Created] (HDFS-12678) Ozone: Corona: Add statistical information to json output
Lokesh Jain created HDFS-12678: -- Summary: Ozone: Corona: Add statistical information to json output Key: HDFS-12678 URL: https://issues.apache.org/jira/browse/HDFS-12678 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain
[jira] [Created] (HDDS-24) Ozone: Rename directory in ozonefs should be atomic
Lokesh Jain created HDDS-24: --- Summary: Ozone: Rename directory in ozonefs should be atomic Key: HDDS-24 URL: https://issues.apache.org/jira/browse/HDDS-24 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Lokesh Jain Assignee: Lokesh Jain Currently rename in ozonefs is not atomic. While a rename is in progress, another client might add a new file to the directory. Further, if the rename fails midway, the directory is left in an inconsistent state.
[jira] [Created] (HDDS-36) Ozone: Enable checkstyle for HDDS
Lokesh Jain created HDDS-36: --- Summary: Ozone: Enable checkstyle for HDDS Key: HDDS-36 URL: https://issues.apache.org/jira/browse/HDDS-36 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 Before the merge we had disabled checkstyle for hadoop-hdds and hadoop-ozone. Since we now have our own subprojects, we can enable it with the hdds profile.
[jira] [Resolved] (HDDS-36) Ozone: Enable checkstyle for HDDS
[ https://issues.apache.org/jira/browse/HDDS-36?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-36. Resolution: Not A Problem
> Ozone: Enable checkstyle for HDDS
>
> Key: HDDS-36
> URL: https://issues.apache.org/jira/browse/HDDS-36
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Lokesh Jain
> Assignee: Lokesh Jain
> Priority: Major
> Fix For: 0.2.1
>
> Before the merge we had disabled checkstyle for hadoop-hdds and hadoop-ozone.
> Since we now have our own subprojects, we can enable it with the hdds profile.
[jira] [Created] (HDDS-47) Add acceptance tests for Ozone Shell
Lokesh Jain created HDDS-47: --- Summary: Add acceptance tests for Ozone Shell Key: HDDS-47 URL: https://issues.apache.org/jira/browse/HDDS-47 Project: Hadoop Distributed Data Store Issue Type: Test Reporter: Lokesh Jain Assignee: Lokesh Jain
[jira] [Created] (HDDS-72) Add numDeletedBlocks field in ContainerInfo
Lokesh Jain created HDDS-72: --- Summary: Add numDeletedBlocks field in ContainerInfo Key: HDDS-72 URL: https://issues.apache.org/jira/browse/HDDS-72 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to add a numDeletedBlocks field to SCMContainerInfo and ContainerInfo in hdds.proto and StorageContainerDatanodeProtocol.proto respectively. This field would serve as a generation id for block deletes, as it is monotonically increasing, and would also report the number of deleted blocks.
[jira] [Created] (HDDS-73) Add acceptance tests for Ozone Shell
Lokesh Jain created HDDS-73: --- Summary: Add acceptance tests for Ozone Shell Key: HDDS-73 URL: https://issues.apache.org/jira/browse/HDDS-73 Project: Hadoop Distributed Data Store Issue Type: Test Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to add acceptance tests covering the http and o3 schemes and various server/port combinations in shell commands.
[jira] [Created] (HDDS-109) Add reconnect logic for XceiverClientGrpc
Lokesh Jain created HDDS-109: Summary: Add reconnect logic for XceiverClientGrpc Key: HDDS-109 URL: https://issues.apache.org/jira/browse/HDDS-109 Project: Hadoop Distributed Data Store Issue Type: Task Components: Ozone Client Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 We need to add reconnect logic in XceiverClientGrpc which allows it to reconnect in case of a datanode restart.
[jira] [Created] (HDDS-113) Rest and Rpc Client should verify resource name using HddsClientUtils
Lokesh Jain created HDDS-113: Summary: Rest and Rpc Client should verify resource name using HddsClientUtils Key: HDDS-113 URL: https://issues.apache.org/jira/browse/HDDS-113 Project: Hadoop Distributed Data Store Issue Type: Task Components: Ozone Client Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 Currently the Rest and Rpc clients do not verify resource names. They should do so using HddsClientUtils#verifyResourceName.
[jira] [Created] (HDDS-112) OzoneShell should support commands with url without scheme
Lokesh Jain created HDDS-112: Summary: OzoneShell should support commands with url without scheme Key: HDDS-112 URL: https://issues.apache.org/jira/browse/HDDS-112 Project: Hadoop Distributed Data Store Issue Type: Task Components: Ozone Client Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 Currently URLs need to have an o3 or http scheme in OzoneShell. This Jira aims to support URLs without a scheme in OzoneShell; for such cases the default o3 scheme would be used.
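The default-scheme behaviour described above can be sketched with plain `java.net.URI` handling. This is an illustration, not the actual OzoneShell code; the class and method names are hypothetical, and a real implementation would also need to handle host:port addresses that `URI` would misparse as a scheme.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class ShellAddress {
    static final String DEFAULT_SCHEME = "o3";

    // Returns the address unchanged if it already carries a scheme,
    // otherwise re-parses it with the default o3 scheme prepended.
    static URI withDefaultScheme(String address) throws URISyntaxException {
        URI uri = new URI(address);
        if (uri.getScheme() != null) {
            return uri;
        }
        return new URI(DEFAULT_SCHEME + ":" + address);
    }
}
```

So `/volume1/bucket1/key1` resolves under the o3 scheme, while an explicit `http://...` address is left alone.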
[jira] [Created] (HDDS-159) RestClient: Implement list operations for volume, bucket and keys
Lokesh Jain created HDDS-159: Summary: RestClient: Implement list operations for volume, bucket and keys Key: HDDS-159 URL: https://issues.apache.org/jira/browse/HDDS-159 Project: Hadoop Distributed Data Store Issue Type: Task Components: Ozone Client Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 Attachments: HDDS-159.001.patch Currently RestClient does not provide implementations for listing volumes, buckets and keys. This Jira aims to add the implementation and necessary tests for the same.
[jira] [Created] (HDDS-178) DeleteBlocks should not be handled by open containers
Lokesh Jain created HDDS-178: Summary: DeleteBlocks should not be handled by open containers Key: HDDS-178 URL: https://issues.apache.org/jira/browse/HDDS-178 Project: Hadoop Distributed Data Store Issue Type: Task Components: Ozone Datanode Reporter: Lokesh Jain Assignee: Lokesh Jain In the case of open containers, the deleteBlocks command just adds an entry in the log but does not delete the blocks. These blocks are deleted only when the container is closed.
[jira] [Created] (HDFS-12735) Make ContainerStateMachine#applyTransaction async
Lokesh Jain created HDFS-12735: -- Summary: Make ContainerStateMachine#applyTransaction async Key: HDFS-12735 URL: https://issues.apache.org/jira/browse/HDFS-12735 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain Currently ContainerStateMachine#applyTransaction makes a synchronous call to dispatch client requests. The idea is to have a thread pool which dispatches client requests and returns a CompletableFuture.
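The thread-pool idea can be sketched as below. This is a minimal model, not the actual ContainerStateMachine code: the class name, the pool size, and the string "request" stand in for the real dispatcher call; the point is that dispatch happens on a pool and the caller gets a CompletableFuture instead of blocking.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncApply {
    // Daemon threads so an example JVM can exit cleanly; pool size is arbitrary.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Dispatches the request on the pool and returns immediately; the
    // caller (Ratis, in the real code) reacts when the future completes.
    static CompletableFuture<String> applyTransaction(String request) {
        return CompletableFuture.supplyAsync(() -> "applied:" + request, POOL);
    }
}
```

The Raft log apply loop is then free to move on to the next entry while the container dispatch runs on the pool.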
[jira] [Created] (HDFS-13000) Ozone: OzoneFileSystem: Implement seek functionality for rest client
Lokesh Jain created HDFS-13000: -- Summary: Ozone: OzoneFileSystem: Implement seek functionality for rest client Key: HDFS-13000 URL: https://issues.apache.org/jira/browse/HDFS-13000 Project: Hadoop HDFS Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain This jira aims to add seek functionality to the rest client input stream.
[jira] [Created] (HDFS-13038) User with no permission on file is able to run getfacl for that file
Lokesh Jain created HDFS-13038: -- Summary: User with no permission on file is able to run getfacl for that file Key: HDFS-13038 URL: https://issues.apache.org/jira/browse/HDFS-13038 Project: Hadoop HDFS Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently any user with EXECUTE permission can run getfacl on a file or directory. This Jira adds a check for READ access of the user on the inode path.
{code:java}
[root@host ~]$ hdfs dfs -copyFromLocal /etc/a.txt /tmp
[root@host ~]$ hdfs dfs -setfacl -m user:abc:--- /tmp/a.txt
{code}
Since user abc does not have read permission on the file, the 'cat' command throws a Permission Denied error but getfacl executes normally.
{code:java}
[abc@host ~]$ hdfs dfs -cat /tmp/a.txt
cat: Permission denied: user=abc, access=READ, inode="/tmp/a.txt":abc:hdfs:-rw-r--r--
[abc@host ~]$ hdfs dfs -getfacl /tmp/a.txt
# file: /tmp/a.txt
# owner: root
# group: hdfs
user::rw-
user:abc:---
group::r--
mask::r--
other::r--
{code}
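The proposed check can be modeled with plain POSIX-style permission bits. This is a simplified sketch under assumed names (the real code goes through FSPermissionChecker, whose API differs): getfacl should require READ on the inode itself, so EXECUTE alone, which previously sufficed for the call, is rejected.

```java
public class AclCheck {
    // POSIX-style permission bits for one class of user (illustrative model).
    static final int READ = 4, WRITE = 2, EXECUTE = 1;

    // The fix in miniature: serve getfacl only when the caller's
    // permission bits on the inode include READ.
    static boolean canGetFacl(int userPermBits) {
        return (userPermBits & READ) != 0;
    }
}
```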
[jira] [Created] (HDFS-13134) Ozone: Format open containers on datanode restart
Lokesh Jain created HDFS-13134: -- Summary: Ozone: Format open containers on datanode restart Key: HDFS-13134 URL: https://issues.apache.org/jira/browse/HDFS-13134 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain Once a datanode is restarted, its open containers should be formatted. Only the open containers whose pipeline has a replication factor of three will need to be formatted. The format command is sent by SCM to the datanode after the corresponding containers have been successfully replicated.
[jira] [Created] (HDFS-13141) WebHDFS: Add support for getting snapshottable directory list
Lokesh Jain created HDFS-13141: -- Summary: WebHDFS: Add support for getting snapshottable directory list Key: HDFS-13141 URL: https://issues.apache.org/jira/browse/HDFS-13141 Project: Hadoop HDFS Issue Type: Task Reporter: Lokesh Jain Assignee: Lokesh Jain
[jira] [Created] (HDDS-308) SCM should identify a container with pending deletes using container reports
Lokesh Jain created HDDS-308: Summary: SCM should identify a container with pending deletes using container reports Key: HDDS-308 URL: https://issues.apache.org/jira/browse/HDDS-308 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 SCM should fire an event when it finds, using a container report, that a container's deleteTransactionID does not match SCM's deleteTransactionId.
[jira] [Created] (HDDS-325) Add event watcher for delete blocks command
Lokesh Jain created HDDS-325: Summary: Add event watcher for delete blocks command Key: HDDS-325 URL: https://issues.apache.org/jira/browse/HDDS-325 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode, SCM Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to add a watcher for the deleteBlocks command. It removes the current RPC call required for the datanode to send the acknowledgement for deleteBlocks.
[jira] [Created] (HDDS-273) DeleteLog entries should be purged only after corresponding dns commit the transaction
Lokesh Jain created HDDS-273: Summary: DeleteLog entries should be purged only after corresponding dns commit the transaction Key: HDDS-273 URL: https://issues.apache.org/jira/browse/HDDS-273 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 Attachments: HDDS-273.001.patch Currently DeleteBlockLogImpl removes the entry corresponding to a delete-block transaction as soon as it receives an acknowledgement from a single DN. The entries should be deleted only after all corresponding DNs acknowledge the delete command.
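The all-replicas rule can be sketched with a small ack tracker. The class and method names here are hypothetical, not the DeleteBlockLogImpl API: a transaction records its replica datanodes up front, and the entry becomes purgeable only when the last outstanding datanode acknowledges.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DeleteTxnTracker {
    // Transaction id -> datanodes that have not yet acknowledged.
    private final Map<Long, Set<String>> pendingAcks = new HashMap<>();

    void addTransaction(long txId, Set<String> replicaDns) {
        pendingAcks.put(txId, new HashSet<>(replicaDns));
    }

    // Returns true only when the last outstanding datanode acks,
    // i.e. the entry may now be purged from the delete log.
    boolean ack(long txId, String dnId) {
        Set<String> waiting = pendingAcks.get(txId);
        if (waiting == null) {
            return false;
        }
        waiting.remove(dnId);
        if (waiting.isEmpty()) {
            pendingAcks.remove(txId);
            return true;
        }
        return false;
    }
}
```

Purging on the first ack, as in the current behaviour, would lose track of replicas that never committed the delete.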
[jira] [Created] (HDDS-279) DeleteBlocks command should not be sent for open containers
Lokesh Jain created HDDS-279: Summary: DeleteBlocks command should not be sent for open containers Key: HDDS-279 URL: https://issues.apache.org/jira/browse/HDDS-279 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 Currently SCM sends the deleteBlocks command for open containers as well. This Jira aims to disable that.
[jira] [Created] (HDFS-13765) Fix javadoc for FSDirMkdirOp#createParentDirectories
Lokesh Jain created HDFS-13765: -- Summary: Fix javadoc for FSDirMkdirOp#createParentDirectories Key: HDFS-13765 URL: https://issues.apache.org/jira/browse/HDFS-13765 Project: Hadoop HDFS Issue Type: Bug Components: documentation Reporter: Lokesh Jain Assignee: Lokesh Jain Javadoc needs to be fixed for FSDirMkdirOp#createParentDirectories.
[jira] [Created] (HDDS-422) ContainerStateMachine.readStateMachineData throws OverlappingFileLockException
Lokesh Jain created HDDS-422: Summary: ContainerStateMachine.readStateMachineData throws OverlappingFileLockException Key: HDDS-422 URL: https://issues.apache.org/jira/browse/HDDS-422 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1
{code:java}
2018-09-06 23:11:41,386 ERROR org.apache.ratis.server.impl.LogAppender: GRpcLogAppender(d95c60fd-0e23-4237-8135-e05a326b952d_9858 -> 954e7a3b-b20e-43a5-8f82-4381872aa7bb_9858) hit IOException while loading raft log
org.apache.ratis.server.storage.RaftLogIOException: d95c60fd-0e23-4237-8135-e05a326b952d_9858: Failed readStateMachineData for (t:39, i:667)SMLOGENTRY, client-CD988394E416, cid=90
	at org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:360)
	at org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:165)
	at org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:214)
	at org.apache.ratis.grpc.server.GRpcLogAppender.appendLog(GRpcLogAppender.java:148)
	at org.apache.ratis.grpc.server.GRpcLogAppender.runAppenderImpl(GRpcLogAppender.java:92)
	at org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:101)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.OverlappingFileLockException
	at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)
	at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)
	at sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)
	at sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)
	at sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)
	at org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:176)
	at org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:161)
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:598)
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:201)
	at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
	at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:217)
	at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:289)
	at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$3(ContainerStateMachine.java:359)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	... 1 more
{code}
[jira] [Created] (HDDS-416) Fix bug in ChunkInputStreamEntry
Lokesh Jain created HDDS-416: Summary: Fix bug in ChunkInputStreamEntry Key: HDDS-416 URL: https://issues.apache.org/jira/browse/HDDS-416 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Client Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 ChunkInputStreamEntry maintains a currentPosition field. This field is redundant and can be replaced by getPos().
[jira] [Created] (HDDS-386) Create a datanode cli
Lokesh Jain created HDDS-386: Summary: Create a datanode cli Key: HDDS-386 URL: https://issues.apache.org/jira/browse/HDDS-386 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 For block deletion we need a debug cli on the datanode to know the state of the containers and the number of chunks present in each container.
[jira] [Created] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler
Lokesh Jain created HDDS-251: Summary: Integrate BlockDeletingService in KeyValueHandler Key: HDDS-251 URL: https://issues.apache.org/jira/browse/HDDS-251 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also fixes the unit tests related to delete blocks.
[jira] [Created] (HDDS-254) Fix TestStorageContainerManager#testBlockDeletingThrottling
Lokesh Jain created HDDS-254: Summary: Fix TestStorageContainerManager#testBlockDeletingThrottling Key: HDDS-254 URL: https://issues.apache.org/jira/browse/HDDS-254 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain
[jira] [Created] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks
Lokesh Jain created HDDS-253: Summary: SCMBlockDeletingService should publish events for delete blocks Key: HDDS-253 URL: https://issues.apache.org/jira/browse/HDDS-253 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 SCMBlockDeletingService should publish events for the deleteBlocks command. Currently it directly makes a call to SCMNodeManager.
[jira] [Created] (HDFS-13052) WebHDFS: Add support for snapshot diff
Lokesh Jain created HDFS-13052: -- Summary: WebHDFS: Add support for snapshot diff Key: HDFS-13052 URL: https://issues.apache.org/jira/browse/HDFS-13052 Project: Hadoop HDFS Issue Type: Task Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to implement the snapshot diff operation for the webHdfs filesystem.
[jira] [Created] (HDFS-13271) WebHDFS: Add constructor in SnapshottableDirectoryStatus with HdfsFileStatus as argument
Lokesh Jain created HDFS-13271: -- Summary: WebHDFS: Add constructor in SnapshottableDirectoryStatus with HdfsFileStatus as argument Key: HDFS-13271 URL: https://issues.apache.org/jira/browse/HDFS-13271 Project: Hadoop HDFS Issue Type: Bug Components: webhdfs Reporter: Lokesh Jain Assignee: Lokesh Jain This jira aims to add a constructor in SnapshottableDirectoryStatus which takes HdfsFileStatus as an argument. This constructor will be used in JsonUtilClient#toSnapshottableDirectoryStatus for creating a SnapshottableDirectoryStatus object.
[jira] [Created] (HDFS-13280) WebHDFS: Fix NPE in get snapshottable directory list call
Lokesh Jain created HDFS-13280: -- Summary: WebHDFS: Fix NPE in get snapshottable directory list call Key: HDFS-13280 URL: https://issues.apache.org/jira/browse/HDFS-13280 Project: Hadoop HDFS Issue Type: Bug Components: webhdfs Reporter: Lokesh Jain Assignee: Lokesh Jain WebHdfs throws an NPE when the snapshottable directory status list is null.
[jira] [Created] (HDFS-13249) Document webhdfs support for getting snapshottable directory list
Lokesh Jain created HDFS-13249: -- Summary: Document webhdfs support for getting snapshottable directory list Key: HDFS-13249 URL: https://issues.apache.org/jira/browse/HDFS-13249 Project: Hadoop HDFS Issue Type: Bug Components: documentation, webhdfs Reporter: Lokesh Jain Assignee: Lokesh Jain This ticket is opened to document, in WebHDFS.md, the WebHDFS support for getting the snapshottable directory list added in HDFS-13141.
[jira] [Resolved] (HDFS-13394) Ozone: ContainerID has incorrect package name
[ https://issues.apache.org/jira/browse/HDFS-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDFS-13394. Resolution: Not A Problem
> Ozone: ContainerID has incorrect package name
>
> Key: HDFS-13394
> URL: https://issues.apache.org/jira/browse/HDFS-13394
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Nanda kumar
> Assignee: Lokesh Jain
> Priority: Major
> Labels: newbie
>
> The {{ContainerID}} package name and the directory structure where the class
> is present don't match.
[jira] [Created] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient
Lokesh Jain created HDFS-13431: -- Summary: Ozone: Ozone Shell should use RestClient and RpcClient Key: HDFS-13431 URL: https://issues.apache.org/jira/browse/HDFS-13431 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and RpcClient instead of OzoneRestClient.
[jira] [Created] (HDFS-13444) Ozone: Fix checkstyle issues in HDFS-7240
Lokesh Jain created HDFS-13444: -- Summary: Ozone: Fix checkstyle issues in HDFS-7240 Key: HDFS-13444 URL: https://issues.apache.org/jira/browse/HDFS-13444 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain Attachments: HDFS-7240.007.patch
[jira] [Created] (HDFS-13458) Ozone: Fix javadoc issues in HDFS-7240
Lokesh Jain created HDFS-13458: -- Summary: Ozone: Fix javadoc issues in HDFS-7240 Key: HDFS-13458 URL: https://issues.apache.org/jira/browse/HDFS-13458 Project: Hadoop HDFS Issue Type: Bug Reporter: Lokesh Jain
[jira] [Created] (HDFS-13456) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)
Lokesh Jain created HDFS-13456: -- Summary: Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT) Key: HDFS-13456 URL: https://issues.apache.org/jira/browse/HDFS-13456 Project: Hadoop HDFS Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain
[jira] [Created] (HDFS-13228) Ozone: Add support for rename key within a bucket for rpc client
Lokesh Jain created HDFS-13228: -- Summary: Ozone: Add support for rename key within a bucket for rpc client Key: HDFS-13228 URL: https://issues.apache.org/jira/browse/HDFS-13228 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain This jira aims to implement a rename operation on a key within a bucket for the rpc client. OzoneFilesystem currently rewrites a key on rename. Adding this operation would simplify renames in OzoneFilesystem, as a rename would just be a db update in ksm.
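The "rename is just a db update" idea can be sketched with an in-memory table. This is a conceptual model under assumed names, not the KSM code (KSM's actual key table is a RocksDB store, and the value is the key's block metadata): renaming moves the metadata entry while the block data it references is untouched, instead of rewriting the key as OzoneFilesystem does today.

```java
import java.util.HashMap;
import java.util.Map;

public class KeyTable {
    // keyName -> block metadata (modeled as a String here).
    private final Map<String, String> table = new HashMap<>();

    void put(String key, String blockInfo) {
        table.put(key, blockInfo);
    }

    String get(String key) {
        return table.get(key);
    }

    // Moves the metadata entry from one key name to another; no key
    // data is copied. Fails (and restores) if the source is missing
    // or the destination already exists.
    boolean rename(String fromKey, String toKey) {
        String info = table.remove(fromKey);
        if (info == null) {
            return false;
        }
        if (table.containsKey(toKey)) {
            table.put(fromKey, info); // restore on failure
            return false;
        }
        table.put(toKey, info);
        return true;
    }
}
```

In the real store the remove and put would go into a single batch write so the rename is atomic.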
[jira] [Created] (HDFS-13229) Ozone: Add support for rename key within a bucket for rest client
Lokesh Jain created HDFS-13229: -- Summary: Ozone: Add support for rename key within a bucket for rest client Key: HDFS-13229 URL: https://issues.apache.org/jira/browse/HDFS-13229 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Lokesh Jain Assignee: Lokesh Jain This jira aims to add support for rename key within a bucket for the REST client.
[jira] [Created] (HDDS-587) Add new classes for pipeline management
Lokesh Jain created HDDS-587: Summary: Add new classes for pipeline management Key: HDDS-587 URL: https://issues.apache.org/jira/browse/HDDS-587 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.3.0 This Jira adds new classes and corresponding unit tests for pipeline management in SCM. The old classes will be removed in a subsequent jira.
[jira] [Created] (HDDS-656) Add logic for pipeline report and action processing in new pipeline code
Lokesh Jain created HDDS-656: Summary: Add logic for pipeline report and action processing in new pipeline code Key: HDDS-656 URL: https://issues.apache.org/jira/browse/HDDS-656 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain As part of the pipeline refactoring, new pipeline management classes were added in HDDS-587. This Jira adds logic for pipeline report and action processing in the new code.
[jira] [Created] (HDDS-694) Plugin new Pipeline management code in SCM
Lokesh Jain created HDDS-694: Summary: Plugin new Pipeline management code in SCM Key: HDDS-694 URL: https://issues.apache.org/jira/browse/HDDS-694 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to plug in the new pipeline management code in SCM. It also removes the old pipeline-related classes.
[jira] [Created] (HDDS-668) Replica Manager should use replica with latest delete transactionID
Lokesh Jain created HDDS-668: Summary: Replica Manager should use replica with latest delete transactionID Key: HDDS-668 URL: https://issues.apache.org/jira/browse/HDDS-668 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently the replica manager does not use the delete transactionID when choosing the replica to be replicated. This Jira aims to store the delete transactionID for each replica so that the replica manager can choose the replica with the latest delete transactionID.
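The selection rule proposed above can be sketched as follows; `ContainerReplica` here is an illustrative stand-in, not the actual SCM class.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the proposed rule: among reported replicas,
// prefer the one that has applied the most delete transactions.
public class ReplicaSelector {
    public static class ContainerReplica {
        final String datanode;
        final long deleteTransactionId;

        public ContainerReplica(String datanode, long deleteTransactionId) {
            this.datanode = datanode;
            this.deleteTransactionId = deleteTransactionId;
        }
    }

    // Pick the replica with the highest delete transaction ID as the
    // replication source, so stale deletes are not resurrected.
    public static Optional<ContainerReplica> selectSource(
            List<ContainerReplica> replicas) {
        return replicas.stream()
            .max(Comparator.comparingLong(r -> r.deleteTransactionId));
    }
}
```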
[jira] [Resolved] (HDDS-422) ContainerStateMachine.readStateMachineData throws OverlappingFileLockException
[ https://issues.apache.org/jira/browse/HDDS-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-422. -- Resolution: Not A Bug > ContainerStateMachine.readStateMachineData throws OverlappingFileLockException > -- > > Key: HDDS-422 > URL: https://issues.apache.org/jira/browse/HDDS-422 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Labels: Arches-Deferral-Candidate > > > {code:java} > 2018-09-06 23:11:41,386 ERROR org.apache.ratis.server.impl.LogAppender: > GRpcLogAppender(d95c60fd-0e23-4237-8135-e05a326b952d_9858 -> > 954e7a3b-b20e-43a5-8f82-4381872aa7bb_9858) hit IOException while loading raft log > org.apache.ratis.server.storage.RaftLogIOException: > d95c60fd-0e23-4237-8135-e05a326b952d_9858: Failed readStateMachineData for > (t:39, i:667)SMLOGENTRY, client-CD988394E416, cid=90 > at > org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:360) > at > org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:165) > at > org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:214) > at > org.apache.ratis.grpc.server.GRpcLogAppender.appendLog(GRpcLogAppender.java:148) > at > org.apache.ratis.grpc.server.GRpcLogAppender.runAppenderImpl(GRpcLogAppender.java:92) > at org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:101) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.nio.channels.OverlappingFileLockException > at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255) > at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152) > at > sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178) > at > sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185) > at > sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118) > at > org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:176) > at > org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:161) > at > org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:598) > at > org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:201) > at > org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142) > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:217) > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:289) > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$3(ContainerStateMachine.java:359) > at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > ... 1 more > {code}
[jira] [Created] (HDDS-771) ChunkGroupOutputStream stream entries need to be properly updated on closed container exception
Lokesh Jain created HDDS-771: Summary: ChunkGroupOutputStream stream entries need to be properly updated on closed container exception Key: HDDS-771 URL: https://issues.apache.org/jira/browse/HDDS-771 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently ChunkGroupOutputStream does not increment currentStreamIndex when a chunk write completes but there is no data in the buffer. This leads to overwriting of the stream entry. We also need to update the bcsid in case of a closed container exception: the stream entry's bcsid needs to be updated with the bcsid of the committed block.
[jira] [Created] (HDDS-744) Fix ASF license warning in PipelineNotFoundException class
Lokesh Jain created HDDS-744: Summary: Fix ASF license warning in PipelineNotFoundException class Key: HDDS-744 URL: https://issues.apache.org/jira/browse/HDDS-744 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain
[jira] [Created] (HDDS-802) Container State Manager should get open pipelines for allocating container
Lokesh Jain created HDDS-802: Summary: Container State Manager should get open pipelines for allocating container Key: HDDS-802 URL: https://issues.apache.org/jira/browse/HDDS-802 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.4.0 ContainerStateManager#allocateContainer currently calls getPipelines(type, factor), which returns pipelines in all states. This Jira aims to add another API, getPipelines(type, factor, state), which the container state manager can call to get only the open pipelines.
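A minimal sketch of the proposed three-argument overload, assuming a simple in-memory registry; `PipelineRegistry` and its fields are hypothetical stand-ins, not the SCM classes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the existing two-argument lookup returns pipelines
// in all states; the proposed overload filters on state as well, so
// ContainerStateManager only sees OPEN pipelines.
public class PipelineRegistry {
    public enum State { ALLOCATED, OPEN, CLOSED }

    public static class Pipeline {
        final String type;   // e.g. "RATIS"
        final int factor;    // replication factor
        final State state;

        public Pipeline(String type, int factor, State state) {
            this.type = type;
            this.factor = factor;
            this.state = state;
        }
    }

    private final List<Pipeline> pipelines = new ArrayList<>();

    public void add(Pipeline p) {
        pipelines.add(p);
    }

    // Existing-style lookup: matches on type and factor, all states.
    public List<Pipeline> getPipelines(String type, int factor) {
        List<Pipeline> out = new ArrayList<>();
        for (Pipeline p : pipelines) {
            if (p.type.equals(type) && p.factor == factor) {
                out.add(p);
            }
        }
        return out;
    }

    // Proposed overload: additionally restrict to a single state.
    public List<Pipeline> getPipelines(String type, int factor, State state) {
        List<Pipeline> out = new ArrayList<>();
        for (Pipeline p : getPipelines(type, factor)) {
            if (p.state == state) {
                out.add(p);
            }
        }
        return out;
    }
}
```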
[jira] [Resolved] (HDDS-734) Remove create container logic from OzoneClient
[ https://issues.apache.org/jira/browse/HDDS-734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-734. -- Resolution: Duplicate This issue has been fixed via HDDS-733. > Remove create container logic from OzoneClient > -- > > Key: HDDS-734 > URL: https://issues.apache.org/jira/browse/HDDS-734 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Client >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > > After HDDS-733, the container will be created as part of the first chunk > write, so we don't need explicit container creation code in {{OzoneClient}} > anymore.
[jira] [Resolved] (HDDS-735) Remove ALLOCATED and CREATING state from ContainerStateManager
[ https://issues.apache.org/jira/browse/HDDS-735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-735. -- Resolution: Duplicate This issue has been fixed via HDDS-733. > Remove ALLOCATED and CREATING state from ContainerStateManager > -- > > Key: HDDS-735 > URL: https://issues.apache.org/jira/browse/HDDS-735 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Nanda kumar >Assignee: Lokesh Jain >Priority: Major > > After HDDS-733 and HDDS-734, we don't need the ALLOCATED and CREATING states for > containers in SCM. The container will move to the OPEN state as soon as it is > allocated in SCM. Since container creation happens as part of the first > chunk write and the container creation operation in the datanode is idempotent, we don't > have to worry about giving out the same container to multiple clients as soon > as it is allocated.
[jira] [Created] (HDDS-844) Add logic for pipeline teardown after timeout
Lokesh Jain created HDDS-844: Summary: Add logic for pipeline teardown after timeout Key: HDDS-844 URL: https://issues.apache.org/jira/browse/HDDS-844 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Affects Versions: 0.4.0 Reporter: Lokesh Jain Assignee: Lokesh Jain On receiving a pipeline action we close the pipeline and wait for all its containers to be closed. Currently the pipeline is destroyed on the datanodes only after all the containers have been closed. Containers may never reach the CLOSED state if there is a two-node failure. In such scenarios the pipeline needs to be destroyed and removed from SCM after a timeout.
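The timeout rule described above can be sketched as a simple predicate; the class name, the millisecond clock, and the parameters are illustrative assumptions, not the actual SCM code.

```java
// Hypothetical sketch: a closed pipeline is torn down once all of its
// containers are CLOSED, or unconditionally after a timeout, so a stuck
// container (e.g. after a two-node failure) cannot pin the pipeline forever.
public class PipelineTeardown {
    public static boolean shouldDestroy(boolean allContainersClosed,
            long closedAtMillis, long nowMillis, long timeoutMillis) {
        // Normal path: every container reached CLOSED.
        if (allContainersClosed) {
            return true;
        }
        // Fallback path: the timeout since the pipeline was closed expired.
        return nowMillis - closedAtMillis >= timeoutMillis;
    }
}
```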
[jira] [Created] (HDDS-397) Handle deletion for keys with no blocks
Lokesh Jain created HDDS-397: Summary: Handle deletion for keys with no blocks Key: HDDS-397 URL: https://issues.apache.org/jira/browse/HDDS-397 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Manager Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.2.1 Keys which do not contain blocks can be deleted directly from OzoneManager.
[jira] [Created] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager
Lokesh Jain created HDDS-561: Summary: Move Node2ContainerMap and Node2PipelineMap to NodeManager Key: HDDS-561 URL: https://issues.apache.org/jira/browse/HDDS-561 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain
[jira] [Created] (HDDS-959) KeyOutputStream should handle retry failures
Lokesh Jain created HDDS-959: Summary: KeyOutputStream should handle retry failures Key: HDDS-959 URL: https://issues.apache.org/jira/browse/HDDS-959 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain With the ratis version updated to 0.4.0-a8c4ca0-SNAPSHOT, retry failures are fatal for a raft client. If an operation in the raft client does not succeed after the maximum number of retries (RaftRetryFailureException), all subsequent operations fail with AlreadyClosedException. This jira aims to handle such exceptions. Since we maintain a cache of clients in XceiverClientManager, the corresponding client needs to be invalidated in the cache.
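The invalidate-on-retry-failure handling described above can be sketched with a toy cache; `ClientCache` and the string "client handle" are illustrative stand-ins for XceiverClientManager's cache of raft clients, not the real API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: once a cached client hits a fatal retry failure,
// it must be dropped from the cache so the next caller gets a fresh client
// instead of AlreadyClosedException on every subsequent call.
public class ClientCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final AtomicInteger generation = new AtomicInteger();

    // Returns the cached client handle for a pipeline, creating one if absent.
    public String acquire(String pipelineId) {
        return cache.computeIfAbsent(pipelineId,
            id -> "client-" + id + "-gen" + generation.incrementAndGet());
    }

    // Called when a request fails with a retry-failure exception: the broken
    // client is removed, so acquire() will build a replacement.
    public void invalidate(String pipelineId) {
        cache.remove(pipelineId);
    }
}
```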
[jira] [Created] (HDDS-982) Fix TestContainerDataYaml#testIncorrectContainerFile
Lokesh Jain created HDDS-982: Summary: Fix TestContainerDataYaml#testIncorrectContainerFile Key: HDDS-982 URL: https://issues.apache.org/jira/browse/HDDS-982 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Lokesh Jain Assignee: Lokesh Jain
[jira] [Created] (HDDS-981) Block allocation should involve pipeline selection and then container selection
Lokesh Jain created HDDS-981: Summary: Block allocation should involve pipeline selection and then container selection Key: HDDS-981 URL: https://issues.apache.org/jira/browse/HDDS-981 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Currently SCM maintains a list of preallocated containers and allocates blocks from these containers. This approach does not work well with the dynamics of the cluster, where new nodes are added and pipelines are destroyed. New containers are not created until all the preallocated containers are exhausted. This Jira aims to change block allocation so that first a pipeline is selected among the available pipelines and then a container is selected within that pipeline. To handle the dynamics of the cluster, a fixed-interval pipeline creator job can be launched which creates pipelines in the system.
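The pipeline-first ordering proposed above can be sketched as follows; `BlockAllocator` and its container naming are illustrative assumptions, not the actual SCM implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch: step 1 picks a pipeline among the available ones,
// step 2 picks (or creates) a container within that pipeline, so new
// pipelines participate in allocation immediately.
public class BlockAllocator {
    private final Map<String, List<String>> containersByPipeline = new HashMap<>();
    private final Random rand = new Random(42); // seeded for determinism here
    private int nextContainer = 1;

    public void addPipeline(String pipelineId) {
        containersByPipeline.putIfAbsent(pipelineId, new ArrayList<>());
    }

    // Returns "pipelineId/containerId" for the allocated block.
    public String allocateBlock() {
        List<String> pipelines = new ArrayList<>(containersByPipeline.keySet());
        if (pipelines.isEmpty()) {
            throw new IllegalStateException("no pipelines available");
        }
        // Step 1: select a pipeline.
        String pipeline = pipelines.get(rand.nextInt(pipelines.size()));
        // Step 2: select a container in it, creating one on demand.
        List<String> containers = containersByPipeline.get(pipeline);
        if (containers.isEmpty()) {
            containers.add("container-" + nextContainer++);
        }
        return pipeline + "/" + containers.get(0);
    }
}
```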
[jira] [Created] (HDDS-912) Update ozone to latest ratis snapshot build (0.4.0-3b0be02-SNAPSHOT)
Lokesh Jain created HDDS-912: Summary: Update ozone to latest ratis snapshot build (0.4.0-3b0be02-SNAPSHOT) Key: HDDS-912 URL: https://issues.apache.org/jira/browse/HDDS-912 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain We can update the ratis snapshot build in ozone to 0.4.0-3b0be02-SNAPSHOT.
[jira] [Resolved] (HDDS-871) XceiverClientRatis#watchForCommit closes raft client at every call
[ https://issues.apache.org/jira/browse/HDDS-871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-871. -- Resolution: Duplicate > XceiverClientRatis#watchForCommit closes raft client at every call > -- > > Key: HDDS-871 > URL: https://issues.apache.org/jira/browse/HDDS-871 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > > Currently XceiverClientRatis#watchForCommit closes raft client at every call. > We should cache the raft client and reuse it for future watchForCommit calls.
[jira] [Created] (HDDS-886) Unnecessary buffer copy in HddsDispatcher#dispatch
Lokesh Jain created HDDS-886: Summary: Unnecessary buffer copy in HddsDispatcher#dispatch Key: HDDS-886 URL: https://issues.apache.org/jira/browse/HDDS-886 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.4.0 In HddsDispatcher#dispatch the not-null precondition check converts the container command to a string object. This is done even for a write chunk command, which means we copy the chunk data into a string.
{code:java}
public ContainerCommandResponseProto dispatch(
    ContainerCommandRequestProto msg) {
  LOG.trace("Command {}, trace ID: {} ", msg.getCmdType().toString(),
      msg.getTraceID());
  Preconditions.checkNotNull(msg.toString());
{code}
The precondition needs to check only the msg.
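A minimal sketch of the fix, using a hypothetical stand-in for Guava's Preconditions: checking the reference itself performs no protobuf-to-string conversion, so chunk data is never copied on the null-check path.

```java
// Hypothetical sketch of the corrected check.
// Before: Preconditions.checkNotNull(msg.toString()) -- serializes the whole
// command, including chunk data, just to perform a null check.
// After: check only the reference.
public class DispatchCheck {
    public static <T> T checkNotNull(T msg) {
        if (msg == null) {
            throw new NullPointerException("msg == null");
        }
        return msg;
    }
}
```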
[jira] [Created] (HDDS-871) XceiverClientRatis#watchForCommit closes raft client at every call
Lokesh Jain created HDDS-871: Summary: XceiverClientRatis#watchForCommit closes raft client at every call Key: HDDS-871 URL: https://issues.apache.org/jira/browse/HDDS-871 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently XceiverClientRatis#watchForCommit closes raft client at every call. We should cache the raft client and reuse it for future watchForCommit calls.
[jira] [Created] (HDDS-869) Fix log message in XceiverClientRatis#sendCommandAsync
Lokesh Jain created HDDS-869: Summary: Fix log message in XceiverClientRatis#sendCommandAsync Key: HDDS-869 URL: https://issues.apache.org/jira/browse/HDDS-869 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain The log message in XceiverClientRatis#sendCommandAsync is wrong: we should not print the data in case of a write chunk request.
[jira] [Created] (HDDS-971) ContainerDataConstructor throws exception on QUASI_CLOSED and UNHEALTHY container state
Lokesh Jain created HDDS-971: Summary: ContainerDataConstructor throws exception on QUASI_CLOSED and UNHEALTHY container state Key: HDDS-971 URL: https://issues.apache.org/jira/browse/HDDS-971 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain We need to define the QUASI_CLOSED and UNHEALTHY states in the ContainerDataConstructor class logic. Currently the code uses a switch-case for determining the state. The logic can be replaced with a
{code:java}
ContainerProtos.ContainerDataProto.State.valueOf(state)
{code}
call. The Jira also fixes the test failure TestKeys#testPutAndGetKeyWithDnRestart.
[jira] [Created] (HDDS-464) Fix TestCloseContainerHandlingByClient
Lokesh Jain created HDDS-464: Summary: Fix TestCloseContainerHandlingByClient Key: HDDS-464 URL: https://issues.apache.org/jira/browse/HDDS-464 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and AssertionError respectively.
[jira] [Created] (HDDS-475) Block Allocation returns same BlockID on different keys creation
Lokesh Jain created HDDS-475: Summary: Block Allocation returns same BlockID on different keys creation Key: HDDS-475 URL: https://issues.apache.org/jira/browse/HDDS-475 Project: Hadoop Distributed Data Store Issue Type: Bug Affects Versions: 0.2.1 Reporter: Lokesh Jain Assignee: Lokesh Jain BlockManagerImpl#allocateBlock returns the same BlockID across calls. This leads to different key creations getting the same blockId.
[jira] [Created] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto
Lokesh Jain created HDDS-433: Summary: ContainerStateMachine#readStateMachineData should properly build LogEntryProto Key: HDDS-433 URL: https://issues.apache.org/jira/browse/HDDS-433 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Affects Versions: 0.2.1 Reporter: Lokesh Jain Assignee: Lokesh Jain ContainerStateMachine#readStateMachineData returns a LogEntryProto with index set to 0. This leads to an exception in Ratis. The LogEntryProto to return should be built over the input LogEntryProto.
[jira] [Created] (HDDS-480) RaftLog should make sure appendEntries term are incremental in nature
Lokesh Jain created HDDS-480: Summary: RaftLog should make sure appendEntries term are incremental in nature Key: HDDS-480 URL: https://issues.apache.org/jira/browse/HDDS-480 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently the raft log does not ensure that an appendEntry has a term greater than or equal to the term of the last applied entry in the log. This Jira aims to add that check.
[jira] [Reopened] (HDDS-1134) OzoneFileSystem#create should allocate at least one block for future writes.
[ https://issues.apache.org/jira/browse/HDDS-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain reopened HDDS-1134: --- Reopening issue as it was not fixed in HDDS-1300. > OzoneFileSystem#create should allocate at least one block for future writes. > --- > > Key: HDDS-1134 > URL: https://issues.apache.org/jira/browse/HDDS-1134 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Attachments: HDDS-1134.001.patch > > > While opening a new key, OM should at least allocate one block for the key; > this should be done in case the client is not sure about the number of blocks. > However for users of OzoneFS, if the key is being created for a directory, > then no blocks should be allocated.
[jira] [Created] (HDDS-1341) TestContainerReplication#testContainerReplication fails intermittently
Lokesh Jain created HDDS-1341: - Summary: TestContainerReplication#testContainerReplication fails intermittently Key: HDDS-1341 URL: https://issues.apache.org/jira/browse/HDDS-1341 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain The test fails intermittently. The link to the test report can be found below. https://builds.apache.org/job/PreCommit-HDDS-Build/2582/testReport/ {code:java} java.lang.AssertionError: Container is not replicated to the destination datanode at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertNotNull(Assert.java:621) at org.apache.hadoop.ozone.container.TestContainerReplication.testContainerReplication(TestContainerReplication.java:139) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code}
[jira] [Created] (HDDS-1343) TestNodeFailure times out intermittently
Lokesh Jain created HDDS-1343: - Summary: TestNodeFailure times out intermittently Key: HDDS-1343 URL: https://issues.apache.org/jira/browse/HDDS-1343 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain TestNodeFailure times out while waiting for cluster to be ready. This is done in cluster setup. {code:java} java.lang.Thread.State: WAITING (on object monitor) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:140) at org.apache.hadoop.hdds.scm.pipeline.TestNodeFailure.init(TestNodeFailure.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) {code} 5 datanodes out of 6 are able to heartbeat in the test result [https://builds.apache.org/job/PreCommit-HDDS-Build/2582/testReport/].
[jira] [Created] (HDDS-1342) TestOzoneManagerHA#testOMProxyProviderFailoverOnConnectionFailure fails intermittently
Lokesh Jain created HDDS-1342: - Summary: TestOzoneManagerHA#testOMProxyProviderFailoverOnConnectionFailure fails intermittently Key: HDDS-1342 URL: https://issues.apache.org/jira/browse/HDDS-1342 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain The test fails intermittently. The link to the test report can be found below. [https://builds.apache.org/job/PreCommit-HDDS-Build/2582/testReport/] {code:java} java.net.ConnectException: Call From ea902c1cb730/172.17.0.3 to localhost:10174 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515) at org.apache.hadoop.ipc.Client.call(Client.java:1457) at org.apache.hadoop.ipc.Client.call(Client.java:1367) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) at com.sun.proxy.$Proxy34.submitRequest(Unknown Source) at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy34.submitRequest(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66) at com.sun.proxy.$Proxy34.submitRequest(Unknown Source) at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:310) at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.createVolume(OzoneManagerProtocolClientSideTranslatorPB.java:343) at org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:275) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54) at com.sun.proxy.$Proxy86.createVolume(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66) at 
com.sun.proxy.$Proxy86.createVolume(Unknown Source) at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:100) at org.apache.hadoop.ozone.om.TestOzoneManagerHA.createVolumeTest(TestOzoneManagerHA.java:162) at org.apache.hadoop.ozone.om.TestOzoneManagerHA.testOMProxyProviderFailoverOnConnectionFailure(TestOzoneManagerHA.java:237) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at
[jira] [Created] (HDDS-1130) Make BenchMarkBlockManager multi-threaded
Lokesh Jain created HDDS-1130: - Summary: Make BenchMarkBlockManager multi-threaded Key: HDDS-1130 URL: https://issues.apache.org/jira/browse/HDDS-1130 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.4.0 Currently BenchMarkBlockManager is run by a single thread. We can make it multi-threaded in order to have a better understanding of allocateBlock call performance. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
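A minimal sketch of what a multi-threaded driver for an allocateBlock-style benchmark could look like (plain java.util.concurrent, not the actual Genesis/JMH code; allocateBlock here is a hypothetical no-op stand-in):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class MultiThreadedBenchmarkSketch {
    // Hypothetical stand-in for BlockManager#allocateBlock.
    static void allocateBlock() { /* simulated work */ }

    public static void main(String[] args) throws Exception {
        int threads = 4;
        int callsPerThread = 10_000;
        AtomicLong completed = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < callsPerThread; i++) {
                    allocateBlock();
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsedMs=" + elapsedMs);
        System.out.println("completed=" + completed.get());
    }
}
```

Running the same call from several threads exposes contention (e.g. on shared locks inside the block manager) that a single-threaded run cannot show.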
[jira] [Resolved] (HDDS-1140) TestSCMChillModeManager is failing with NullPointerException
[ https://issues.apache.org/jira/browse/HDDS-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-1140. --- Resolution: Duplicate > TestSCMChillModeManager is failing with NullPointerException > > > Key: HDDS-1140 > URL: https://issues.apache.org/jira/browse/HDDS-1140 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain >Priority: Major > > TestSCMChillModeManager is failing with the following exception > {code} > [ERROR] > testDisableChillMode(org.apache.hadoop.hdds.scm.chillmode.TestSCMChillModeManager) > Time elapsed: 0.012 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.utils.Scheduler.scheduleWithFixedDelay(Scheduler.java:78) > at > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils.scheduleFixedIntervalPipelineCreator(RatisPipelineUtils.java:211) > at > org.apache.hadoop.hdds.scm.chillmode.SCMChillModeManager.exitChillMode(SCMChillModeManager.java:137) > at > org.apache.hadoop.hdds.scm.chillmode.SCMChillModeManager.(SCMChillModeManager.java:93) > at > org.apache.hadoop.hdds.scm.chillmode.TestSCMChillModeManager.testDisableChillMode(TestSCMChillModeManager.java:134) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code}
[jira] [Created] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
Lokesh Jain created HDDS-1208: - Summary: ContainerStateMachine should set chunk data as state machine data for ratis Key: HDDS-1208 URL: https://issues.apache.org/jira/browse/HDDS-1208 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently ContainerStateMachine sets the ContainerCommandRequestProto as the state machine data. This requires converting the ContainerCommandRequestProto to a ByteString, which leads to a redundant buffer copy for write chunk requests. This can be avoided by setting the chunk data itself as the state machine data for a log entry in Ratis.
[jira] [Created] (HDDS-1171) Add benchmark for OM and OM client in Genesis
Lokesh Jain created HDDS-1171: - Summary: Add benchmark for OM and OM client in Genesis Key: HDDS-1171 URL: https://issues.apache.org/jira/browse/HDDS-1171 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.4.0 This Jira aims to add benchmarks for the OM and the OM client in Genesis.
[jira] [Created] (HDDS-1221) Introduce fine grained lock in Ozone Manager for key operations
Lokesh Jain created HDDS-1221: - Summary: Introduce fine grained lock in Ozone Manager for key operations Key: HDDS-1221 URL: https://issues.apache.org/jira/browse/HDDS-1221 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently the Ozone Manager acquires the bucket lock for key operations. We can introduce fine-grained, per-key locking in the Ozone Manager instead. This would increase throughput for concurrent key operations within a bucket.
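One common way to implement fine-grained key locking is lock striping; a hedged sketch follows (KeyLockManager and its methods are hypothetical names, not the actual OM classes):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch of fine-grained (per-key) locking via lock striping.
public class KeyLockManager {
    private final ReentrantReadWriteLock[] stripes;

    public KeyLockManager(int stripeCount) {
        stripes = new ReentrantReadWriteLock[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new ReentrantReadWriteLock();
        }
    }

    private ReentrantReadWriteLock stripeFor(String key) {
        // Spread keys over stripes; two different keys usually map to
        // different locks, so operations on them do not serialize.
        return stripes[Math.floorMod(key.hashCode(), stripes.length)];
    }

    public void withWriteLock(String key, Runnable op) {
        ReentrantReadWriteLock lock = stripeFor(key);
        lock.writeLock().lock();
        try {
            op.run();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        KeyLockManager mgr = new KeyLockManager(16);
        StringBuilder out = new StringBuilder();
        mgr.withWriteLock("/vol1/bucket1/key1", () -> out.append("locked"));
        System.out.println(out);
    }
}
```

Striping bounds memory (a fixed lock array rather than one lock per key) at the cost of occasional hash collisions between unrelated keys.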
[jira] [Created] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock
Lokesh Jain created HDDS-1220: - Summary: KeyManager#openKey should release the bucket lock before doing an allocateBlock Key: HDDS-1220 URL: https://issues.apache.org/jira/browse/HDDS-1220 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently KeyManager#openKey makes an allocateBlock call without releasing the bucket lock. Since allocateBlock requires an RPC connection to SCM, the handler thread in OM would hold the bucket lock until the RPC completes. Since the allocateBlock call does not require the bucket lock to be held, it can be made after releasing the bucket lock.
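The proposed flow can be sketched as follows (a minimal illustration under the assumption that only the local metadata work needs the bucket lock; the class and helper names are hypothetical, not the actual OM code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch: do local metadata work under the bucket lock, release
// it, then make the remote allocateBlock call lock-free.
public class OpenKeySketch {
    private final ReentrantLock bucketLock = new ReentrantLock();

    String openKey(String key) {
        bucketLock.lock();
        try {
            // Local-only work (validation, metadata lookup) that
            // genuinely needs the bucket lock.
            validate(key);
        } finally {
            // Release before the remote call so other key operations on
            // the bucket are not blocked for the duration of the RPC.
            bucketLock.unlock();
        }
        return allocateBlock(key); // RPC to SCM happens without the lock
    }

    void validate(String key) { }

    // Stand-in for the SCM RPC.
    String allocateBlock(String key) { return "block-for-" + key; }

    public static void main(String[] args) {
        System.out.println(new OpenKeySketch().openKey("k1"));
    }
}
```

The design trade-off is the usual one for shrinking a critical section: any state read under the lock may change before the RPC returns, so the post-lock step must not depend on it staying fixed.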
[jira] [Resolved] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently
[ https://issues.apache.org/jira/browse/HDDS-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-1248. --- Resolution: Duplicate > TestSecureOzoneRpcClient fails intermittently > - > > Key: HDDS-1248 > URL: https://issues.apache.org/jira/browse/HDDS-1248 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > > > TestSecureOzoneRpcClient fails intermittently with the following exception. > {code:java} > java.io.IOException: Unexpected Storage Container Exception: > java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238) > at > org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271) > at > org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > Caused by: java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:543) > at >
[jira] [Created] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently
Lokesh Jain created HDDS-1248: - Summary: TestSecureOzoneRpcClient fails intermittently Key: HDDS-1248 URL: https://issues.apache.org/jira/browse/HDDS-1248 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Fix For: 0.4.0 TestSecureOzoneRpcClient fails intermittently with the following exception. {code:java} java.io.IOException: Unexpected Storage Container Exception: java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Block token verification failed. Fail to find any token (empty or null. at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338) at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238) at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131) at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310) at org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271) at org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49) at org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188) at org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131) at org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Block token verification failed. Fail to find any token (empty or null. 
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:543) at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:333) ... 35 more Caused by: java.util.concurrent.CompletionException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Block token verification failed. Fail to find any token (empty or null. at
[jira] [Reopened] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently
[ https://issues.apache.org/jira/browse/HDDS-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain reopened HDDS-1248: --- The test calls BlockTokenIdentifier#setTestStub(true) in TestSecureOzoneRpcClient#testKeyOpFailureWithoutBlockToken. Since testStub is true all the concurrently running tests fail with Block token verification failed exception. > TestSecureOzoneRpcClient fails intermittently > - > > Key: HDDS-1248 > URL: https://issues.apache.org/jira/browse/HDDS-1248 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > > > TestSecureOzoneRpcClient fails intermittently with the following exception. > {code:java} > java.io.IOException: Unexpected Storage Container Exception: > java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. 
> at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238) > at > org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271) > at > org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > Caused by: java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at >
[jira] [Created] (HDDS-1301) Optimize recursive ozone filesystem apis
Lokesh Jain created HDDS-1301: - Summary: Optimize recursive ozone filesystem apis Key: HDDS-1301 URL: https://issues.apache.org/jira/browse/HDDS-1301 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to optimize the recursive APIs in the Ozone file system, i.e. the APIs with a recursive flag that requires an operation to be performed on all children of a directory. The Jira would add support for recursive operations in the Ozone Manager in order to reduce the number of RPC calls made to it.
[jira] [Created] (HDDS-1300) Optimize non-recursive ozone filesystem apis
Lokesh Jain created HDDS-1300: - Summary: Optimize non-recursive ozone filesystem apis Key: HDDS-1300 URL: https://issues.apache.org/jira/browse/HDDS-1300 Project: Hadoop Distributed Data Store Issue Type: Sub-task Components: Ozone Filesystem, Ozone Manager Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to optimize the non-recursive APIs in the Ozone file system. The Jira would add support for such APIs in the Ozone Manager in order to reduce the number of RPC calls made to it.
[jira] [Created] (HDDS-1106) Introduce queryMap in PipelineManager
Lokesh Jain created HDDS-1106: - Summary: Introduce queryMap in PipelineManager Key: HDDS-1106 URL: https://issues.apache.org/jira/browse/HDDS-1106 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.4.0 In the Genesis benchmark for block allocation it was found that the BlockManager#allocateBlock call was very slow for a higher number of pipelines. This happens because allocateBlock fetches the list of pipelines with a particular replication type, replication factor and state, and that list is computed by traversing all the entries of a map. This Jira aims to optimize the call by introducing a query map in PipelineManager, which would maintain a list of pipelines for every query type, i.e. for every replication type and replication factor.
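The query-map idea can be sketched as follows (a hedged illustration; PipelineQueryMap and the enum/record names are hypothetical stand-ins for the SCM classes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch: keep pipelines pre-bucketed by (type, factor, state)
// so a lookup is a single map get instead of a full scan.
public class PipelineQueryMap {
    enum ReplicationType { RATIS, STAND_ALONE }
    enum ReplicationFactor { ONE, THREE }
    enum State { ALLOCATED, OPEN, CLOSED }

    // A record gives value-based equals/hashCode, so it works as a
    // composite HashMap key out of the box.
    record Query(ReplicationType type, ReplicationFactor factor, State state) { }

    private final Map<Query, List<String>> queryMap = new HashMap<>();

    void addPipeline(String id, ReplicationType t, ReplicationFactor f, State s) {
        queryMap.computeIfAbsent(new Query(t, f, s), k -> new ArrayList<>()).add(id);
    }

    List<String> getPipelines(ReplicationType t, ReplicationFactor f, State s) {
        return queryMap.getOrDefault(new Query(t, f, s), List.of());
    }

    public static void main(String[] args) {
        PipelineQueryMap m = new PipelineQueryMap();
        m.addPipeline("p1", ReplicationType.RATIS, ReplicationFactor.THREE, State.OPEN);
        m.addPipeline("p2", ReplicationType.RATIS, ReplicationFactor.ONE, State.OPEN);
        System.out.println(m.getPipelines(ReplicationType.RATIS, ReplicationFactor.THREE, State.OPEN));
    }
}
```

The cost moves to the write path: a real implementation must also move a pipeline between buckets when its state changes, which the sketch omits.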
[jira] [Created] (HDDS-1097) Add genesis benchmark for BlockManager#allocateBlock
Lokesh Jain created HDDS-1097: - Summary: Add genesis benchmark for BlockManager#allocateBlock Key: HDDS-1097 URL: https://issues.apache.org/jira/browse/HDDS-1097 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.4.0 This Jira aims to add a genesis benchmark test for BlockManager#allocateBlock.
[jira] [Created] (HDDS-1081) Closing state containers should not be added to pipeline on SCM restart
Lokesh Jain created HDDS-1081: - Summary: Closing state containers should not be added to pipeline on SCM restart Key: HDDS-1081 URL: https://issues.apache.org/jira/browse/HDDS-1081 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Assignee: Lokesh Jain Fix For: 0.4.0 Currently, while loading containers from the db during SCM start, containers in CLOSING state are added to their pipeline. This can lead to errors, as the pipeline may already have been removed from the SCM db in such cases.
[jira] [Created] (HDDS-1028) Improve logging in SCMPipelineManager
Lokesh Jain created HDDS-1028: - Summary: Improve logging in SCMPipelineManager Key: HDDS-1028 URL: https://issues.apache.org/jira/browse/HDDS-1028 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Lokesh Jain Fix For: 0.4.0 Currently SCMPipelineManager does not log events like pipeline creation and deletion. It would be a good idea to log such events.
[jira] [Created] (HDDS-1004) SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and FORCE_CLOSE events
Lokesh Jain created HDDS-1004: - Summary: SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and FORCE_CLOSE events Key: HDDS-1004 URL: https://issues.apache.org/jira/browse/HDDS-1004 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain SCMContainerManager#updateContainerStateInternal currently fails for QUASI_CLOSE and FORCE_CLOSE events.
[jira] [Created] (HDDS-1047) Fix TestRatisPipelineProvider#testCreatePipelineWithFactor
Lokesh Jain created HDDS-1047: - Summary: Fix TestRatisPipelineProvider#testCreatePipelineWithFactor Key: HDDS-1047 URL: https://issues.apache.org/jira/browse/HDDS-1047 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain
[jira] [Resolved] (HDDS-1045) Fix TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked
[ https://issues.apache.org/jira/browse/HDDS-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-1045. --- Resolution: Duplicate > Fix TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked > > > Key: HDDS-1045 > URL: https://issues.apache.org/jira/browse/HDDS-1045 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > > TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked is > failing with stack overflow error.
[jira] [Created] (HDDS-1045) Fix TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked
Lokesh Jain created HDDS-1045: - Summary: Fix TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked Key: HDDS-1045 URL: https://issues.apache.org/jira/browse/HDDS-1045 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Fix For: 0.4.0 TestCloseContainerByPipeline#testIfCloseContainerCommandHandlerIsInvoked is failing with stack overflow error.
[jira] [Created] (HDDS-1405) ITestOzoneContractCreate is failing
Lokesh Jain created HDDS-1405: - Summary: ITestOzoneContractCreate is failing Key: HDDS-1405 URL: https://issues.apache.org/jira/browse/HDDS-1405 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain ITestOzoneContractCreate and ITestOzoneContractMkdir are failing with FileAlreadyExistsException. The issue is an incorrect import in BasicOzoneClientAdapterImpl: the class needs to import org.apache.hadoop.fs.FileAlreadyExistsException but currently imports java.nio.file.FileAlreadyExistsException.
[jira] [Created] (HDDS-1626) Optimize allocateBlock for cases when excludeList is provided
Lokesh Jain created HDDS-1626: - Summary: Optimize allocateBlock for cases when excludeList is provided Key: HDDS-1626 URL: https://issues.apache.org/jira/browse/HDDS-1626 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to optimize allocateBlock for cases when an excludeList is provided. This covers both the case where the excludeList is empty and the cases where it is not.
[jira] [Resolved] (HDFS-12735) Make ContainerStateMachine#applyTransaction async
[ https://issues.apache.org/jira/browse/HDFS-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDFS-12735. Resolution: Duplicate > Make ContainerStateMachine#applyTransaction async > - > > Key: HDFS-12735 > URL: https://issues.apache.org/jira/browse/HDFS-12735 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Labels: performance > Attachments: HDFS-12735-HDFS-7240.000.patch, > HDFS-12735-HDFS-7240.001.patch, HDFS-12735-HDFS-7240.002.patch > > > Currently ContainerStateMachine#applyTransaction makes a synchronous call to > dispatch client requests. Idea is to have a thread pool which dispatches > client requests and returns a CompletableFuture.
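The thread-pool-plus-CompletableFuture dispatch described in HDFS-12735 can be sketched as follows (a minimal illustration in plain Java; the class and method names are stand-ins, not the actual ContainerStateMachine code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hedged sketch: dispatch requests on a dedicated pool and return a
// CompletableFuture instead of blocking the state-machine thread.
public class AsyncApplySketch {
    private final ExecutorService dispatcher = Executors.newFixedThreadPool(4);

    CompletableFuture<String> applyTransaction(String request) {
        // supplyAsync with an explicit executor keeps the caller thread free.
        return CompletableFuture.supplyAsync(() -> dispatch(request), dispatcher);
    }

    // Stand-in for the synchronous request dispatch.
    String dispatch(String request) { return "applied:" + request; }

    public static void main(String[] args) throws Exception {
        AsyncApplySketch sm = new AsyncApplySketch();
        System.out.println(sm.applyTransaction("writeChunk").get());
        sm.dispatcher.shutdown();
    }
}
```

A real state machine additionally has to preserve per-container ordering of applied transactions, which a bare shared pool does not guarantee.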
[jira] [Created] (HDDS-1503) Reduce garbage generated by non-netty threads in datanode ratis server
Lokesh Jain created HDDS-1503: - Summary: Reduce garbage generated by non-netty threads in datanode ratis server Key: HDDS-1503 URL: https://issues.apache.org/jira/browse/HDDS-1503 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Lokesh Jain Assignee: Lokesh Jain Ratis uses the gRPC protocol for RPC communication. By default, Netty creates thread-local buffer caches even for non-Netty threads. This Jira aims to add a default JVM parameter that disables thread caches for non-Netty threads in the datanode Ratis server.
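A sketch of what such a default might look like. `io.netty.allocator.useCacheForAllThreads` is Netty's standard allocator switch; note that the gRPC bundled with Ratis uses a shaded Netty, so the effective property prefix may differ (e.g. a relocated package name), and the environment variable used to pass it here is an assumption:

```shell
# Assumed placement: appended to the datanode's JVM options. With an
# unshaded Netty the property is io.netty.allocator.useCacheForAllThreads;
# a shaded Netty would need the relocated prefix instead.
export HDFS_DATANODE_OPTS="$HDFS_DATANODE_OPTS -Dio.netty.allocator.useCacheForAllThreads=false"
```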
[jira] [Created] (HDDS-1481) Cleanup BasicOzoneFileSystem#mkdir
Lokesh Jain created HDDS-1481: - Summary: Cleanup BasicOzoneFileSystem#mkdir Key: HDDS-1481 URL: https://issues.apache.org/jira/browse/HDDS-1481 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Filesystem Reporter: Lokesh Jain Assignee: Lokesh Jain Currently BasicOzoneFileSystem#mkdir does not have the optimizations made in HDDS-1300. The changes for this function were missed in HDDS-1460.
[jira] [Created] (HDDS-1766) ContainerStateMachine is unable to increment lastAppliedIndex
Lokesh Jain created HDDS-1766: - Summary: ContainerStateMachine is unable to increment lastAppliedIndex Key: HDDS-1766 URL: https://issues.apache.org/jira/browse/HDDS-1766 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Lokesh Jain ContainerStateMachine#updateLastApplied currently updates the lastAppliedTermIndex using applyTransactionCompletionMap. Null entries in applyTransactionCompletionMap prevent the lastAppliedIndex from being incremented.
[jira] [Created] (HDDS-1767) ContainerStateMachine should have its own executors for executing applyTransaction calls
Lokesh Jain created HDDS-1767: - Summary: ContainerStateMachine should have its own executors for executing applyTransaction calls Key: HDDS-1767 URL: https://issues.apache.org/jira/browse/HDDS-1767 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently ContainerStateMachine uses the executors provided by XceiverServerRatis for executing applyTransaction calls. This results in two or more ContainerStateMachines sharing the same set of executors, so delay or load in one ContainerStateMachine would adversely affect the performance of the other state machines. It is better to have a separate set of executors for each ContainerStateMachine.
[jira] [Created] (HDDS-1461) Optimize listStatus api in OzoneFileStatus
Lokesh Jain created HDDS-1461: - Summary: Optimize listStatus api in OzoneFileStatus Key: HDDS-1461 URL: https://issues.apache.org/jira/browse/HDDS-1461 Project: Hadoop Distributed Data Store Issue Type: Sub-task Components: Ozone Filesystem, Ozone Manager Reporter: Lokesh Jain Assignee: Lokesh Jain Currently listStatus makes multiple getFileStatus calls; these can be converted into a single RPC call for listStatus. In addition, listStatus currently has to traverse a directory recursively in order to list its immediate children, because OzoneManager stores all metadata in RocksDB sorted by key name. This Jira also aims to fix that by using the seek API provided by RocksDB.
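The seek trick: because keys are stored in sorted order, after emitting a subdirectory you can jump the iterator past that subdirectory's entire subtree instead of scanning every descendant. A self-contained sketch using a TreeMap as a stand-in for RocksDB's sorted key space (RocksDB itself exposes this via its iterator's seek; the method names below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Sketch of listing immediate children of a directory over a flat,
// lexicographically sorted key space, using seek-style jumps.
public class ListChildren {
    static List<String> immediateChildren(TreeMap<String, String> keys, String dir) {
        List<String> children = new ArrayList<>();
        String cursor = dir;
        while (true) {
            // "Seek": first key strictly after the cursor.
            String key = keys.higherKey(cursor);
            if (key == null || !key.startsWith(dir)) {
                break; // left the directory's prefix range
            }
            String rest = key.substring(dir.length());
            int slash = rest.indexOf('/');
            if (slash < 0) {
                children.add(key);           // a file directly under dir
                cursor = key;
            } else {
                String child = key.substring(0, dir.length() + slash + 1);
                children.add(child);         // a subdirectory
                // Skip its whole subtree by seeking past the child prefix.
                cursor = child + "\uFFFF";
            }
        }
        return children;
    }

    public static void main(String[] args) {
        TreeMap<String, String> keys = new TreeMap<>();
        keys.put("a/b/c", "");
        keys.put("a/b/d", "");
        keys.put("a/e", "");
        keys.put("f", "");
        System.out.println(immediateChildren(keys, "a/")); // [a/b/, a/e]
    }
}
```

The recursive traversal the issue describes would visit `a/b/c` and `a/b/d`; the seek version skips straight from `a/b/` to `a/e`.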