[jira] [Commented] (HDDS-3001) NFS support for Ozone
[ https://issues.apache.org/jira/browse/HDDS-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114504#comment-17114504 ] Prashant Pogde commented on HDDS-3001: -- [~maobaolong] Yes I will access Ozone through OzoneFileSystem. > NFS support for Ozone > - > > Key: HDDS-3001 > URL: https://issues.apache.org/jira/browse/HDDS-3001 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Filesystem >Affects Versions: 0.5.0 >Reporter: Prashant Pogde >Assignee: Prashant Pogde >Priority: Major > Labels: pull-request-available > Attachments: NFS Support for Ozone.pdf > > > Provide NFS support for Ozone -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-3001) NFS support for Ozone
[ https://issues.apache.org/jira/browse/HDDS-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114492#comment-17114492 ] maobaolong commented on HDDS-3001: -- [~ppogde] It's great. You mean you plan to access Ozone through OzoneFileSystem or BasicOzoneFileSystem, is that right?
[jira] [Updated] (HDDS-3644) Failed to delete chunk file due to chunk size mismatch
[ https://issues.apache.org/jira/browse/HDDS-3644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Doroszlai updated HDDS-3644: --- Description: LOGs {noformat} 2020-05-19 13:45:30,493 [BlockDeletingService#8] WARN org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy: Chunk file doe not exist. chunk info :ChunkInfo{chunkName='104079328540607246_chunk_1, offset=0, len=4194304} 2020-05-19 13:45:30,493 [BlockDeletingService#8] ERROR org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy: Not Supported Operation. Trying to delete a chunk that is in shared file. chunk info : ChunkInfo{chunkName='104079328540607246_chunk_2, offset=4194304, len=1048576} 2020-05-19 13:45:30,494 [BlockDeletingService#8] ERROR org.apache.hadoop.ozone.container.keyvalue.statemachine.background.BlockDeletingService: Failed to delete files for block #deleting#104079328540607246 org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Not Supported Operation. Trying to delete a chunk that is in shared file. chunk info : ChunkInfo{chunkName='104079328540607246_chunk_2, offset=4194304, len=1048576} at org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy.deleteChunks(FilePerChunkStrategy.java:286) at org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerDispatcher.deleteChunks(ChunkManagerDispatcher.java:111) at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.deleteBlock(KeyValueHandler.java:1043) at org.apache.hadoop.ozone.container.keyvalue.statemachine.background.BlockDeletingService$BlockDeletingTask.lambda$call$0(BlockDeletingService.java:286) Caused by: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Not Supported Operation. Trying to delete a chunk that is in shared file. 
chunk info : ChunkInfo{chunkName='104079328540607246_chunk_2, offset=4194304, len=1048576} {noformat} chunk_1 is 4MB and chunk_2 is 1MB in block info. chunk_1 doesn't exist (it might have been deleted successfully) and chunk_2 is 5MB on disk. > Failed to delete chunk file due to chunk size mismatch > -- > > Key: HDDS-3644 > URL: https://issues.apache.org/jira/browse/HDDS-3644 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Sammi Chen >Priority: Major
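The mismatch above can be illustrated with a small standalone sketch; `ChunkDeleteSketch`, `canDeleteAsFile`, and the simplified `ChunkInfo` below are hypothetical names, not the real FilePerChunkStrategy code. In a file-per-chunk layout, a chunk recorded at offset 0 maps to its own file and can be removed by deleting that file; a non-zero offset implies the bytes sit inside a file shared with other chunks, which the delete path refuses, matching the "Not Supported Operation" error logged for chunk_2.

```java
// Hypothetical simplification of the per-chunk delete decision described in
// the log above; names are illustrative, not the actual Ozone API.
public final class ChunkDeleteSketch {

  /** Minimal stand-in for Ozone's ChunkInfo (name, offset, length). */
  static final class ChunkInfo {
    final String name;
    final long offset;
    final long len;
    ChunkInfo(String name, long offset, long len) {
      this.name = name;
      this.offset = offset;
      this.len = len;
    }
  }

  /**
   * A chunk at offset 0 occupies its own file and may be deleted by removing
   * that file. A non-zero offset means the bytes sit inside a shared file,
   * so deleting the file would destroy other chunks and is refused.
   */
  static boolean canDeleteAsFile(ChunkInfo chunk) {
    return chunk.offset == 0;
  }

  public static void main(String[] args) {
    ChunkInfo chunk1 = new ChunkInfo("..._chunk_1", 0, 4L * 1024 * 1024);
    ChunkInfo chunk2 = new ChunkInfo("..._chunk_2", 4L * 1024 * 1024, 1024 * 1024);
    System.out.println("chunk_1 deletable: " + canDeleteAsFile(chunk1)); // true
    System.out.println("chunk_2 deletable: " + canDeleteAsFile(chunk2)); // false
  }
}
```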
[GitHub] [hadoop-ozone] prashantpogde opened a new pull request #956: Hdds 2720
prashantpogde opened a new pull request #956: URL: https://github.com/apache/hadoop-ozone/pull/956 ## What changes were proposed in this pull request? Adds a Fault Injection Service under the tools/FaultInjectionService directory. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-2720 ## How was this patch tested? This is an independent tool which will be used to do fault injection testing for Ozone. Its build/test process is independent of Ozone and should not impact Ozone in any way. For using this tool, please follow the README in tools/FaultInjectionService. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[GitHub] [hadoop-ozone] captainzmc edited a comment on pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
captainzmc edited a comment on pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#issuecomment-632563724 Hi @bharatviswa504, Could you please help to review this PR? Let's see if there is any effect on OM HA.
[GitHub] [hadoop-ozone] elek commented on a change in pull request #956: HDDS-2720. Ozone Failure injection Service
elek commented on a change in pull request #956: URL: https://github.com/apache/hadoop-ozone/pull/956#discussion_r429106622 ## File path: tools/FaultInjectionService/AUTHORS ## @@ -0,0 +1,12 @@ +Current Maintainer Review comment: I am not sure if we need this file (but I can be convinced). Until now we didn't follow this pattern. There are no maintainers of SCM / Freon / OM. The community itself is the maintainer, but we can check the git history for a first contact. This file also requires additional maintenance. I would suggest removing it. ## File path: tools/FaultInjectionService/README.md ## @@ -0,0 +1,53 @@ +NoiseInjector +== + +About +-- +TBD Review comment: Just one sentence please ## File path: tools/FaultInjectionService/README.md ## @@ -0,0 +1,53 @@ +NoiseInjector +== + +About +-- +TBD + +Development Status +-- +TBD Review comment: One word, please ;-)
[GitHub] [hadoop-ozone] captainzmc commented on pull request #940: HDDS-3614. Remove S3Table from OmMetadataManager.
captainzmc commented on pull request #940: URL: https://github.com/apache/hadoop-ozone/pull/940#issuecomment-632582077 LGTM +1. The S3Table is no longer useful after HDDS-3385, and the original mapping relationship should not be needed.
[GitHub] [hadoop-ozone] captainzmc edited a comment on pull request #940: HDDS-3614. Remove S3Table from OmMetadataManager.
captainzmc edited a comment on pull request #940: URL: https://github.com/apache/hadoop-ozone/pull/940#issuecomment-632582077 LGTM +1. The S3Table is no longer useful after HDDS-3385, and the original mapping relationship will not be needed.
[GitHub] [hadoop-ozone] maobaolong commented on a change in pull request #934: HDDS-3605. Support close all pipelines.
maobaolong commented on a change in pull request #934: URL: https://github.com/apache/hadoop-ozone/pull/934#discussion_r429128650 ## File path: hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ClosePipelineSubcommand.java ## @@ -38,14 +40,59 @@ @CommandLine.ParentCommand private PipelineCommands parent; - @CommandLine.Parameters(description = "ID of the pipeline to close") + @CommandLine.Parameters(description = "ID of the pipeline to close," + + "'ALL' means all pipeline") private String pipelineId; + @CommandLine.Option(names = {"-ffc", "--filterByFactor"}, Review comment: I've got some help from @timmylicheng
[GitHub] [hadoop-ozone] maobaolong commented on a change in pull request #934: HDDS-3605. Support close all pipelines.
maobaolong commented on a change in pull request #934: URL: https://github.com/apache/hadoop-ozone/pull/934#discussion_r429128112 ## File path: hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ClosePipelineSubcommand.java ## @@ -38,14 +40,59 @@ @CommandLine.ParentCommand private PipelineCommands parent; - @CommandLine.Parameters(description = "ID of the pipeline to close") + @CommandLine.Parameters(description = "ID of the pipeline to close," + + "'ALL' means all pipeline") private String pipelineId; + @CommandLine.Option(names = {"-ffc", "--filterByFactor"}, + description = "Filter listed pipelines by Factor(ONE/one)", + defaultValue = "", + required = false) + private String factor; + + @CommandLine.Option(names = {"-fst", "--filterByState"}, + description = "Filter listed pipelines by State(OPEN/CLOSE)", + defaultValue = "", + required = false) + private String state; + @Override public Void call() throws Exception { try (ScmClient scmClient = parent.getParent().createScmClient()) { - scmClient.closePipeline( - HddsProtos.PipelineID.newBuilder().setId(pipelineId).build()); + if (pipelineId.equalsIgnoreCase("ALL")) { +if (Strings.isNullOrEmpty(factor) && Strings.isNullOrEmpty(state)) { + scmClient.listPipelines().forEach(pipeline -> { +try { + scmClient.closePipeline( + HddsProtos.PipelineID.newBuilder() + .setId(pipeline.getId().getId().toString()).build()); +} catch (IOException e) { + throw new IllegalStateException( + "met a exception while closePipeline", e); +} + }); +} else { + scmClient.listPipelines().stream() + .filter(p -> ((Strings.isNullOrEmpty(factor) || Review comment: Yeah, thank you for pointing it out; the first condition is redundant, I will remove it.
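The factor/state filtering discussed in this review can be sketched in isolation. The `Pipeline` class and `selectIds` helper below are illustrative stand-ins for the SCM client types; the point is that an empty filter string matches everything, so the extra "is the filter empty" re-check noted as redundant in the review can simply be dropped.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the factor/state pipeline filtering above.
// Pipeline, factor, and state are simplified stand-ins for the SCM types.
public final class PipelineFilterSketch {

  static final class Pipeline {
    final String id;
    final String factor; // e.g. "ONE", "THREE"
    final String state;  // e.g. "OPEN", "CLOSED"
    Pipeline(String id, String factor, String state) {
      this.id = id;
      this.factor = factor;
      this.state = state;
    }
  }

  /**
   * Select pipeline IDs matching the optional factor/state filters.
   * An empty filter string matches everything, so no separate
   * "no filters given" branch is needed.
   */
  static List<String> selectIds(List<Pipeline> pipelines, String factor, String state) {
    return pipelines.stream()
        .filter(p -> factor.isEmpty() || p.factor.equalsIgnoreCase(factor))
        .filter(p -> state.isEmpty() || p.state.equalsIgnoreCase(state))
        .map(p -> p.id)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Pipeline> all = Arrays.asList(
        new Pipeline("p1", "ONE", "OPEN"),
        new Pipeline("p2", "THREE", "OPEN"),
        new Pipeline("p3", "ONE", "CLOSED"));
    System.out.println(selectIds(all, "one", ""));  // [p1, p3]
    System.out.println(selectIds(all, "", "OPEN")); // [p1, p2]
    System.out.println(selectIds(all, "", ""));     // [p1, p2, p3]
  }
}
```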
[jira] [Commented] (HDDS-3630) Merge rocksdb into one in datanode
[ https://issues.apache.org/jira/browse/HDDS-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113815#comment-17113815 ] runzhiwang commented on HDDS-3630: -- [~msingh] Thanks for the reminder, I will create a Google doc to share the design. > Merge rocksdb into one in datanode > -- > > Key: HDDS-3630 > URL: https://issues.apache.org/jira/browse/HDDS-3630 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: runzhiwang >Assignee: runzhiwang >Priority: Major > > Currently, there is one RocksDB per container, and one container has 5GB capacity. 10TB of data needs more than 2000 RocksDB instances on one datanode, and it's difficult to limit the memory of 2000 RocksDB instances. So maybe we should use only one RocksDB for all containers.
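As a back-of-the-envelope check of the container count quoted in the issue (assuming 5 GB containers and one RocksDB instance per container; `containersFor` is just an illustrative helper):

```java
// Quick arithmetic check: how many 5 GB containers (and hence RocksDB
// instances) does 10 TB of data imply on a single datanode?
public final class ContainerCountSketch {

  /** Containers needed for dataTiB tebibytes at containerGiB gibibytes each. */
  static long containersFor(long dataTiB, long containerGiB) {
    return dataTiB * 1024 / containerGiB;
  }

  public static void main(String[] args) {
    // 10 TB / 5 GB per container = 2048, the "more than 2000" in the issue.
    System.out.println(containersFor(10, 5)); // 2048
  }
}
```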
[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
ChenSammi commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429091904 ## File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto ## @@ -830,7 +832,7 @@ message LookupKeyResponse { message RenameKeyRequest{ required KeyArgs keyArgs = 1; Review comment: Agree. Use a list of KeyArgs instead of adding a new keyNameList field.
[jira] [Commented] (HDDS-3385) Simplify S3 -> Ozone volume mapping
[ https://issues.apache.org/jira/browse/HDDS-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113843#comment-17113843 ] Marton Elek commented on HDDS-3385: --- [~avijayan] What is the difference between the incompatible and ozone-incompatible labels, and which one should be used? > Simplify S3 -> Ozone volume mapping > --- > > Key: HDDS-3385 > URL: https://issues.apache.org/jira/browse/HDDS-3385 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Attila Doroszlai >Priority: Critical > Labels: imcompatible, ozone-incompatible, pull-request-available > Fix For: 0.6.0 > > > See the design doc for more details: > https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/docs/content/design/ozone-volume-management.md
[GitHub] [hadoop-ozone] captainzmc commented on pull request #957: HDDS-3477. Disable partial chunk write during flush() call in ozone client by default.
captainzmc commented on pull request #957: URL: https://github.com/apache/hadoop-ozone/pull/957#issuecomment-632574456 Hi @bshashikant, Could you please help to review this PR?
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
timmylicheng commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429095553 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java ## @@ -248,6 +253,34 @@ private void testDeleteCreatesFakeParentDir() throws Exception { assertEquals(parentKey, parentKeyInfo.getName()); } Review comment: It would be great if we added a failure case here, like an unknown key in a list of known keys. We would also like to test exceptions.
[GitHub] [hadoop-ozone] captainzmc commented on pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
captainzmc commented on pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#issuecomment-632563724 Hi @bharatviswa504, Could you please help me review this PR? Let's see if there is any effect on OM HA.
[jira] [Updated] (HDDS-3477) Disable partial chunk write during flush() call in ozone client by default
[ https://issues.apache.org/jira/browse/HDDS-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-3477: - Labels: pull-request-available (was: ) > Disable partial chunk write during flush() call in ozone client by default > -- > > Key: HDDS-3477 > URL: https://issues.apache.org/jira/browse/HDDS-3477 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Reporter: Shashikant Banerjee >Assignee: mingchao zhao >Priority: Major > Labels: pull-request-available > Fix For: 0.6.0 > > > Currently, Ozone client flushes the partial chunks as well during flush() > call by default. > [https://github.com/apache/hadoop-ozone/pull/716] proposes to add a > configuration to disallow partial chunk flush during flush() call. This Jira > aims to enable the config on by default to mimic the default hdfs flush() > behaviour and fix any failing unit tests associated with the change.
[GitHub] [hadoop-ozone] captainzmc opened a new pull request #957: HDDS-3477. Disable partial chunk write during flush() call in ozone client by default.
captainzmc opened a new pull request #957: URL: https://github.com/apache/hadoop-ozone/pull/957 ## What changes were proposed in this pull request? Currently, Ozone client flushes the partial chunks as well during flush() call by default. https://github.com/apache/hadoop-ozone/pull/716 proposes to add a configuration to disallow partial chunk flush during flush() call. This Jira aims to enable the config on by default to mimic the default hdfs flush() behaviour and fix any failing unit tests associated with the change. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-3477 ## How was this patch tested? The affected UT has been modified.
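The behaviour toggled by this configuration can be modelled roughly as follows; `ChunkedFlushSketch` is an illustrative simplification, not the actual Ozone client code. With partial chunk write disabled, flush() writes out only complete chunks and keeps the trailing partial chunk buffered until close(), mimicking the HDFS flush() behaviour described above.

```java
import java.io.ByteArrayOutputStream;

// Rough model of "no partial chunk write on flush": flush() emits only
// whole chunks; the remainder stays buffered until close(). Names and
// structure are illustrative, not the real Ozone client.
public final class ChunkedFlushSketch {
  private final int chunkSize;
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private int chunksWritten = 0; // stands in for chunks sent to the datanode

  ChunkedFlushSketch(int chunkSize) { this.chunkSize = chunkSize; }

  void write(byte[] data) { buffer.write(data, 0, data.length); }

  /** Flush only full chunks; retain the partial tail in the buffer. */
  void flush() {
    byte[] data = buffer.toByteArray();
    int fullChunks = data.length / chunkSize;
    chunksWritten += fullChunks;
    buffer.reset();
    int tail = data.length - fullChunks * chunkSize;
    buffer.write(data, fullChunks * chunkSize, tail); // keep partial chunk
  }

  /** close() writes everything, including a final partial chunk. */
  void close() {
    if (buffer.size() > 0) {
      chunksWritten++;
      buffer.reset();
    }
  }

  int chunksWritten() { return chunksWritten; }

  public static void main(String[] args) {
    ChunkedFlushSketch out = new ChunkedFlushSketch(4);
    out.write(new byte[6]);
    out.flush();                             // 1 full chunk written, 2 bytes kept
    System.out.println(out.chunksWritten()); // 1
    out.close();                             // the 2-byte partial chunk goes out
    System.out.println(out.chunksWritten()); // 2
  }
}
```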
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
timmylicheng commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429085745 ## File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto ## @@ -830,7 +832,7 @@ message LookupKeyResponse { message RenameKeyRequest{ required KeyArgs keyArgs = 1; Review comment: Can we have a list of KeyArgs here?
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
timmylicheng commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429089141 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java ## @@ -256,11 +263,16 @@ public Builder setSortDatanodesInPipeline(boolean sort) { return this; } +public Builder setKeyNameList(List<String> keyList) { + this.keyNameList = keyList; + return this; +} + public OmKeyArgs build() { return new OmKeyArgs(volumeName, bucketName, keyName, dataSize, type, Review comment: Why do we keep keyName and keyNameList at the same time?
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
timmylicheng commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429088871 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java ## @@ -157,6 +159,7 @@ public void testFileSystem() throws Exception { Review comment: Not related to this class. But you wanna visit TestOzoneManagerHAWithData and see if HA needs a test case for deleting a list of keys.
[jira] [Updated] (HDDS-2720) Ozone Failure injection Service
[ https://issues.apache.org/jira/browse/HDDS-2720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2720: - Labels: pull-request-available (was: ) > Ozone Failure injection Service > --- > > Key: HDDS-2720 > URL: https://issues.apache.org/jira/browse/HDDS-2720 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Datanode, Ozone Filesystem, Ozone Manager, Ozone > Recon, SCM >Affects Versions: 0.5.0 >Reporter: Prashant Pogde >Assignee: Prashant Pogde >Priority: Major > Labels: pull-request-available > Attachments: OzoneNoiseInjection.pdf > > > This will be used to track development for failure injection service. This > can be used to inject various failures/delays in an ozone cluster and > validate ozone in presence of these failures or extreme conditions. > Attached document provides a brief overview for this failure injection > service and how it could be leveraged to validate ozone in stressful > environments.
[GitHub] [hadoop-ozone] captainzmc closed pull request #957: HDDS-3477. Disable partial chunk write during flush() call in ozone client by default.
captainzmc closed pull request #957: URL: https://github.com/apache/hadoop-ozone/pull/957
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
timmylicheng commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429086029 ## File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto ## @@ -848,6 +850,7 @@ message DeleteKeyResponse { // (similar to a cookie). optional uint64 ID = 3; optional uint64 openVersion = 4; +repeated KeyInfo keyInfoList = 5; Review comment: Can we use repeated KeyArgs here?
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
timmylicheng commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429085904 ## File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto ## @@ -695,6 +695,8 @@ message KeyArgs { // This will be set by leader OM in HA and update the original request. optional FileEncryptionInfoProto fileEncryptionInfo = 15; +// This is a key list to support the batch operation of keys. +repeated string keyNames = 16; Review comment: Why not keep KeyArgs as-is for a single key?
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
timmylicheng commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429087591 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java ## @@ -111,51 +114,53 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager, boolean acquiredLock = false; OMClientResponse omClientResponse = null; Result result = null; +List omKeyInfoList= new ArrayList<>(); try { - // check Acl - checkKeyAcls(ozoneManager, volumeName, bucketName, keyName, - IAccessAuthorizer.ACLType.DELETE, OzoneObj.ResourceType.KEY); - - String objectKey = omMetadataManager.getOzoneKey( - volumeName, bucketName, keyName); - + if (keyNameList.size() == 0) { +throw new OMException("Key not found", KEY_NOT_FOUND); + } acquiredLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK, - volumeName, bucketName); - + volumeName, bucketName); // Validate bucket and volume exists or not. validateBucketAndVolume(omMetadataManager, volumeName, bucketName); - - OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable().get(objectKey); - if (omKeyInfo == null) { -throw new OMException("Key not found", KEY_NOT_FOUND); + Table keyTable = omMetadataManager.getKeyTable(); + for (String keyName : keyNameList) { +// check Acl +checkKeyAcls(ozoneManager, volumeName, bucketName, keyName, +IAccessAuthorizer.ACLType.DELETE, OzoneObj.ResourceType.KEY); +String objectKey = omMetadataManager.getOzoneKey( +volumeName, bucketName, keyName); +OmKeyInfo omKeyInfo = keyTable.get(objectKey); +if (omKeyInfo == null) { + throw new OMException("Key not found", KEY_NOT_FOUND); +} + +// Check if this transaction is a replay of ratis logs. +if (isReplay(ozoneManager, omKeyInfo, trxnLogIndex)) { Review comment: We need to iterate all omKeyInfo in the list as well here This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
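The review point above — that the replay check must run for every key in the batch, not just once — can be sketched as a simplified, self-contained loop. `KeyInfo`, the plain `Map` standing in for the OM key table, and the exception used here are illustrative stand-ins, not Ozone's actual classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified sketch of the batch-delete validation loop discussed in the
// review above. An empty batch or a single missing key aborts the whole
// request (fail-fast), and the replay check runs per key inside the loop.
public class BatchDeleteSketch {

  static class KeyInfo {
    final String name;
    final long updateId;   // stands in for the last-applied transaction index
    KeyInfo(String name, long updateId) {
      this.name = name;
      this.updateId = updateId;
    }
  }

  static List<KeyInfo> validateBatch(Map<String, KeyInfo> keyTable,
      List<String> keyNames, long trxnLogIndex) {
    if (keyNames.isEmpty()) {
      throw new IllegalArgumentException("Key not found");
    }
    List<KeyInfo> result = new ArrayList<>();
    for (String name : keyNames) {
      KeyInfo info = keyTable.get(name);
      if (info == null) {
        throw new IllegalArgumentException("Key not found: " + name);
      }
      // The reviewer's point: this replay check must be evaluated for EVERY
      // key in the list, not just the first one.
      if (info.updateId >= trxnLogIndex) {
        continue; // already applied by a previous log entry; skip on replay
      }
      result.add(info);
    }
    return result;
  }
}
```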
[GitHub] [hadoop-ozone] elek commented on pull request #944: HDDS-3609. Avoid to use Hadoop3.x IOUtils in Ozone Client
elek commented on pull request #944: URL: https://github.com/apache/hadoop-ozone/pull/944#issuecomment-632572966 Thanks for the feedback @adoroszlai. I started to write my first answer yesterday: "good idea, let me check the version of commons-io used by hadoop 2.x". But then I started to think: do we really need one additional dependency? IOUtils is better than guava, but we need to shade all the dependencies on the client side. I agree for the server side, let's use commons, but on the client side it can be better to maintain our own logic. However, *currently* we have IOUtils on the classpath anyway. I updated the patch based on the suggestions; in the future we can revisit that approach (using IOUtils) when we agree to remove IOUtils from the dependencies.
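The "maintain our own logic" option mentioned above is small in practice: a quiet-close helper is only a few lines, which avoids shading either Hadoop's or commons-io's `IOUtils` into the client. This is a hypothetical sketch with an invented class name, not the code in the PR:

```java
import java.io.Closeable;
import java.io.IOException;

// Minimal dependency-free replacement for IOUtils.cleanup/closeQuietly on
// the client side. CloseSketch is an illustrative name.
public final class CloseSketch {
  private CloseSketch() { }

  // Close each resource, swallowing close-time errors, so one failed close
  // does not prevent closing the remaining resources.
  public static void closeQuietly(Closeable... closeables) {
    for (Closeable c : closeables) {
      if (c == null) {
        continue;
      }
      try {
        c.close();
      } catch (IOException ignored) {
        // best-effort cleanup: close failures are intentionally suppressed
      }
    }
  }
}
```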
[GitHub] [hadoop-ozone] captainzmc commented on pull request #957: HDDS-3477. Disable partial chunk write during flush() call in ozone client by default.
captainzmc commented on pull request #957: URL: https://github.com/apache/hadoop-ozone/pull/957#issuecomment-632579658 Hi @bshashikant, Could you please help to review this PR?
[GitHub] [hadoop-ozone] captainzmc removed a comment on pull request #957: HDDS-3477. Disable partial chunk write during flush() call in ozone client by default.
captainzmc removed a comment on pull request #957: URL: https://github.com/apache/hadoop-ozone/pull/957#issuecomment-632574456 Hi @bshashikant, Could you please help to review this PR?
[GitHub] [hadoop-ozone] maobaolong commented on pull request #934: HDDS-3605. Support close all pipelines.
maobaolong commented on pull request #934: URL: https://github.com/apache/hadoop-ozone/pull/934#issuecomment-632583943 @xiaoyuyao Thank you for your suggestions: - I've added two lists to show the result of this batch operation - Added a test - Removed the redundant condition Please take another look.
[jira] [Updated] (HDDS-2426) Support recover-trash to an existing bucket.
[ https://issues.apache.org/jira/browse/HDDS-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YiSheng Lien updated HDDS-2426: --- Description: Support recovering trash to an existing bucket. *Note* 1. We should also add a config key that prevents this mode, so admins can force the recovery to a new bucket always. >> Yeah, we should add a config to enable admins always recover trash to new >> buckets. But the config checking would be implemented in Ozone CLI part 2. A new table *(TrashTable)* is introduced to implement this jira was: Support recovering trash to an existing bucket. *Note* 1. We should also add a config key that prevents this mode, so admins can force the recovery to a new bucket always. >> Yeah, we should add a config to enable admins always recover trash to new >> buckets. But the config checking would be implemented in Ozone CLI part 2. A new table *(TrashTable)* is introduced to implement this jira. > Support recover-trash to an existing bucket. > - > > Key: HDDS-2426 > URL: https://issues.apache.org/jira/browse/HDDS-2426 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Manager >Reporter: Anu Engineer >Assignee: YiSheng Lien >Priority: Major > > Support recovering trash to an existing bucket. > *Note* > 1. We should also add a config key that prevents this mode, so admins can > force the recovery to a new bucket always. > >> Yeah, we should add a config to enable admins always recover trash to new > buckets. > But the config checking would be implemented in Ozone CLI part > 2. A new table *(TrashTable)* is introduced to implement this jira -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2426) Support recover-trash to an existing bucket.
[ https://issues.apache.org/jira/browse/HDDS-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2426: - Labels: pull-request-available (was: ) > Support recover-trash to an existing bucket. > - > > Key: HDDS-2426 > URL: https://issues.apache.org/jira/browse/HDDS-2426 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Manager >Reporter: Anu Engineer >Assignee: YiSheng Lien >Priority: Major > Labels: pull-request-available > > Support recovering trash to an existing bucket. > *Note* > 1. We should also add a config key that prevents this mode, so admins can > force the recovery to a new bucket always. > >> Yeah, we should add a config to enable admins always recover trash to new > buckets. > But the config checking would be implemented in Ozone CLI part > 2. A new table *(TrashTable)* is introduced to implement this jira
[GitHub] [hadoop-ozone] cxorm opened a new pull request #958: HDDS-2426. Support recover-trash to an existing bucket.
cxorm opened a new pull request #958: URL: https://github.com/apache/hadoop-ozone/pull/958 ## What changes were proposed in this pull request? This PR is composed of changes including: - provide a trash-enabled config when creating buckets. - provide a recover-window config when creating buckets. - global config for the above settings. - core logic of recovering trash to existing buckets. ### Proposed fix. - In `OmMetadataManagerImpl#getPendingDeletionKeys` we only check the last item in repeatedOmKeyInfo, because once the last item is trash-enabled, we keep this key from being purged in the OM DB. > If the lifetime of a key in deletedTable is less than recoverWindow, we don't purge it this time. > So we use the difference between currentTime and the key's modificationTime to check this situation. - Using **trashTable** to track trash. > For deleting a key: > add a cache update of trashTable in `OMKeyDeleteRequest` and the DB operation of trashTable in `OMKeyDeleteResponse`. > For recovering trash: > in `OMTrashRecoverRequest`, > remove the OMKeyInfo in RepeatedOmKeyInfo from the cache of trashTable and deletedTable **(not all of RepeatedOmKeyInfo)**, > and add the OMKeyInfo to the cache of keyTable. > In `OMTrashRecoverResponse`, > remove the OMKeyInfo in RepeatedOmKeyInfo from trashTable and deletedTable in the OM DB, > and add the OMKeyInfo to KeyTable in the OM DB. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-2426 ## How was this patch tested? Added UTs covering OMTrashRecoverRequest and OMTrashRecoverResponse.
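The recover-window rule quoted above (a deleted key is not purged while its time in the deleted table is shorter than the window) reduces to one comparison between the current time and the key's modification time. A minimal sketch, with illustrative names since the PR's actual helper is not shown here:

```java
// Sketch of the recover-window check described in the PR above: a deleted
// key must be retained (kept out of the purge batch) while it is younger
// than the configured window. Parameter names are illustrative.
public final class RecoverWindowSketch {
  private RecoverWindowSketch() { }

  // True if the key's time-in-trash is still less than the recover window,
  // i.e. the key must be kept recoverable and skipped by the purge pass.
  static boolean withinRecoverWindow(long nowMs, long modificationTimeMs,
      long recoverWindowMs) {
    return (nowMs - modificationTimeMs) < recoverWindowMs;
  }
}
```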
[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #767: HDDS-3064. Get Key is hung when READ delay is injected in chunk file path
lokeshj1703 commented on pull request #767: URL: https://github.com/apache/hadoop-ozone/pull/767#issuecomment-632648835 @bshashikant Thanks for the contribution! @elek @adoroszlai Thanks for the reviews! I have committed the PR to master branch.
[GitHub] [hadoop-ozone] lokeshj1703 closed pull request #767: HDDS-3064. Get Key is hung when READ delay is injected in chunk file path
lokeshj1703 closed pull request #767: URL: https://github.com/apache/hadoop-ozone/pull/767
[jira] [Updated] (HDDS-2823) SCM HA Support
[ https://issues.apache.org/jira/browse/HDDS-2823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDDS-2823: -- Target Version/s: 0.7.0 > SCM HA Support > --- > > Key: HDDS-2823 > URL: https://issues.apache.org/jira/browse/HDDS-2823 > Project: Hadoop Distributed Data Store > Issue Type: New Feature >Reporter: Sammi Chen >Assignee: Li Cheng >Priority: Major > > OM HA is close to feature complete now. It's time to support SCM HA, to make > sure there is no SPoF in the system. > > Design doc: > https://docs.google.com/document/d/1vr_z6mQgtS1dtI0nANoJlzvF1oLV-AtnNJnxAgg69rM/edit?usp=sharing
[jira] [Updated] (HDDS-3630) Merge rocksdb into one in datanode
[ https://issues.apache.org/jira/browse/HDDS-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] runzhiwang updated HDDS-3630: - Description: Currently, one rocksdb for one container. one container has 5GB capacity. 10TB data need more than 2000 rocksdb in one datanode. It's difficult to limit the memory of 2000 rocksdb. So maybe we should use one rocksdb for each disk. (was: Currently, one rocksdb for one container. one container has 5GB capacity. 10TB data need more than 2000 rocksdb in one datanode. It's difficult to limit the memory of 2000 rocksdb. So maybe we should only use one rocksdb for all containers.) > Merge rocksdb into one in datanode > -- > > Key: HDDS-3630 > URL: https://issues.apache.org/jira/browse/HDDS-3630 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: runzhiwang >Assignee: runzhiwang >Priority: Major > > Currently, one rocksdb for one container. one container has 5GB capacity. > 10TB data need more than 2000 rocksdb in one datanode. It's difficult to > limit the memory of 2000 rocksdb. So maybe we should use one rocksdb for each > disk.
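The sizing argument in the description is easy to verify with back-of-envelope arithmetic: with one RocksDB per 5 GB container, a 10 TB datanode needs over 2000 instances, while one RocksDB per disk needs only as many instances as there are disks. The helper below is purely illustrative:

```java
// Back-of-envelope check of the numbers quoted in HDDS-3630 above:
// instances = datanode capacity / container capacity when there is one
// RocksDB per container.
public final class RocksDbCountSketch {
  private RocksDbCountSketch() { }

  static long instancesPerContainer(long datanodeBytes, long containerBytes) {
    return datanodeBytes / containerBytes;
  }
}
```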
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #954: HDDS-3638. Add a cat command to show the text of a file in the Ozone server
adoroszlai commented on a change in pull request #954: URL: https://github.com/apache/hadoop-ozone/pull/954#discussion_r429213813 ## File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/keys/CatKeyHandler.java ## @@ -0,0 +1,59 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.ozone.shell.keys; + +import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CHUNK_SIZE_KEY; + +import org.apache.hadoop.conf.StorageUnit; +import org.apache.hadoop.io.IOUtils; +import org.apache.hadoop.ozone.client.OzoneBucket; +import org.apache.hadoop.ozone.client.OzoneClient; +import org.apache.hadoop.ozone.client.OzoneClientException; +import org.apache.hadoop.ozone.client.OzoneVolume; +import org.apache.hadoop.ozone.shell.OzoneAddress; +import picocli.CommandLine.Command; + +import java.io.IOException; +import java.io.InputStream; + +/** + * Cat an existing key. 
+ */ +@Command(name = "cat", +description = "Copies a specific Ozone key to standard output") +public class CatKeyHandler extends KeyHandler { + + @Override + protected void execute(OzoneClient client, OzoneAddress address) + throws IOException, OzoneClientException { + +String volumeName = address.getVolumeName(); +String bucketName = address.getBucketName(); +String keyName = address.getKeyName(); + +int chunkSize = (int) getConf().getStorageSize(OZONE_SCM_CHUNK_SIZE_KEY, +"4KB", StorageUnit.BYTES); Review comment: This will use `4KB` buffer only if chunk size is not configured, which I don't think can happen. I guess @xiaoyuyao's intention was to always use 4KB buffer, regardless of chunk size setting. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #900: HDDS-3500. Hide OMFailoverProxyProvider usage behind an interface
bharatviswa504 commented on a change in pull request #900: URL: https://github.com/apache/hadoop-ozone/pull/900#discussion_r429255662 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OmTransportFactory.java ## @@ -0,0 +1,63 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.ozone.om.protocolPB; + +import java.io.IOException; +import java.util.Iterator; +import java.util.ServiceLoader; + +import org.apache.hadoop.hdds.conf.ConfigurationSource; +import org.apache.hadoop.security.UserGroupInformation; + +/** + * Factory pattern to create object for RPC communication with OM. 
+ */ +public interface OmTransportFactory { + + OmTransport createOmTransport(ConfigurationSource source, + UserGroupInformation ugi, String omServiceId) throws IOException; + + static OmTransport create(ConfigurationSource conf, + UserGroupInformation ugi, String omServiceId) throws IOException { +OmTransportFactory factory = createFactory(); + +return factory.createOmTransport(conf, ugi, omServiceId); + } + + static OmTransportFactory createFactory() throws IOException { +ServiceLoader transportFactoryServiceLoader = +ServiceLoader.load(OmTransportFactory.class); +Iterator iterator = Review comment: Question: what is alternative way other then having meta-inf/services to load class for OmTransportFactory? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
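Regarding the question in the review above: the two usual wirings are `ServiceLoader` discovery via a `META-INF/services/<interface-FQCN>` file (what this PR uses) and an explicit class name resolved reflectively, similar to what Hadoop does with `conf.getClass(...)`. A hedged sketch with made-up names, not Ozone's actual factory:

```java
import java.util.ServiceLoader;

// Illustrative comparison of the two loading strategies discussed above.
// Transport/GrpcTransport are invented stand-ins for OmTransport types.
public class LoaderSketch {

  public interface Transport {
    String name();
  }

  // Option A: discovery via META-INF/services (the PR's approach); returns
  // the first registered implementation, or null when none is registered.
  static Transport byServiceLoader() {
    for (Transport t : ServiceLoader.load(Transport.class)) {
      return t;
    }
    return null;
  }

  // Option B: explicit class-name lookup (e.g. from a config key); no
  // META-INF/services entry needed, at the cost of a hard-coded name.
  static Transport byClassName(String className) {
    try {
      return (Transport) Class.forName(className)
          .getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException("cannot load transport: " + className, e);
    }
  }

  public static class GrpcTransport implements Transport {
    public String name() {
      return "grpc";
    }
  }
}
```

The service-loader route keeps the common module free of compile-time references to concrete transports, which is usually why it is preferred for pluggable factories.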
[jira] [Commented] (HDDS-3644) Failed to delete chunk file due to chunk size mismatch
[ https://issues.apache.org/jira/browse/HDDS-3644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114120#comment-17114120 ] Attila Doroszlai commented on HDDS-3644: [~Sammi], after trying various scenarios I think having chunk_2 file of size 5MB can only happen in mixed version setup (post-HDDS-2717 client writes data to pre-HDDS-2717 datanode). I can reproduce it with that setup: {code} 3029181 4096 -rw-r--r-- 1 hadoop users 4194304 May 22 14:35 /data/hdds/hdds/ec1bcfdf-c076-48e6-b4f7-15f89b3acb5b/current/containerDir0/1/chunks/104212603391115264_chunk_1 3029180 1024 -rw-r--r-- 1 hadoop users 5242880 May 22 14:35 /data/hdds/hdds/ec1bcfdf-c076-48e6-b4f7-15f89b3acb5b/current/containerDir0/1/chunks/104212603391115264_chunk_2 {code} Then the block deleting problem can happen if datanode is also upgraded to post-HDDS-2717 version. New client sends correct offset for each chunk, and depending on the chunk layout the new datanode uses it (for file-per-block) or ignores it (for file-per-chunk). Old datanode, however, uses chunk offset to seek the write location in the chunk file, which it could safely do because old client always sent offset=0. > Failed to delete chunk file due to chunk size mismatch > -- > > Key: HDDS-3644 > URL: https://issues.apache.org/jira/browse/HDDS-3644 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Sammi Chen >Priority: Major > > LOGs > {noformat} > 2020-05-19 13:45:30,493 [BlockDeletingService#8] WARN > org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy: Chunk > file doe not exist. chunk info > :ChunkInfo{chunkName='104079328540607246_chunk_1, offset=0, len=4194304} > 2020-05-19 13:45:30,493 [BlockDeletingService#8] ERROR > org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy: Not > Supported Operation. Trying to delete a chunk that is in shared file. 
chunk > info : ChunkInfo{chunkName='104079328540607246_chunk_2, offset=4194304, > len=1048576} > 2020-05-19 13:45:30,494 [BlockDeletingService#8] ERROR > org.apache.hadoop.ozone.container.keyvalue.statemachine.background.BlockDeletingService: > Failed to delete files for block #deleting#104079328540607246 > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Not Supported Operation. Trying to delete a chunk that is in shared file. > chunk info : ChunkInfo{chunkName='104079328540607246_chunk_2, offset=4194304, > len=1048576} > at > org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy.deleteChunks(FilePerChunkStrategy.java:286) > at > org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerDispatcher.deleteChunks(ChunkManagerDispatcher.java:111) > at > org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.deleteBlock(KeyValueHandler.java:1043) > at > org.apache.hadoop.ozone.container.keyvalue.statemachine.background.BlockDeletingService$BlockDeletingTask.lambda$call$0(BlockDeletingService.java:286) > Caused by: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Not Supported Operation. Trying to delete a chunk that is in shared file. > chunk info : ChunkInfo{chunkName='104079328540607246_chunk_2, offset=4194304, > len=1048576} > {noformat} > chunk_1 is 4MB and chunk_2 is 1MB in block info. > chunk_1 doesn't exist (might have been deleted successfully) and chunk_2 is 5MB on > disk.
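The numbers in this report are internally consistent with the mixed-version explanation above: if an old datanode seeks to the block-relative offset inside a file-per-chunk file before writing, the resulting file size is offset + length. A quick check with an illustrative helper, not Ozone code:

```java
// Arithmetic on the sizes in the HDDS-3644 report: chunk_2 should be
// len = 1048576 bytes, but an old (pre-HDDS-2717) datanode applying the
// block-relative offset to a file-per-chunk layout writes a file of
// offset + len bytes instead.
public final class ChunkSizeSketch {
  private ChunkSizeSketch() { }

  static long fileSizeWithSeek(long offset, long len) {
    return offset + len;
  }
}
```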
[jira] [Updated] (HDDS-2949) mkdir : store directory entries in a separate table
[ https://issues.apache.org/jira/browse/HDDS-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDDS-2949: Labels: backward-incompatible (was: ozone-incompatible) > mkdir : store directory entries in a separate table > --- > > Key: HDDS-2949 > URL: https://issues.apache.org/jira/browse/HDDS-2949 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Manager >Reporter: Supratim Deka >Assignee: Rakesh Radhakrishnan >Priority: Major > Labels: backward-incompatible > > As of HDDS-2940, all the directories from the path prefix get created as > entries in the key table. as per the namespace proposal attached to > HDDS-2939, directory entries need to be stored in a separate "directory" > table. Files will continue to be stored in the key table, which can be > thought of as the "file" table. > The advantage of a separate directory table is to make directory lookup more > efficient - the entire table would fit into memory for a typical file based > dataset.
[GitHub] [hadoop-ozone] maobaolong commented on pull request #954: HDDS-3638. Add a cat command to show the text of a file in the Ozone server
maobaolong commented on pull request #954: URL: https://github.com/apache/hadoop-ozone/pull/954#issuecomment-632741195 @adoroszlai Thank you for the clarification. I use a constant 4096 as the buffer size; I think I have addressed @xiaoyuyao's comments now, sorry for the misunderstanding.
[GitHub] [hadoop-ozone] maobaolong commented on a change in pull request #954: HDDS-3638. Add a cat command to show the text of a file in the Ozone server
maobaolong commented on a change in pull request #954: URL: https://github.com/apache/hadoop-ozone/pull/954#discussion_r429300590 ## File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/keys/CatKeyHandler.java ## @@ -0,0 +1,59 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.ozone.shell.keys; + +import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CHUNK_SIZE_KEY; + +import org.apache.hadoop.conf.StorageUnit; +import org.apache.hadoop.io.IOUtils; +import org.apache.hadoop.ozone.client.OzoneBucket; +import org.apache.hadoop.ozone.client.OzoneClient; +import org.apache.hadoop.ozone.client.OzoneClientException; +import org.apache.hadoop.ozone.client.OzoneVolume; +import org.apache.hadoop.ozone.shell.OzoneAddress; +import picocli.CommandLine.Command; + +import java.io.IOException; +import java.io.InputStream; + +/** + * Cat an existing key. 
+ */ +@Command(name = "cat", +description = "Copies a specific Ozone key to standard output") +public class CatKeyHandler extends KeyHandler { + + @Override + protected void execute(OzoneClient client, OzoneAddress address) + throws IOException, OzoneClientException { + +String volumeName = address.getVolumeName(); +String bucketName = address.getBucketName(); +String keyName = address.getKeyName(); + +int chunkSize = (int) getConf().getStorageSize(OZONE_SCM_CHUNK_SIZE_KEY, +"4KB", StorageUnit.BYTES); Review comment: @adoroszlai Got it, done. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
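The behaviour settled on in this review — always copy with a fixed 4096-byte buffer, independent of the configured chunk size — can be sketched as a plain stream copy. This is a simplified illustration (`CatSketch`/`copy` are invented names; the real handler would read the key via the Ozone client and write to `System.out`):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Core of the agreed cat behaviour: copy the key's stream to the output
// with a constant 4096-byte buffer, not a chunk-size-derived one. IOExceptions
// are wrapped unchecked here only to keep the sketch compact.
public final class CatSketch {
  private CatSketch() { }

  static long copy(InputStream in, OutputStream out) {
    byte[] buffer = new byte[4096];   // constant buffer, per the review
    long total = 0;
    try {
      int read;
      while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
        total += read;
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    return total;
  }
}
```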
[jira] [Updated] (HDDS-3172) Use DBStore instead of MetadataStore in SCM
[ https://issues.apache.org/jira/browse/HDDS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDDS-3172: Labels: backward-incompatible pull-request-available (was: backward-incompatible ozone-incompatible pull-request-available) > Use DBStore instead of MetadataStore in SCM > > > Key: HDDS-3172 > URL: https://issues.apache.org/jira/browse/HDDS-3172 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Marton Elek >Priority: Critical > Labels: backward-incompatible, pull-request-available > Fix For: 0.6.0 > > Time Spent: 10m > Remaining Estimate: 0h > > The MetadataStore interface provides a generic view to any key / value store > with a LevelDB and RocksDB implementation. > Since the early version of MetadataStore we also got the DBStore interface > which is more advanced (it supports DB profiles and ColumnFamilies). > To simplify the introduction of new features (like versioning or rocksdb > tuning) we should use the new interface everywhere instead of the old > interface. > We should update SCM and Datanode to use the DBStore instead of > MetadataStore.
[jira] [Commented] (HDDS-3385) Simplify S3 -> Ozone volume mapping
[ https://issues.apache.org/jira/browse/HDDS-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114060#comment-17114060 ] Aravindan Vijayan commented on HDDS-3385: - [~elek] We can go with *backward-incompatible* as tagged in HDDS-3172 since that correctly captures the type of the JIRA. I will remove the "ozone-incompatible" label. > Simplify S3 -> Ozone volume mapping > --- > > Key: HDDS-3385 > URL: https://issues.apache.org/jira/browse/HDDS-3385 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Attila Doroszlai >Priority: Critical > Labels: imcompatible, ozone-incompatible, pull-request-available > Fix For: 0.6.0 > > > See the design doc for more details: > https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/docs/content/design/ozone-volume-management.md
[jira] [Updated] (HDDS-3385) Simplify S3 -> Ozone volume mapping
[ https://issues.apache.org/jira/browse/HDDS-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDDS-3385: Labels: backward-incompatible imcompatible pull-request-available (was: imcompatible ozone-incompatible pull-request-available) > Simplify S3 -> Ozone volume mapping > --- > > Key: HDDS-3385 > URL: https://issues.apache.org/jira/browse/HDDS-3385 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Attila Doroszlai >Priority: Critical > Labels: backward-incompatible, imcompatible, > pull-request-available > Fix For: 0.6.0 > > > See the design doc for more details: > https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/docs/content/design/ozone-volume-management.md
[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #900: HDDS-3500. Hide OMFailoverProxyProvider usage behind an interface
bharatviswa504 commented on pull request #900: URL: https://github.com/apache/hadoop-ozone/pull/900#issuecomment-632701755 I have one question, other than that it LGTM.
[jira] [Created] (HDDS-3645) Add a replication type option for putkey command
maobaolong created HDDS-3645: Summary: Add a replication type option for putkey command Key: HDDS-3645 URL: https://issues.apache.org/jira/browse/HDDS-3645 Project: Hadoop Distributed Data Store Issue Type: Sub-task Components: Ozone CLI Affects Versions: 0.6.0 Reporter: maobaolong Assignee: maobaolong We can put a key into Ozone with different replication types {RATIS, STAND_ALONE}, but currently I have to change the ozone-site.xml file to do so; adding a command-line option would be better.
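The proposal amounts to one extra command-line flag on the putkey command. Ozone's shell is built on picocli; the sketch below hand-rolls the parsing so it stays dependency-free, and the flag name `--type` is an assumption, not the final CLI:

```java
// Illustrative sketch of the proposed option for `ozone sh key put`: pick
// the replication type from a flag, falling back to a default (what the
// ozone-site.xml configuration supplies today) when the flag is absent.
public final class PutKeyOptionSketch {
  private PutKeyOptionSketch() { }

  enum ReplicationType { RATIS, STAND_ALONE }

  static ReplicationType parseType(String[] args, ReplicationType dflt) {
    for (int i = 0; i < args.length - 1; i++) {
      if ("--type".equals(args[i])) {
        return ReplicationType.valueOf(args[i + 1]);
      }
    }
    return dflt;
  }
}
```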
[GitHub] [hadoop-ozone] prashantpogde commented on a change in pull request #956: HDDS-2720. Ozone Failure injection Service
prashantpogde commented on a change in pull request #956: URL: https://github.com/apache/hadoop-ozone/pull/956#discussion_r429499942 ## File path: tools/FaultInjectionService/AUTHORS ## @@ -0,0 +1,12 @@ +Current Maintainer Review comment: @elek Thank you for reviewing it. - Yes, I will remove the AUTHORS file and change the directory name to lower case. - It's supported only on Linux, and some dependencies require building from source. I will add an automated build script as the next set of enhancements and create a separate PR.
[GitHub] [hadoop-ozone] prashantpogde commented on a change in pull request #956: HDDS-2720. Ozone Failure injection Service
prashantpogde commented on a change in pull request #956: URL: https://github.com/apache/hadoop-ozone/pull/956#discussion_r429499987 ## File path: tools/FaultInjectionService/README.md ## @@ -0,0 +1,53 @@ +NoiseInjector +== + +About +-- +TBD Review comment: yup, will change. ## File path: tools/FaultInjectionService/README.md ## @@ -0,0 +1,53 @@ +NoiseInjector +== + +About +-- +TBD + +Development Status +-- +TBD Review comment: yup, will change.
[GitHub] [hadoop-ozone] maobaolong opened a new pull request #962: HDDS-3645. Add a replication type option for putkey command
maobaolong opened a new pull request #962: URL: https://github.com/apache/hadoop-ozone/pull/962 ## What changes were proposed in this pull request? We can put a key into Ozone with different replication types {RATIS, STAND_ALONE}, but currently changing the type requires editing the ozone-site.xml file, so adding a command-line option would be better. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-3645 ## How was this patch tested? ```bash bin/ozone sh key put -r 3 -t STAND_ALONE /myvol/mybucket/NOTICE.txt NOTICE.txt bin/ozone sh key put -r 3 -t RATIS /myvol/mybucket/NOTICE2.txt NOTICE.txt ```
[jira] [Updated] (HDDS-3645) Add a replication type option for putkey command
[ https://issues.apache.org/jira/browse/HDDS-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-3645: - Labels: pull-request-available (was: ) > Add a replication type option for putkey command > > > Key: HDDS-3645 > URL: https://issues.apache.org/jira/browse/HDDS-3645 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone CLI >Affects Versions: 0.6.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Labels: pull-request-available > > We can put a key into Ozone with different replication types {RATIS, > STAND_ALONE}, but currently changing the type requires editing the > ozone-site.xml file, so adding a command-line option would be better.
[jira] [Created] (HDDS-3646) Add a copy command to copy key to a new one.
maobaolong created HDDS-3646: Summary: Add a copy command to copy key to a new one. Key: HDDS-3646 URL: https://issues.apache.org/jira/browse/HDDS-3646 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: maobaolong Assignee: maobaolong It should support specifying the replication factor and type.
[GitHub] [hadoop-ozone] maobaolong opened a new pull request #963: HDDS-3646. Add a copy command to copy key to a new one.
maobaolong opened a new pull request #963: URL: https://github.com/apache/hadoop-ozone/pull/963 ## What changes were proposed in this pull request? A copy command can copy an existing key to a new one. It should support specifying the replication factor and type. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-3646 ## How was this patch tested? ```bash bin/ozone sh key rename ```
[jira] [Updated] (HDDS-3646) Add a copy command to copy key to a new one.
[ https://issues.apache.org/jira/browse/HDDS-3646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-3646: - Labels: pull-request-available (was: ) > Add a copy command to copy key to a new one. > - > > Key: HDDS-3646 > URL: https://issues.apache.org/jira/browse/HDDS-3646 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Labels: pull-request-available > > It should support specifying the replication factor and type.
[GitHub] [hadoop-ozone] smengcl commented on pull request #865: HDDS-2969. Implement ofs://: Add contract test
smengcl commented on pull request #865: URL: https://github.com/apache/hadoop-ozone/pull/865#issuecomment-632967135 > @smengcl Please rebase the branch to pick up this change from master #867. That should fix the CI issue. > [HDDS-3373](https://issues.apache.org/jira/browse/HDDS-3373). Intermittent failure in TestOMRatisLogParser (#867) > > This seems to be an addendum commit in addition to the one that is currently in [HDDS-2665](https://issues.apache.org/jira/browse/HDDS-2665)-ofs branch. > [HDDS-3373](https://issues.apache.org/jira/browse/HDDS-3373). Intermittent failure in TestDnRatisLogParser and TestOMRatisLogParser (#858) I attempted to rebase the OFS branch locally a few days ago, but unfortunately it won't compile, implying that changes to the OFS classes are required due to other master branch work. I will do the rebase in another jira. In the meantime I have picked up the addendum patch for HDDS-3373 in another dev branch just to run the tests: https://github.com/smengcl/hadoop-ozone/commits/HDDS-2969-with-HDDS-3373-addendum
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429341429 ## File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto ## @@ -830,7 +832,7 @@ message LookupKeyResponse { message RenameKeyRequest{ required KeyArgs keyArgs = 1; -required string toKeyName = 2; +optional string toKeyName = 2; Review comment: why do we need to change toKeyName from required to optional?
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429346575 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java ## @@ -196,7 +201,7 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager, case FAILURE: omMetrics.incNumKeyDeleteFails(); LOG.error("Key delete failed. Volume:{}, Bucket:{}, Key{}. Exception:{}", - volumeName, bucketName, keyName, exception); + volumeName, bucketName, keyNameList, exception); Review comment: Same as above: we might want to print only the key that failed in the deletion instead of the whole list upon failure; printing the whole list can be a debug- or trace-level log.
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429358962 ## File path: hadoop-hdds/common/src/main/resources/ozone-default.xml ## @@ -1966,6 +1966,14 @@ jar and false for the ozone-filesystem-lib-current.jar + +ozone.fs.iterate.batch-size +1 +OZONE, OZONEFS + + iterate batch size of delete and rename when use BasicOzoneFileSystem. Review comment: Can we update the title of the JIRA to reflect batch rename?
[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #938: HDDS-3608. NPE while process a pipeline report when PipelineQuery absent in query2OpenPipelines
xiaoyuyao commented on pull request #938: URL: https://github.com/apache/hadoop-ozone/pull/938#issuecomment-632776850 > @xiaoyuyao Sorry, I can't find your inline comments. Could you show me where the inline comments you mean are? The comment is just above: query2OpenPipelines has been initialized in initializeQueryMap() based on the RepType/RepFactor, and entries are never removed from the map. Correct me if I'm wrong, but the pipelineList should never be null unless a different RepType/Factor is specified in the query, which is currently impossible.
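The invariant described in this comment can be sketched as follows. This is a minimal illustration, not the actual SCM code: the map key format, type/factor values, and method names are assumptions, and computeIfAbsent is shown only as a defensive alternative for queries outside the pre-populated set.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// If every (type, factor) query key is pre-populated at initialization and
// never removed, lookups cannot return null; computeIfAbsent additionally
// guards against an unexpected query.
class PipelineQueryMapSketch {
    private final Map<String, List<String>> query2OpenPipelines =
        new ConcurrentHashMap<>();

    void initializeQueryMap() {
        for (String type : new String[] {"RATIS", "STAND_ALONE"}) {
            for (String factor : new String[] {"ONE", "THREE"}) {
                query2OpenPipelines.put(type + "/" + factor, new ArrayList<>());
            }
        }
    }

    List<String> getOpenPipelines(String type, String factor) {
        // Defensive lookup: never null, even for a query that was not
        // part of the initialization.
        return query2OpenPipelines.computeIfAbsent(
            type + "/" + factor, k -> new ArrayList<>());
    }
}
```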
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429340218 ## File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto ## @@ -677,7 +677,7 @@ message ListBucketsResponse { message KeyArgs { required string volumeName = 1; required string bucketName = 2; -required string keyName = 3; Review comment: Changing required to optional would be an incompatible change.
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429348515 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java ## @@ -272,15 +272,15 @@ public boolean createDirectory(String keyName) throws IOException { /** * Helper method to delete an object specified by key name in bucket. * - * @param keyName key name to be deleted + * @param keyNameList key name list to be deleted * @return true if the key is deleted, false otherwise */ @Override - public boolean deleteObject(String keyName) { -LOG.trace("issuing delete for key {}", keyName); + public boolean deleteObject(List keyNameList) { Review comment: Should we define a new method called deleteObjects to be backward compatible?
[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #954: HDDS-3638. Add a cat command to show the text of a file in the Ozone server
xiaoyuyao commented on pull request #954: URL: https://github.com/apache/hadoop-ozone/pull/954#issuecomment-632806385 LGTM, +1 pending CI.
[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #855: HDDS-3474. Create transactionInfo Table in OmMetadataManager.
bharatviswa504 merged pull request #855: URL: https://github.com/apache/hadoop-ozone/pull/855
[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #855: HDDS-3474. Create transactionInfo Table in OmMetadataManager.
bharatviswa504 commented on pull request #855: URL: https://github.com/apache/hadoop-ozone/pull/855#issuecomment-632831227 Thank you @hanishakoneru for the review and @elek for the discussion and review.
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429339639 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java ## @@ -256,11 +263,16 @@ public Builder setSortDatanodesInPipeline(boolean sort) { return this; } +public Builder setKeyNameList(List keyList) { + this.keyNameList = keyList; + return this; +} + public OmKeyArgs build() { return new OmKeyArgs(volumeName, bucketName, keyName, dataSize, type, Review comment: I think other operations such as createKey still assume a single KeyName.
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429346155 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java ## @@ -187,7 +192,7 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager, case SUCCESS: omMetrics.decNumKeys(); LOG.debug("Key deleted. Volume:{}, Bucket:{}, Key:{}", volumeName, - bucketName, keyName); + bucketName, keyNameList); Review comment: This will only print the address of the keyNameList. You may want to expand the list and also protect the LOG.debug with an if LOG.isDebugEnabled() check.
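The guarded debug-logging pattern suggested in this review can be sketched as follows. This is a minimal sketch: the Log interface is a stand-in for SLF4J's Logger, and the class and method names are illustrative rather than the actual OM code.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class GuardedKeyDeleteLog {
    // Stand-in for an SLF4J-style logger; real code would use LoggerFactory.
    interface Log {
        boolean isDebugEnabled();
        void debug(String msg);
    }

    // Expand the key list only when debug logging is enabled, so the
    // string join is not paid for on the common (non-debug) path.
    static void logDeleted(Log log, String volume, String bucket,
                           List<String> keyNameList) {
        if (log.isDebugEnabled()) {
            String keys = keyNameList.stream().collect(Collectors.joining(", "));
            log.debug("Key deleted. Volume:" + volume + ", Bucket:" + bucket
                + ", Keys:[" + keys + "]");
        }
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        Log log = new Log() {
            public boolean isDebugEnabled() { return true; }
            public void debug(String msg) { out.append(msg); }
        };
        logDeleted(log, "vol1", "bucket1", Arrays.asList("k1", "k2"));
        System.out.println(out);
    }
}
```

With SLF4J's `{}` placeholders the formatting itself is already lazy, but joining the list into a readable string still costs work up front, which is why the `isDebugEnabled()` guard is worthwhile here.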
[jira] [Commented] (HDDS-3617) SCM security
[ https://issues.apache.org/jira/browse/HDDS-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114262#comment-17114262 ] Xiaoyu Yao commented on HDDS-3617: -- Thanks [~maobaolong] for opening the issue. Please check the service-level authorization added by HDDS-1038. The SCM service is intended to be open only to internal services like OM/DN. With proper ACL settings in hadoop-policy.xml, you should be able to ensure that only authorized service users/admins have access to these services. > SCM security > > > Key: HDDS-3617 > URL: https://issues.apache.org/jira/browse/HDDS-3617 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: SCM >Affects Versions: 0.6.0 >Reporter: maobaolong >Priority: Major > > Now the absence of security in SCM is a risk. SCM doesn't know who requested a > powerful operation, and performs it anyway, especially admin operations such > as close pipeline, create pipeline, safemode exit and so on. > I think we should do some work on it: > - Authentication. Verify the user's identity. > - Authorization. Check that the user has the right to access. > - Whitelist and blacklist as a simple way to check permissions.
[jira] [Assigned] (HDDS-3272) Smoke Test: hdfs commands failing on hadoop 27 docker-compose
[ https://issues.apache.org/jira/browse/HDDS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Elek reassigned HDDS-3272: - Assignee: Marton Elek > Smoke Test: hdfs commands failing on hadoop 27 docker-compose > - > > Key: HDDS-3272 > URL: https://issues.apache.org/jira/browse/HDDS-3272 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Affects Versions: 0.5.0 >Reporter: Dinesh Chitlangia >Assignee: Marton Elek >Priority: Blocker > > Discovered by [~bharat] when testing 0.5.0-beta RC2. > > > issue when running hdfs commands on hadoop 27 > docker-compose. I see the same test failing when running the smoke test. > $ docker exec -it c7fe17804044 bash > bash-4.4$ hdfs dfs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk > 2020-03-22 04:40:14 WARN NativeCodeLoader:60 - Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > 2020-03-22 04:40:15 INFO MetricsConfig:118 - Loaded properties from > hadoop-metrics2.properties > 2020-03-22 04:40:16 INFO MetricsSystemImpl:374 - Scheduled Metric snapshot > period at 10 second(s). 
> 2020-03-22 04:40:16 INFO MetricsSystemImpl:191 - XceiverClientMetrics > metrics system started > -put: Fatal internal error > java.lang.NullPointerException: client is null > at java.util.Objects.requireNonNull(Objects.java:228) > at > org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:201) > at > org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:227) > at > org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:305) > at > org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:315) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:599) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:452) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:463) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:486) > at > org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:144) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:481) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:455) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:508) > at > org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) > at > org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106) > at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:62) > at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:120) > at > org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466) > at >
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391) > at > org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328) > at > org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263) > at > org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248) > at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317) > at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289) > at > org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243) > at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271) > at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255) > at > org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220) > at > org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267) > at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201) > at org.apache.hadoop.fs.shell.Command.run(Command.java:165) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:287) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) > The same command when using ozone fs is working fine. > docker exec -it fe5d39cf6eed bash > bash-4.2$ ozone fs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk > 2020-03-22 04:41:10,999 [main] INFO impl.MetricsConfig: Loaded properties > from hadoop-metrics2.properties > 2020-03-22
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429342829 ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java ## @@ -382,6 +382,21 @@ public void deleteKey(String key) throws IOException { proxy.deleteKey(volumeName, name, key); } + /** + * Deletes the given list of keys from the bucket. + * @param keyList List of the key name to be deleted. + * @throws IOException + */ + public void deleteKeys(List keyList) throws IOException { Review comment: When we delete a list of keys and a failure occurs in the middle, can we return a list of deleted keys and undeleted keys? This may not be an issue when deleting a single key, but when batch deleting it is hard to recover from failures without that information.
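The result-reporting idea in this review can be sketched with a small hypothetical result type: on a mid-batch failure the caller learns which keys were deleted and which were not, so only the remainder needs to be retried. The class and method names are illustrative assumptions, not the actual Ozone client API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical batch-delete result: tracks which keys succeeded and which
// failed so the caller can recover from a partial failure.
class DeleteKeysResult {
    private final List<String> deletedKeys = new ArrayList<>();
    private final List<String> failedKeys = new ArrayList<>();

    void markDeleted(String key) { deletedKeys.add(key); }
    void markFailed(String key) { failedKeys.add(key); }

    List<String> getDeletedKeys() {
        return Collections.unmodifiableList(deletedKeys);
    }
    List<String> getFailedKeys() {
        return Collections.unmodifiableList(failedKeys);
    }
    boolean isFullyDeleted() { return failedKeys.isEmpty(); }

    public static void main(String[] args) {
        DeleteKeysResult result = new DeleteKeysResult();
        result.markDeleted("dir/k1");
        result.markFailed("dir/k2"); // e.g. the server failed mid-batch
        System.out.println(result.getDeletedKeys() + " " + result.isFullyDeleted());
    }
}
```

A caller could then retry `getFailedKeys()` instead of re-issuing the whole batch, which is exactly the recovery property the review asks for.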
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429344837 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java ## @@ -111,51 +114,53 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager, boolean acquiredLock = false; OMClientResponse omClientResponse = null; Result result = null; +List omKeyInfoList= new ArrayList<>(); try { - // check Acl - checkKeyAcls(ozoneManager, volumeName, bucketName, keyName, - IAccessAuthorizer.ACLType.DELETE, OzoneObj.ResourceType.KEY); - - String objectKey = omMetadataManager.getOzoneKey( - volumeName, bucketName, keyName); - + if (keyNameList.size() == 0) { +throw new OMException("Key not found", KEY_NOT_FOUND); + } acquiredLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK, Review comment: This is OK with o3fs, as we mount a single bucket. In the context of ofs, where you can have multiple volumes and buckets under the root, this lock can't guarantee atomicity across the whole key-name list being deleted.
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #814: HDDS-3286. BasicOzoneFileSystem support batchDelete.
xiaoyuyao commented on a change in pull request #814: URL: https://github.com/apache/hadoop-ozone/pull/814#discussion_r429349020 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapter.java ## @@ -48,7 +48,7 @@ OzoneFSOutputStream createFile(String key, short replication, boolean createDirectory(String keyName) throws IOException; - boolean deleteObject(String keyName); Review comment: Should we define a new method called deleteObjects to be backward compatible?
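The backward-compatible shape suggested in this review can be sketched as follows: keep the existing single-key deleteObject and add a separate deleteObjects for batches. A default method lets existing adapter implementations keep compiling. The interface shape is an assumption for illustration; the real Ozone adapter interface and its implementations may differ.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Old single-key method preserved; new batch method added alongside it.
interface ClientAdapterSketch {
    boolean deleteObject(String keyName);

    // Default implementation keeps existing adapters source-compatible;
    // a real adapter could override this with a true server-side batch call.
    default boolean deleteObjects(List<String> keyNameList) {
        boolean allDeleted = true;
        for (String key : keyNameList) {
            allDeleted &= deleteObject(key);
        }
        return allDeleted;
    }
}

// Toy adapter backed by an in-memory set, just to exercise the interface.
class InMemoryAdapter implements ClientAdapterSketch {
    private final Set<String> store = new HashSet<>(Arrays.asList("a", "b"));

    @Override
    public boolean deleteObject(String keyName) {
        return store.remove(keyName);
    }

    public static void main(String[] args) {
        InMemoryAdapter adapter = new InMemoryAdapter();
        System.out.println(adapter.deleteObjects(Arrays.asList("a", "b")));
    }
}
```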
[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #856: HDDS-3475. Use transactionInfo table to persist transaction information.
bharatviswa504 commented on pull request #856: URL: https://github.com/apache/hadoop-ozone/pull/856#issuecomment-632835753 @hanishakoneru thanks for the review; addressed the review comments and also rebased on top of apache/master now that HDDS-3474 went in.
[jira] [Updated] (HDDS-3474) Create transactionInfo Table in OmMetadataManager
[ https://issues.apache.org/jira/browse/HDDS-3474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-3474: - Fix Version/s: 0.6.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Create transactionInfo Table in OmMetadataManager > - > > Key: HDDS-3474 > URL: https://issues.apache.org/jira/browse/HDDS-3474 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available > Fix For: 0.6.0 > > > This Jira is to create a transaction info table which stores the current term > and last transaction index applied to DB. > *In this Jira following will be done:* > 1. introduce a new transaction info table which stores transactionInfo. > Key = TRANSACTIONINFO > value = currentTerm-transactionIndex > 2. Add new UT's for this table. > 3. Provide utility/helper methods to parse the transaction info table value
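The "currentTerm-transactionIndex" value format described in the issue can be illustrated with a small parse helper. This is a sketch under the stated format only; the actual helper methods added by the patch live in the OM codebase and may differ in naming and error handling.

```java
// Value stored under the TRANSACTIONINFO key, e.g. "3-42" for term 3,
// transaction index 42.
class TransactionInfoSketch {
    final long term;
    final long transactionIndex;

    TransactionInfoSketch(long term, long transactionIndex) {
        this.term = term;
        this.transactionIndex = transactionIndex;
    }

    // Serialize to the "currentTerm-transactionIndex" string form.
    String toValue() {
        return term + "-" + transactionIndex;
    }

    // Parse the string form back into (term, index).
    static TransactionInfoSketch fromValue(String value) {
        int sep = value.indexOf('-');
        return new TransactionInfoSketch(
            Long.parseLong(value.substring(0, sep)),
            Long.parseLong(value.substring(sep + 1)));
    }

    public static void main(String[] args) {
        TransactionInfoSketch info = TransactionInfoSketch.fromValue("3-42");
        System.out.println(info.term + " " + info.transactionIndex);
    }
}
```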
[GitHub] [hadoop-ozone] smengcl commented on pull request #865: HDDS-2969. Implement ofs://: Add contract test
smengcl commented on pull request #865: URL: https://github.com/apache/hadoop-ozone/pull/865#issuecomment-632859113 Retest failed to download artifact. Retriggering again. ``` [ERROR] Failed to execute goal on project hadoop-hdds-common: Could not resolve dependencies for project org.apache.hadoop:hadoop-hdds-common:jar:0.6.0-SNAPSHOT: Failed to collect dependencies at org.apache.ratis:ratis-server:jar:0.6.0-490b689-SNAPSHOT: Failed to read artifact descriptor for org.apache.ratis:ratis-server:jar:0.6.0-490b689-SNAPSHOT: Could not transfer artifact org.apache.ratis:ratis-server:pom:0.6.0-490b689-SNAPSHOT from/to apache.snapshots.https (https://repository.apache.org/content/repositories/snapshots): Failed to transfer file https://repository.apache.org/content/repositories/snapshots/org/apache/ratis/ratis-server/0.6.0-490b689-SNAPSHOT/ratis-server-0.6.0-490b689-SNAPSHOT.pom with status code 503 -> [Help 1] ```
[GitHub] [hadoop-ozone] smengcl merged pull request #955: HDDS-3631. KeyInfo related changes to support fileHandle.
smengcl merged pull request #955: URL: https://github.com/apache/hadoop-ozone/pull/955
[jira] [Updated] (HDDS-3186) Introduce generic SCMRatisRequest and SCMRatisResponse
[ https://issues.apache.org/jira/browse/HDDS-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDDS-3186: -- Summary: Introduce generic SCMRatisRequest and SCMRatisResponse (was: Client requests to SCM RatisServer) > Introduce generic SCMRatisRequest and SCMRatisResponse > -- > > Key: HDDS-3186 > URL: https://issues.apache.org/jira/browse/HDDS-3186 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Li Cheng >Assignee: Nanda kumar >Priority: Major > > Refactor requests to be handled by SCM RatisServer
[jira] [Commented] (HDDS-3186) Introduce generic SCMRatisRequest and SCMRatisResponse
[ https://issues.apache.org/jira/browse/HDDS-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114347#comment-17114347 ] Nanda kumar commented on HDDS-3186: --- Uploading an initial version of the patch, will update the PR over the weekend. > Introduce generic SCMRatisRequest and SCMRatisResponse > -- > > Key: HDDS-3186 > URL: https://issues.apache.org/jira/browse/HDDS-3186 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Li Cheng >Assignee: Nanda kumar >Priority: Major > > This jira will introduce generic SCMRatisRequest and SCMRatisResponse which > will be used by all the Ratis operations inside SCM. We will also have a > generic StateMachine which will dispatch the request to registered handlers.
[GitHub] [hadoop-ozone] dineshchitlangia merged pull request #942: HDDS-2556. Handle InterruptedException in BlockOutputStream
dineshchitlangia merged pull request #942: URL: https://github.com/apache/hadoop-ozone/pull/942
[GitHub] [hadoop-ozone] dineshchitlangia commented on pull request #942: HDDS-2556. Handle InterruptedException in BlockOutputStream
dineshchitlangia commented on pull request #942: URL: https://github.com/apache/hadoop-ozone/pull/942#issuecomment-632887322 Thanks @bharatviswa504 for review. I have merged this to master.
[GitHub] [hadoop-ozone] adoroszlai commented on pull request #813: HDDS-3309. Add TimedOutTestsListener to surefire and add timeout to integration tests
adoroszlai commented on pull request #813: URL: https://github.com/apache/hadoop-ozone/pull/813#issuecomment-632898576 > I have addressed the whitespace and removed timeout from more ignored tests. Thanks, the update looks good.
[GitHub] [hadoop-ozone] nandakumar131 commented on pull request #959: HDDS-3186. Initial version.
nandakumar131 commented on pull request #959: URL: https://github.com/apache/hadoop-ozone/pull/959#issuecomment-632868565 /pending "Not ready for review"
[GitHub] [hadoop-ozone] smengcl commented on pull request #813: HDDS-3309. Add TimedOutTestsListener to surefire
smengcl commented on pull request #813: URL: https://github.com/apache/hadoop-ozone/pull/813#issuecomment-632895939 > Thanks @smengcl for continuing work on this. There are few files with whitespace-only change, and several with ignored test class but timeout being added. Please let me know if you would like a list of these. Otherwise it looks good to me. Thanks @adoroszlai for another review. I have addressed the whitespace and removed timeout from more ignored tests.
[jira] [Resolved] (HDDS-3638) Add a cat command to show the text of a file in the Ozone server
[ https://issues.apache.org/jira/browse/HDDS-3638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao resolved HDDS-3638. -- Fix Version/s: 0.6.0 Resolution: Fixed Thanks [~maobaolong] for the contribution and all for the reviews. The PR has been merged to master. > Add a cat command to show the text of a file in the Ozone server > > > Key: HDDS-3638 > URL: https://issues.apache.org/jira/browse/HDDS-3638 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone CLI >Affects Versions: 0.6.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Minor > Labels: pull-request-available > Fix For: 0.6.0 > >
[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #865: HDDS-2969. Implement ofs://: Add contract test
xiaoyuyao commented on pull request #865: URL: https://github.com/apache/hadoop-ozone/pull/865#issuecomment-632902658 @smengcl Please rebase the branch to pick up this change from master #867. That should fix the CI issue. HDDS-3373. Intermittent failure in TestOMRatisLogParser (#867) This seems to be an addendum commit in addition to the one that is currently in HDDS-2665-ofs branch. HDDS-3373. Intermittent failure in TestDnRatisLogParser and TestOMRatisLogParser (#858)
[GitHub] [hadoop-ozone] hanishakoneru commented on pull request #925: HDDS-3586. OM HA can be started with 3 isolated LEADER instead of one OM ring
hanishakoneru commented on pull request #925: URL: https://github.com/apache/hadoop-ozone/pull/925#issuecomment-632843809 Thanks @elek for the review. I was planning to add unit tests but got side-tracked. I will update the PR with unit tests today.
[jira] [Updated] (HDDS-3186) Introduce generic SCMRatisRequest and SCMRatisResponse
[ https://issues.apache.org/jira/browse/HDDS-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDDS-3186: -- Description: This jira will introduce generic SCMRatisRequest and SCMRatisResponse which will be used by all the Ratis operations inside SCM. We will also have a generic StateMachine which will dispatch the request to registered handlers. (was: Refactor requests to be handled by SCM RatisServer) > Introduce generic SCMRatisRequest and SCMRatisResponse > -- > > Key: HDDS-3186 > URL: https://issues.apache.org/jira/browse/HDDS-3186 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Li Cheng >Assignee: Nanda kumar >Priority: Major > > This jira will introduce generic SCMRatisRequest and SCMRatisResponse which > will be used by all the Ratis operations inside SCM. We will also have a > generic StateMachine which will dispatch the request to registered handlers.
[jira] [Updated] (HDDS-3186) Introduce generic SCMRatisRequest and SCMRatisResponse
[ https://issues.apache.org/jira/browse/HDDS-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-3186: - Labels: pull-request-available (was: ) > Introduce generic SCMRatisRequest and SCMRatisResponse > -- > > Key: HDDS-3186 > URL: https://issues.apache.org/jira/browse/HDDS-3186 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Li Cheng >Assignee: Nanda kumar >Priority: Major > Labels: pull-request-available > > This jira will introduce generic SCMRatisRequest and SCMRatisResponse which > will be used by all the Ratis operations inside SCM. We will also have a > generic StateMachine which will dispatch the request to registered handlers.
[GitHub] [hadoop-ozone] nandakumar131 opened a new pull request #959: HDDS-3186. Initial version.
nandakumar131 opened a new pull request #959: URL: https://github.com/apache/hadoop-ozone/pull/959 ## What changes were proposed in this pull request? (Please fill in changes proposed in this fix) ## What is the link to the Apache JIRA (Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HDDS-. Fix a typo in YYY.) Please replace this section with the link to the Apache JIRA) ## How was this patch tested? (Please explain how this patch was tested. Ex: unit tests, manual tests) (If this patch involves UI changes, please attach a screen-shot; otherwise, remove this)
[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #941: HDDS-3574. Implement ofs://: Override getTrashRoot
smengcl commented on a change in pull request #941: URL: https://github.com/apache/hadoop-ozone/pull/941#discussion_r429422602

## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OFSPath.java

```
@@ -238,4 +248,23 @@ static String getTempMountBucketNameOfCurrentUser() throws IOException {
     String username = UserGroupInformation.getCurrentUser().getUserName();
     return getTempMountBucketName(username);
   }
+
+  /**
+   * Return trash root for the given path.
+   * @return trash root for the given path
+   */
+  public Path getTrashRoot() {
+    try {
+      String username = UserGroupInformation.getCurrentUser().getUserName();
+      URI uri = new URIBuilder().setScheme(OZONE_OFS_URI_SCHEME)
+          .setHost(authority).setPath(OZONE_URI_DELIMITER + volumeName
```

Review comment: Attempted in 53c19f1f1f543978f7e381d2160e35415756a859. I also explored [`Paths.get()`](https://docs.oracle.com/javase/8/docs/api/java/nio/file/Paths.html) but it returns `java.nio.file.Path`.
[jira] [Updated] (HDDS-2556) Handle InterruptedException in BlockOutputStream
[ https://issues.apache.org/jira/browse/HDDS-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2556: Fix Version/s: 0.6.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Handle InterruptedException in BlockOutputStream > > > Key: HDDS-2556 > URL: https://issues.apache.org/jira/browse/HDDS-2556 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: newbie, pull-request-available, sonar > Fix For: 0.6.0 > > > Fix these 5 instances > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl] >
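For context, the fix pattern behind this class of Sonar findings (likely rule S2142, "InterruptedException should not be ignored") is to either rethrow the exception or restore the thread's interrupt flag. The sketch below is a minimal illustration of that pattern, not the actual BlockOutputStream code; the method and class names are hypothetical.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class InterruptExample {

  /**
   * Waits for a future's result. On interruption, restores the
   * interrupt flag instead of swallowing the exception, which is
   * what the Sonar rule flags.
   */
  public static <T> T waitForResult(CompletableFuture<T> future, T fallback) {
    try {
      return future.get();
    } catch (InterruptedException e) {
      // Restore the interrupt status so callers can observe it.
      Thread.currentThread().interrupt();
      return fallback;
    } catch (ExecutionException e) {
      throw new RuntimeException(e.getCause());
    }
  }
}
```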
[jira] [Updated] (HDDS-3309) Add TimedOutTestsListener to surefire and add timeout to integration tests
[ https://issues.apache.org/jira/browse/HDDS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDDS-3309: - Summary: Add TimedOutTestsListener to surefire and add timeout to integration tests (was: Add TimedOutTestsListener to surefire and add timeouts to integration tests) > Add TimedOutTestsListener to surefire and add timeout to integration tests > -- > > Key: HDDS-3309 > URL: https://issues.apache.org/jira/browse/HDDS-3309 > Project: Hadoop Distributed Data Store > Issue Type: Test > Components: test >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Add TimedOutTestsListener as a listener to maven-surefire-plugin like Hadoop > does: > https://github.com/apache/hadoop/blob/1189af4746919774035f5d64ccb4d2ce21905aaa/hadoop-hdfs-project/hadoop-hdfs/pom.xml#L233-L238 > (Credit: [~elek])
[jira] [Updated] (HDDS-3309) Add TimedOutTestsListener to surefire and add timeouts to integration tests
[ https://issues.apache.org/jira/browse/HDDS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDDS-3309: - Summary: Add TimedOutTestsListener to surefire and add timeouts to integration tests (was: Add TimedOutTestsListener to surefire) > Add TimedOutTestsListener to surefire and add timeouts to integration tests > --- > > Key: HDDS-3309 > URL: https://issues.apache.org/jira/browse/HDDS-3309 > Project: Hadoop Distributed Data Store > Issue Type: Test > Components: test >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Add TimedOutTestsListener as a listener to maven-surefire-plugin like Hadoop > does: > https://github.com/apache/hadoop/blob/1189af4746919774035f5d64ccb4d2ce21905aaa/hadoop-hdfs-project/hadoop-hdfs/pom.xml#L233-L238 > (Credit: [~elek])
[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #954: HDDS-3638. Add a cat command to show the text of a file in the Ozone server
xiaoyuyao merged pull request #954: URL: https://github.com/apache/hadoop-ozone/pull/954
[jira] [Updated] (HDDS-2694) HddsVolume#readVersionFile fails when reading older versions
[ https://issues.apache.org/jira/browse/HDDS-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2694: - Labels: pull-request-available upgrade (was: upgrade) > HddsVolume#readVersionFile fails when reading older versions > > > Key: HDDS-2694 > URL: https://issues.apache.org/jira/browse/HDDS-2694 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Datanode >Reporter: Attila Doroszlai >Assignee: Aravindan Vijayan >Priority: Major > Labels: pull-request-available, upgrade > > {{HddsVolume#layoutVersion}} is a version number, supposed to be used for > handling upgrades from older versions. Currently only one version is > defined. But should a new version be introduced, HddsVolume would fail to > read the older version file. This is caused by a check in {{HddsVolumeUtil}} > that only considers the latest version as valid:

{code:title=https://github.com/apache/hadoop-ozone/blob/1d56bc244995e857b842f62d3d1e544ee100bbc1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/HddsVolumeUtil.java#L137-L153}
/**
 * Returns layOutVersion if it is valid. Throws an exception otherwise.
 */
@VisibleForTesting
public static int getLayOutVersion(Properties props, File versionFile)
    throws InconsistentStorageStateException {
  String lvStr = getProperty(props, OzoneConsts.LAYOUTVERSION, versionFile);
  int lv = Integer.parseInt(lvStr);
  if (DataNodeLayoutVersion.getLatestVersion().getVersion() != lv) {
    throw new InconsistentStorageStateException("Invalid layOutVersion. " +
        "Version file has layOutVersion as " + lv + " and latest Datanode " +
        "layOutVersion is " +
        DataNodeLayoutVersion.getLatestVersion().getVersion());
  }
  return lv;
}
{code}

I think this should check whether the version number identifies a known {{DataNodeLayoutVersion}}.
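The change the reporter suggests, accepting any known layout version rather than only the latest, could look roughly like the sketch below. This is an assumption-laden illustration: the `KNOWN_VERSIONS` array is a stand-in for the real `DataNodeLayoutVersion` enumeration, and the class name is hypothetical.

```java
import java.util.Arrays;

public class LayoutVersionCheckSketch {

  // Stand-in for the set of known DataNodeLayoutVersion values; the
  // real class enumerates each version with a description.
  private static final int[] KNOWN_VERSIONS = {1, 2};

  /**
   * Returns the version if it matches ANY known layout version,
   * instead of rejecting everything but the latest one (the
   * behavior HDDS-2694 reports as an upgrade problem).
   */
  public static int getLayoutVersion(int lv) {
    if (Arrays.stream(KNOWN_VERSIONS).noneMatch(v -> v == lv)) {
      throw new IllegalStateException("Invalid layOutVersion: " + lv);
    }
    return lv;
  }
}
```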
[GitHub] [hadoop-ozone] avijayanhwx closed pull request #913: HDDS-2694. HddsVolume#readVersionFile fails when reading older versions.
avijayanhwx closed pull request #913: URL: https://github.com/apache/hadoop-ozone/pull/913
[jira] [Updated] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer
[ https://issues.apache.org/jira/browse/HDDS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2572: - Labels: newbie pull-request-available sonar (was: newbie sonar) > Handle InterruptedException in SCMSecurityProtocolServer > > > Key: HDDS-2572 > URL: https://issues.apache.org/jira/browse/HDDS-2572 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: newbie, pull-request-available, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi] > >
[GitHub] [hadoop-ozone] dineshchitlangia opened a new pull request #960: HDDS-2572. Handle InterruptedException in SCMSecurityProtocolServer
dineshchitlangia opened a new pull request #960: URL: https://github.com/apache/hadoop-ozone/pull/960 ## What changes were proposed in this pull request? Handled InterruptedException to address Sonar violations. Also resolved other sonar violations in this file. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-2572 ## How was this patch tested? Clean build, checkstyle and sonar in local.
[jira] [Updated] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer
[ https://issues.apache.org/jira/browse/HDDS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2572: Status: Patch Available (was: Open) > Handle InterruptedException in SCMSecurityProtocolServer > > > Key: HDDS-2572 > URL: https://issues.apache.org/jira/browse/HDDS-2572 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: newbie, pull-request-available, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi] > >
[jira] [Updated] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer
[ https://issues.apache.org/jira/browse/HDDS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2572: Target Version/s: 0.6.0 (was: 0.5.0) > Handle InterruptedException in SCMSecurityProtocolServer > > > Key: HDDS-2572 > URL: https://issues.apache.org/jira/browse/HDDS-2572 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: newbie, pull-request-available, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi] > >