[jira] [Resolved] (HDFS-17443) TestNameEditsConfigs does not check null before closing fileSys and cluster
[ https://issues.apache.org/jira/browse/HDFS-17443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-17443. -- Fix Version/s: 3.5.0 Assignee: ConfX Resolution: Fixed Thanks [~FuzzingTeam] for the contribution. > TestNameEditsConfigs does not check null before closing fileSys and cluster > --- > > Key: HDFS-17443 > URL: https://issues.apache.org/jira/browse/HDFS-17443 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: ConfX >Assignee: ConfX >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > > h2. What happened: > Running TestNameEditsConfigs with a misconfiguration produces a NullPointerException instead of surfacing the actual exception. > h2. Buggy code: > {code:java} > @Test > public void testNameEditsConfigsFailure() throws IOException { > ... > } finally { > fileSys.close(); // here fileSys might be null > cluster.shutdown(); > }{code} > h2. StackTrace: > {code:java} > java.lang.NullPointerException: Cannot invoke > "org.apache.hadoop.fs.FileSystem.close()" because "fileSys" is null >at > org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsConfigsFailure(TestNameEditsConfigs.java:450){code} > h2. How to reproduce: > (1) Set {{dfs.namenode.edits.dir.minimum}} to {{251625215}} > (2) Run test: > {{org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs#testNameEditsConfigsFailure}} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
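The fix pattern for this class of teardown NPE is a guarded close. A minimal, hypothetical sketch — the `safeClose` helper below is illustrative only, not the actual HDFS-17443 patch (Hadoop itself typically uses a utility such as `IOUtils.cleanupWithLogger` for this):

```java
// Sketch of null-safe test teardown; `safeClose` is a hypothetical helper.
public class SafeTeardown {
    // Returns true only when a non-null resource was closed cleanly.
    public static boolean safeClose(AutoCloseable resource) {
        if (resource == null) {
            return false; // guard: closing nothing is a no-op, not an NPE
        }
        try {
            resource.close();
            return true;
        } catch (Exception e) {
            return false; // a failed close should not mask the real test failure
        }
    }

    public static void main(String[] args) {
        // Mirrors the finally block from the test: fileSys may be null here.
        AutoCloseable fileSys = null;
        System.out.println("closed=" + safeClose(fileSys)); // prints "closed=false"
    }
}
```

In the test itself the equivalent fix is simply `if (fileSys != null) { fileSys.close(); }` and `if (cluster != null) { cluster.shutdown(); }` in the finally block, so the original exception propagates instead of the NPE.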
[jira] [Resolved] (HDFS-17216) When distcp handle the small files, the bandwidth parameter will be invalid, resulting in serious overspeed behavior
[ https://issues.apache.org/jira/browse/HDFS-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-17216. -- Fix Version/s: 3.5.0 Assignee: xiaojunxiang Resolution: Fixed Thanks [~bigdata_zoodev] for the contribution and [~hiwangzhihui] for the review. > When distcp handle the small files, the bandwidth parameter will be invalid, > resulting in serious overspeed behavior > > > Key: HDFS-17216 > URL: https://issues.apache.org/jira/browse/HDFS-17216 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 3.3.4 >Reporter: xiaojunxiang >Assignee: xiaojunxiang >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > Attachments: DiscpAnalyze.jpg > > > When distcp copies small files (file size slightly smaller than the > bandwidth), the throttler only starts throttling after 1 second, and the > throttling is applied per file. So the throttler becomes ineffective, > causing distcp to fill the cluster bandwidth and crush production traffic, > which is a terrible thing. > Also, it takes time for each file to set up the IO pipeline, so you shouldn't > test with very small files: they slow the transfer, and once bandwidth > throttling kicks in, the impact of small files on the rate is amplified.
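The per-file, one-second accounting described above can be illustrated with a small model. This is a hypothetical sketch of window-based throttling arithmetic — `requiredSleepMs` is an illustrative helper, not distcp's actual `ThrottledInputStream`: a file smaller than the per-second budget finishes before the throttler ever forces a pause, so many such files copied back to back exceed the intended rate.

```java
// Hypothetical model of per-stream bandwidth throttling.
public class ThrottleSketch {
    // How long a copy must pause so that bytesRead / elapsed stays at or
    // below bytesPerSec. Returns 0 when no pause is needed yet.
    public static long requiredSleepMs(long bytesRead, long elapsedMs, long bytesPerSec) {
        long minElapsedMs = bytesRead * 1000 / bytesPerSec;
        return Math.max(0, minElapsedMs - elapsedMs);
    }

    public static void main(String[] args) {
        long budget = 100_000_000L; // assume 100 MB/s per map task
        // A 1 MB file read in 10 ms never triggers a pause...
        System.out.println(requiredSleepMs(1_000_000, 10, budget)); // prints 0
        // ...so N such files copied within the same second move N MB
        // unthrottled, because accounting resets with each new file.
    }
}
```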
[jira] [Commented] (HDFS-17430) RecoveringBlock will skip no live replicas when get block recovery command.
[ https://issues.apache.org/jira/browse/HDFS-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829884#comment-17829884 ] Dinesh Chitlangia commented on HDFS-17430: -- Thanks [~haiyang Hu] for the improvement and [~Zander Huang] for the reviews > RecoveringBlock will skip no live replicas when get block recovery command. > --- > > Key: HDFS-17430 > URL: https://issues.apache.org/jira/browse/HDFS-17430 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > > A RecoveringBlock may fail to skip non-live replicas when the block recovery command is generated. > *Issue:* > Currently, the following scenario may lead to failure in the execution of > BlockRecoveryWorker on the datanode, resulting in the file not being closed > for a long time. > *t1.* The block_xxx_xxx has two replicas [dn1, dn2]; the dn1 machine has shut down > and is in dead state, while dn2 is live. > *t2.* Block recovery occurs. > related logs: > {code:java} > 2024-03-13 21:58:00.651 WARN hdfs.StateChange DIR* > NameSystem.internalReleaseLease: File /xxx/file has not been closed. Lease > recovery is in progress. RecoveryId = 28577373754 for block blk_xxx_xxx > {code} > *t3.* dn2 is chosen for block recovery. > dn1 is marked as stale (dead state) at this time, so the > recoveryLocations size is 1; according to the following logic, both dn1 > and dn2 will then be chosen to participate in block recovery. > DatanodeManager#getBlockRecoveryCommand > {code:java} >// Skip stale nodes during recovery > final List<DatanodeStorageInfo> recoveryLocations = > new ArrayList<>(storages.length); > final List<Integer> storageIdx = new ArrayList<>(storages.length); > for (int i = 0; i < storages.length; ++i) { >if (!storages[i].getDatanodeDescriptor().isStale(staleInterval)) { > recoveryLocations.add(storages[i]); > storageIdx.add(i); >} > } > ... 
> // If we only get 1 replica after eliminating stale nodes, choose all > // replicas for recovery and let the primary data node handle failures. > DatanodeInfo[] recoveryInfos; > if (recoveryLocations.size() > 1) { >if (recoveryLocations.size() != storages.length) { > LOG.info("Skipped stale nodes for recovery : " > + (storages.length - recoveryLocations.size())); >} >recoveryInfos = DatanodeStorageInfo.toDatanodeInfos(recoveryLocations); > } else { >// If too many replicas are stale, then choose all replicas to >// participate in block recovery. >recoveryInfos = DatanodeStorageInfo.toDatanodeInfos(storages); > } > {code} > {code:java} > 2024-03-13 21:58:01,425 INFO datanode.DataNode > (BlockRecoveryWorker.java:logRecoverBlock(563)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > BlockRecoveryWorker: NameNode at xxx:8040 calls > recoverBlock(BP-xxx:blk_xxx_xxx, > targets=[DatanodeInfoWithStorage[dn1:50010,null,null], > DatanodeInfoWithStorage[dn2:50010,null,null]], > newGenerationStamp=28577373754, newBlock=null, isStriped=false) > {code} > *t4.* When dn2 executes BlockRecoveryWorker#recover, it calls the > initReplicaRecovery operation on dn1. However, since the dn1 machine is > down at this time, the call takes a very long time to time out; > the default number of retries to establish a server connection is 45. > related logs: > {code:java} > 2024-03-13 21:59:31,518 INFO ipc.Client > (Client.java:handleConnectionTimeout(904)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > Retrying connect to server: dn1:8010. Already tried 0 time(s); maxRetries=45 > ... > 2024-03-13 23:05:35,295 INFO ipc.Client > (Client.java:handleConnectionTimeout(904)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > Retrying connect to server: dn2:8010. 
Already tried 44 time(s); maxRetries=45 > 2024-03-13 23:07:05,392 WARN protocol.InterDatanodeProtocol > (BlockRecoveryWorker.java:recover(170)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > Failed to recover block (block=BP-xxx:blk_xxx_xxx, > datanode=DatanodeInfoWithStorage[dn1:50010,null,null]) > org.apache.hadoop.net.ConnectTimeoutException: > Call From dn2 to dn1:8010 failed on socket timeout exception: > org.apache.hadoop.net.ConnectTimeoutException: 9 millis timeout while > waiting for channel to be ready for connect.ch : > java.nio.channels.SocketChannel[connection-pending remote=dn:8010]; For more > details see: http://wiki.apache.org/hadoop/SocketTimeout > at
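The direction of the improvement, as the title and description suggest, is to prefer the live replicas even when only one remains, rather than falling back to every replica (including dead ones). A simplified, hypothetical sketch of that selection — booleans stand in for the per-storage staleness check, and `chooseRecoveryTargets` is illustrative, not the committed patch:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of the recovery-target selection logic.
public class RecoverySelection {
    // Returns the indices of replicas to use for recovery: all live
    // (non-stale) replicas when at least one exists, otherwise every
    // replica as a last resort. The key change from the quoted code is
    // falling back only when *no* live replica remains, not when fewer
    // than two remain.
    public static List<Integer> chooseRecoveryTargets(boolean[] stale) {
        List<Integer> live = new ArrayList<>();
        for (int i = 0; i < stale.length; i++) {
            if (!stale[i]) {
                live.add(i);
            }
        }
        if (!live.isEmpty()) {
            return live; // use only live replicas, even if just one
        }
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < stale.length; i++) {
            all.add(i);
        }
        return all; // every replica is stale: try them all
    }

    public static void main(String[] args) {
        // dn1 (index 0) stale, dn2 (index 1) live: only dn2 is chosen,
        // so the worker never blocks retrying the dead dn1.
        System.out.println(chooseRecoveryTargets(new boolean[]{true, false})); // prints [1]
    }
}
```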
[jira] [Resolved] (HDFS-17430) RecoveringBlock will skip no live replicas when get block recovery command.
[ https://issues.apache.org/jira/browse/HDFS-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-17430. -- Fix Version/s: 3.5.0 Resolution: Fixed > RecoveringBlock will skip no live replicas when get block recovery command. > --- > > Key: HDFS-17430 > URL: https://issues.apache.org/jira/browse/HDFS-17430 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > > A RecoveringBlock may fail to skip non-live replicas when the block recovery command is generated. > *Issue:* > Currently, the following scenario may lead to failure in the execution of > BlockRecoveryWorker on the datanode, resulting in the file not being closed > for a long time. > *t1.* The block_xxx_xxx has two replicas [dn1, dn2]; the dn1 machine has shut down > and is in dead state, while dn2 is live. > *t2.* Block recovery occurs. > related logs: > {code:java} > 2024-03-13 21:58:00.651 WARN hdfs.StateChange DIR* > NameSystem.internalReleaseLease: File /xxx/file has not been closed. Lease > recovery is in progress. RecoveryId = 28577373754 for block blk_xxx_xxx > {code} > *t3.* dn2 is chosen for block recovery. > dn1 is marked as stale (dead state) at this time, so the > recoveryLocations size is 1; according to the following logic, both dn1 > and dn2 will then be chosen to participate in block recovery. > DatanodeManager#getBlockRecoveryCommand > {code:java} >// Skip stale nodes during recovery > final List<DatanodeStorageInfo> recoveryLocations = > new ArrayList<>(storages.length); > final List<Integer> storageIdx = new ArrayList<>(storages.length); > for (int i = 0; i < storages.length; ++i) { >if (!storages[i].getDatanodeDescriptor().isStale(staleInterval)) { > recoveryLocations.add(storages[i]); > storageIdx.add(i); >} > } > ... > // If we only get 1 replica after eliminating stale nodes, choose all > // replicas for recovery and let the primary data node handle failures. 
> DatanodeInfo[] recoveryInfos; > if (recoveryLocations.size() > 1) { >if (recoveryLocations.size() != storages.length) { > LOG.info("Skipped stale nodes for recovery : " > + (storages.length - recoveryLocations.size())); >} >recoveryInfos = DatanodeStorageInfo.toDatanodeInfos(recoveryLocations); > } else { >// If too many replicas are stale, then choose all replicas to >// participate in block recovery. >recoveryInfos = DatanodeStorageInfo.toDatanodeInfos(storages); > } > {code} > {code:java} > 2024-03-13 21:58:01,425 INFO datanode.DataNode > (BlockRecoveryWorker.java:logRecoverBlock(563)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > BlockRecoveryWorker: NameNode at xxx:8040 calls > recoverBlock(BP-xxx:blk_xxx_xxx, > targets=[DatanodeInfoWithStorage[dn1:50010,null,null], > DatanodeInfoWithStorage[dn2:50010,null,null]], > newGenerationStamp=28577373754, newBlock=null, isStriped=false) > {code} > *t4.* When dn2 executes BlockRecoveryWorker#recover, it calls the > initReplicaRecovery operation on dn1. However, since the dn1 machine is > down at this time, the call takes a very long time to time out; > the default number of retries to establish a server connection is 45. > related logs: > {code:java} > 2024-03-13 21:59:31,518 INFO ipc.Client > (Client.java:handleConnectionTimeout(904)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > Retrying connect to server: dn1:8010. Already tried 0 time(s); maxRetries=45 > ... > 2024-03-13 23:05:35,295 INFO ipc.Client > (Client.java:handleConnectionTimeout(904)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > Retrying connect to server: dn2:8010. 
Already tried 44 time(s); maxRetries=45 > 2024-03-13 23:07:05,392 WARN protocol.InterDatanodeProtocol > (BlockRecoveryWorker.java:recover(170)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > Failed to recover block (block=BP-xxx:blk_xxx_xxx, > datanode=DatanodeInfoWithStorage[dn1:50010,null,null]) > org.apache.hadoop.net.ConnectTimeoutException: > Call From dn2 to dn1:8010 failed on socket timeout exception: > org.apache.hadoop.net.ConnectTimeoutException: 9 millis timeout while > waiting for channel to be ready for connect.ch : > java.nio.channels.SocketChannel[connection-pending remote=dn:8010]; For more > details see: http://wiki.apache.org/hadoop/SocketTimeout > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at >
[jira] [Resolved] (HDFS-17433) metrics sumOfActorCommandQueueLength should only record valid commands
[ https://issues.apache.org/jira/browse/HDFS-17433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-17433. -- Fix Version/s: 3.5.0 Resolution: Fixed > metrics sumOfActorCommandQueueLength should only record valid commands > -- > > Key: HDFS-17433 > URL: https://issues.apache.org/jira/browse/HDFS-17433 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.4.0 >Reporter: farmmamba >Assignee: farmmamba >Priority: Minor > Labels: pull-request-available > Fix For: 3.5.0 > >
[jira] [Resolved] (HDFS-17431) Fix log format for BlockRecoveryWorker#recoverBlocks
[ https://issues.apache.org/jira/browse/HDFS-17431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-17431. -- Fix Version/s: 3.4.1 Resolution: Fixed Thanks [~haiyang Hu] for the improvement. > Fix log format for BlockRecoveryWorker#recoverBlocks > > > Key: HDFS-17431 > URL: https://issues.apache.org/jira/browse/HDFS-17431 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.1 > > > Fix log format for BlockRecoveryWorker#recoverBlocks > > As seen in PR [https://github.com/apache/hadoop/pull/6635] the additional {} > is moot. > > 2024-03-13 23:07:05,401 WARN datanode.DataNode > (BlockRecoveryWorker.java:run(623)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > recover Block: RecoveringBlock\{BP-xxx:blk_xxx_xxx; getBlockSize()=0; > corrupt=false; offset=-1; locs=[DatanodeInfoWithStorage[dn1:50010,null,null], > DatanodeInfoWithStorage[dn2:50010,null,null]]; cachedLocs=[]} > FAILED: > *{}* > org.apache.hadoop.ipc.RemoteException(java.io.IOException): The recovery id > 28577373754 does not match current recovery id 28578772548 for block > BP-xxx:blk_xxx_xxx > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:4129) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:1184) > at
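The "additional {} is moot" remark refers to SLF4J-style placeholder templates: a `{}` with no matching argument is emitted literally, which is the stray `FAILED: {}` visible in the log above. A small hypothetical helper makes the mismatch easy to see — `placeholderCount` is illustrative, not a real SLF4J API:

```java
// Illustrative check for SLF4J-style "{}" placeholder mismatches.
public class PlaceholderCheck {
    // Counts "{}" placeholders in a message template. When a template has
    // more placeholders than arguments, the extras are printed literally --
    // the stray "{}" that this issue removes from the log statement.
    public static int placeholderCount(String template) {
        int count = 0;
        int i = template.indexOf("{}");
        while (i >= 0) {
            count++;
            i = template.indexOf("{}", i + 2);
        }
        return count;
    }

    public static void main(String[] args) {
        // Two placeholders but (in the buggy statement) only one argument.
        System.out.println(placeholderCount("recover Block: {} FAILED: {}")); // prints 2
    }
}
```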
[jira] [Updated] (HDFS-17431) Fix log format for BlockRecoveryWorker#recoverBlocks
[ https://issues.apache.org/jira/browse/HDFS-17431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDFS-17431: - Description: Fix log format for BlockRecoveryWorker#recoverBlocks As seen in PR [https://github.com/apache/hadoop/pull/6635] the additional {} is moot. 2024-03-13 23:07:05,401 WARN datanode.DataNode (BlockRecoveryWorker.java:run(623)) [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - recover Block: RecoveringBlock\{BP-xxx:blk_xxx_xxx; getBlockSize()=0; corrupt=false; offset=-1; locs=[DatanodeInfoWithStorage[dn1:50010,null,null], DatanodeInfoWithStorage[dn2:50010,null,null]]; cachedLocs=[]} FAILED: *{}* org.apache.hadoop.ipc.RemoteException(java.io.IOException): The recovery id 28577373754 does not match current recovery id 28578772548 for block BP-xxx:blk_xxx_xxx at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:4129) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:1184) at was:Fix log format for BlockRecoveryWorker#recoverBlocks > Fix log format for BlockRecoveryWorker#recoverBlocks > > > Key: HDFS-17431 > URL: https://issues.apache.org/jira/browse/HDFS-17431 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > Labels: pull-request-available > > Fix log format for BlockRecoveryWorker#recoverBlocks > > As seen in PR [https://github.com/apache/hadoop/pull/6635] the additional {} > is moot. 
> > 2024-03-13 23:07:05,401 WARN datanode.DataNode > (BlockRecoveryWorker.java:run(623)) > [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - > recover Block: RecoveringBlock\{BP-xxx:blk_xxx_xxx; getBlockSize()=0; > corrupt=false; offset=-1; locs=[DatanodeInfoWithStorage[dn1:50010,null,null], > DatanodeInfoWithStorage[dn2:50010,null,null]]; cachedLocs=[]} > FAILED: > *{}* > org.apache.hadoop.ipc.RemoteException(java.io.IOException): The recovery id > 28577373754 does not match current recovery id 28578772548 for block > BP-xxx:blk_xxx_xxx > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:4129) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:1184) > at
[jira] [Commented] (HDFS-16556) Fix typos in distcp
[ https://issues.apache.org/jira/browse/HDFS-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17526608#comment-17526608 ] Dinesh Chitlangia commented on HDFS-16556: -- Thanks [~philipse] for the improvement. > Fix typos in distcp > --- > > Key: HDFS-16556 > URL: https://issues.apache.org/jira/browse/HDFS-16556 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.3.2 >Reporter: guophilipse >Assignee: guophilipse >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Fix typos in distcp
[jira] [Resolved] (HDFS-16556) Fix typos in distcp
[ https://issues.apache.org/jira/browse/HDFS-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-16556. -- Fix Version/s: 3.3.3 Resolution: Fixed > Fix typos in distcp > --- > > Key: HDFS-16556 > URL: https://issues.apache.org/jira/browse/HDFS-16556 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.3.2 >Reporter: guophilipse >Assignee: guophilipse >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Fix typos in distcp
[jira] [Updated] (HDFS-16556) Fix typos in distcp
[ https://issues.apache.org/jira/browse/HDFS-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDFS-16556: - Summary: Fix typos in distcp (was: Fixtypos in distcp) > Fix typos in distcp > --- > > Key: HDFS-16556 > URL: https://issues.apache.org/jira/browse/HDFS-16556 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.3.2 >Reporter: guophilipse >Priority: Minor > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Fix typos in distcp
[jira] [Updated] (HDFS-16556) Fixtypos in distcp
[ https://issues.apache.org/jira/browse/HDFS-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDFS-16556: - Description: Fix typos in distcp (was: Fixtypos in distcp) > Fixtypos in distcp > -- > > Key: HDFS-16556 > URL: https://issues.apache.org/jira/browse/HDFS-16556 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.3.2 >Reporter: guophilipse >Priority: Minor > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Fix typos in distcp
[jira] [Assigned] (HDFS-16556) Fix typos in distcp
[ https://issues.apache.org/jira/browse/HDFS-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia reassigned HDFS-16556: Assignee: guophilipse > Fix typos in distcp > --- > > Key: HDFS-16556 > URL: https://issues.apache.org/jira/browse/HDFS-16556 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.3.2 >Reporter: guophilipse >Assignee: guophilipse >Priority: Minor > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Fix typos in distcp
[jira] [Commented] (HDFS-15873) Add namenode address in logs for block report
[ https://issues.apache.org/jira/browse/HDFS-15873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17297523#comment-17297523 ] Dinesh Chitlangia commented on HDFS-15873: -- Thanks [~tomscut] for the contribution and [~ayushtkn] for the reviews. > Add namenode address in logs for block report > - > > Key: HDFS-15873 > URL: https://issues.apache.org/jira/browse/HDFS-15873 > Project: Hadoop HDFS > Issue Type: Wish > Components: datanode, hdfs >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 50m > Remaining Estimate: 0h > > Add namenode address in logs for block report. It's easier to track when the > block report was sent to ANN or SNN.
[jira] [Resolved] (HDFS-15873) Add namenode address in logs for block report
[ https://issues.apache.org/jira/browse/HDFS-15873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-15873. -- Fix Version/s: 3.3.1 Resolution: Fixed > Add namenode address in logs for block report > - > > Key: HDFS-15873 > URL: https://issues.apache.org/jira/browse/HDFS-15873 > Project: Hadoop HDFS > Issue Type: Wish > Components: datanode, hdfs >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 50m > Remaining Estimate: 0h > > Add namenode address in logs for block report. It's easier to track when the > block report was sent to ANN or SNN.
[jira] [Resolved] (HDFS-15834) Remove the usage of org.apache.log4j.Level
[ https://issues.apache.org/jira/browse/HDFS-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-15834. -- Fix Version/s: 3.3.1 Resolution: Fixed Thanks [~aajisaka] for the contribution > Remove the usage of org.apache.log4j.Level > -- > > Key: HDFS-15834 > URL: https://issues.apache.org/jira/browse/HDFS-15834 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 50m > Remaining Estimate: 0h > > Replace org.apache.log4j.Level with org.slf4j.event.Level in hadoop-hdfs.
[jira] [Resolved] (HDFS-15814) Make some parameters configurable for DataNodeDiskMetrics
[ https://issues.apache.org/jira/browse/HDFS-15814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDFS-15814. -- Fix Version/s: 3.3.1 Resolution: Fixed Thanks [~tomscut] for the improvement and [~arp] for the review. > Make some parameters configurable for DataNodeDiskMetrics > - > > Key: HDFS-15814 > URL: https://issues.apache.org/jira/browse/HDFS-15814 > Project: Hadoop HDFS > Issue Type: Wish > Components: hdfs >Reporter: tomscut >Assignee: tomscut >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > For ease of use, especially for small clusters, we can make some > parameters (MIN_OUTLIER_DETECTION_DISKS, SLOW_DISK_LOW_THRESHOLD_MS) > configurable.
[jira] [Assigned] (HDFS-15814) Make some parameters configurable for DataNodeDiskMetrics
[ https://issues.apache.org/jira/browse/HDFS-15814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia reassigned HDFS-15814: Assignee: tomscut > Make some parameters configurable for DataNodeDiskMetrics > - > > Key: HDFS-15814 > URL: https://issues.apache.org/jira/browse/HDFS-15814 > Project: Hadoop HDFS > Issue Type: Wish > Components: hdfs >Reporter: tomscut >Assignee: tomscut >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > For ease of use, especially for small clusters, we can make some > parameters (MIN_OUTLIER_DETECTION_DISKS, SLOW_DISK_LOW_THRESHOLD_MS) > configurable.
[jira] [Commented] (HDFS-15253) Set default throttle value on dfs.image.transfer.bandwidthPerSec
[ https://issues.apache.org/jira/browse/HDFS-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072267#comment-17072267 ] Dinesh Chitlangia commented on HDFS-15253: -- [~kpalanisamy] - Thanks for filing this jira. Yes, restricting the bandwidth to 50 MB/s makes sense. When I work with customers who are running HDFS at scale, that is the first thing I recommend to them. Regarding dfs.image.compress, I have fairly little experience and have not seen much benefit with it other than reduced file size. dfs.namenode.checkpoint.txns can vary based on the cluster usage. So no matter what value is set as default, there will always be a large set of users who would still have to tune it based on their cluster usage. So I would recommend we leave it at the default 1M. > Set default throttle value on dfs.image.transfer.bandwidthPerSec > > > Key: HDFS-15253 > URL: https://issues.apache.org/jira/browse/HDFS-15253 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Karthik Palanisamy >Assignee: Karthik Palanisamy >Priority: Major > > The default value of dfs.image.transfer.bandwidthPerSec is 0, so it can > use the maximum available bandwidth for fsimage transfers during checkpoint. I > think we should throttle this. Many users have experienced namenode failover > when transferring a large image along with fsimage replication on > dfs.namenode.name.dir, eg. >25Gb. > Thought to set, > dfs.image.transfer.bandwidthPerSec=52428800. (50 MB/s) > dfs.namenode.checkpoint.txns=200 (Default is 1M, good to avoid frequent > checkpoints. However, the default checkpoint runs every 6 hours once) >
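As a hedged illustration only, the throttle suggested above could be expressed in `hdfs-site.xml` as follows. The value is taken from the comment above (52428800 bytes/s = 50 MB/s), not from any committed default:

```xml
<!-- Illustrative hdfs-site.xml fragment, mirroring the suggestion above.
     52428800 bytes/s = 50 MB/s for fsimage transfers during checkpoint. -->
<property>
  <name>dfs.image.transfer.bandwidthPerSec</name>
  <value>52428800</value>
</property>
```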
[jira] [Resolved] (HDDS-2538) Sonar: Fix issues found in DatabaseHelper in ozone audit parser package
[ https://issues.apache.org/jira/browse/HDDS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2538. - Target Version/s: 0.5.0 Resolution: Fixed > Sonar: Fix issues found in DatabaseHelper in ozone audit parser package > --- > > Key: HDDS-2538 > URL: https://issues.apache.org/jira/browse/HDDS-2538 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Tools >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-dWKcVY8lQ4Zr39=false=BLOCKER=BUG
[jira] [Updated] (HDDS-2594) S3 RangeReads failing with NumberFormatException
[ https://issues.apache.org/jira/browse/HDDS-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2594: Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) > S3 RangeReads failing with NumberFormatException > > > Key: HDDS-2594 > URL: https://issues.apache.org/jira/browse/HDDS-2594 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > > {code:java} > 2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler: > javax.servlet.ServletException: java.lang.NumberFormatException: For input > string: "3977248768" > at > org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432) > at > org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:539) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:748) > {code} >
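The failing input 3977248768 is larger than Integer.MAX_VALUE (2147483647), which is consistent with a byte offset from an HTTP Range header being parsed as an `int` instead of a `long`. A minimal sketch — the helper methods are hypothetical, and the actual s3g parsing code is not shown in this report:

```java
// Illustrates why "3977248768" overflows int parsing but fits in a long.
public class RangeParse {
    // Byte offsets in Range headers can exceed 2 GiB, so 64-bit parsing
    // is required; this is the assumed direction of the fix.
    public static long parseRangeOffset(String s) {
        return Long.parseLong(s);
    }

    // True when the value fits in a 32-bit signed int.
    public static boolean fitsInInt(String s) {
        try {
            Integer.parseInt(s);
            return true;
        } catch (NumberFormatException e) {
            return false; // exactly the exception seen in the stack trace
        }
    }

    public static void main(String[] args) {
        System.out.println(fitsInInt("3977248768"));        // prints false
        System.out.println(parseRangeOffset("3977248768")); // prints 3977248768
    }
}
```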
[jira] [Resolved] (HDDS-2597) Remove toString() as log calls it implicitly
[ https://issues.apache.org/jira/browse/HDDS-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2597. - Fix Version/s: 0.5.0 Resolution: Fixed > Remove toString() as log calls it implicitly > > > Key: HDDS-2597 > URL: https://issues.apache.org/jira/browse/HDDS-2597 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > No need to call "toString()" method as formatting and string conversion is > done by the Formatter. > > Related to > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWb=AW5md_AVKcVY8lQ4ZsWb] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
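The change behind this ticket is mechanical: SLF4J-style "{}" substitution runs each argument through String.valueOf, which invokes toString() itself (and maps null to "null" where an explicit call would throw). A toy formatter standing in for SLF4J's substitution — not the real logging code — illustrates why the explicit call is redundant:

```java
public class LogFormatting {
    // Stand-in for SLF4J's "{}" substitution: String.valueOf(arg)
    // calls the argument's toString() automatically, and converts
    // null to "null" instead of throwing NullPointerException.
    static String format(String msg, Object arg) {
        return msg.replace("{}", String.valueOf(arg));
    }

    public static void main(String[] args) {
        Object pipeline = new Object() {
            @Override public String toString() { return "Pipeline[id=1]"; }
        };
        // Explicit toString() vs. letting the formatter convert:
        System.out.println(format("created {}", pipeline.toString())
                .equals(format("created {}", pipeline))); // true
        // With null, only the formatter version survives:
        System.out.println(format("created {}", null));   // created null
    }
}
```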
[jira] [Updated] (HDDS-2597) Remove toString() as log calls it implicitly
[ https://issues.apache.org/jira/browse/HDDS-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2597: Summary: Remove toString() as log calls it implicitly (was: No need to call "toString()" method as formatting and string conversion is done by the Formatter.) > Remove toString() as log calls it implicitly > > > Key: HDDS-2597 > URL: https://issues.apache.org/jira/browse/HDDS-2597 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > No need to call "toString()" method as formatting and string conversion is > done by the Formatter. > > Related to > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWb=AW5md_AVKcVY8lQ4ZsWb] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2597) No need to call "toString()" method as formatting and string conversion is done by the Formatter.
[ https://issues.apache.org/jira/browse/HDDS-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2597: Description: No need to call "toString()" method as formatting and string conversion is done by the Formatter. Related to [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWb=AW5md_AVKcVY8lQ4ZsWb] was:Related to https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWb=AW5md_AVKcVY8lQ4ZsWb > No need to call "toString()" method as formatting and string conversion is > done by the Formatter. > - > > Key: HDDS-2597 > URL: https://issues.apache.org/jira/browse/HDDS-2597 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > No need to call "toString()" method as formatting and string conversion is > done by the Formatter. > > Related to > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWb=AW5md_AVKcVY8lQ4ZsWb] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2596) Remove unused private method "createPipeline"
[ https://issues.apache.org/jira/browse/HDDS-2596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2596: Summary: Remove unused private method "createPipeline" (was: Remove this unused private "createPipeline" method) > Remove unused private method "createPipeline" > - > > Key: HDDS-2596 > URL: https://issues.apache.org/jira/browse/HDDS-2596 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWe=AW5md_AVKcVY8lQ4ZsWe] > and > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWW=AW5md_AVKcVY8lQ4ZsWW -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDDS-2598) Remove unused private field "LOG"
[ https://issues.apache.org/jira/browse/HDDS-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2598. - Fix Version/s: 0.5.0 Resolution: Fixed > Remove unused private field "LOG" > - > > Key: HDDS-2598 > URL: https://issues.apache.org/jira/browse/HDDS-2598 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWS=AW5md_APKcVY8lQ4ZsWS -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2598) Remove unused private field "LOG"
[ https://issues.apache.org/jira/browse/HDDS-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2598: Summary: Remove unused private field "LOG" (was: Remove this unused "LOG" private field.) > Remove unused private field "LOG" > - > > Key: HDDS-2598 > URL: https://issues.apache.org/jira/browse/HDDS-2598 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWS=AW5md_APKcVY8lQ4ZsWS -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2522) Fix TestSecureOzoneCluster
[ https://issues.apache.org/jira/browse/HDDS-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2522: Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~adoroszlai] for reporting and fixing the issue, thanks [~xyao] for reviews. > Fix TestSecureOzoneCluster > -- > > Key: HDDS-2522 > URL: https://issues.apache.org/jira/browse/HDDS-2522 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: test >Affects Versions: 0.5.0 >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > TestSecureOzoneCluster is failing with {{failure to login}}. > {code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt} > --- > Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster > --- > Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< > FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster > testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster) Time > elapsed: 2.474 s <<< ERROR! 
> org.apache.hadoop.security.KerberosAuthException: > failure to login: for principal: > scm/pr-hdds-2291-5997d-4279494...@example.com from keytab > /workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab > javax.security.auth.login.LoginException: Unable to obtain password from user > at > org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008) > at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315) > at > org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508) > at > org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:254) > at > org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:212) > at > org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600) > at > org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91) > at > org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list
[ https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2523: Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) [~szetszwo] thanks for reporting the issue and sharing debug information. [~adoroszlai] thanks for fixing the issue > BufferPool.releaseBuffer may release a buffer different than the head of the > list > - > > Key: HDDS-2523 > URL: https://issues.apache.org/jira/browse/HDDS-2523 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Reporter: Tsz-wo Sze >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: a.patch > > Time Spent: 20m > Remaining Estimate: 0h > > {code} > //BufferPool > public void releaseBuffer(ByteBuffer byteBuffer) { > // always remove from head of the list and append at last > ByteBuffer buffer = bufferList.remove(0); > // Ensure the buffer to be removed is always at the head of the list. > Preconditions.checkArgument(buffer.equals(byteBuffer)); > buffer.clear(); > bufferList.add(buffer); > Preconditions.checkArgument(currentBufferIndex >= 0); > currentBufferIndex--; > } > {code} > In the code above, it expects buffer and byteBuffer are the same object, i.e. > buffer == byteBuffer. However the precondition is checking > buffer.equals(byteBuffer). Unfortunately, both buffer and byteBuffer have > remaining() == 0 so that equals(..) returns true and the precondition does > not catch the bug. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
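The masking described above is easy to reproduce: ByteBuffer.equals compares only the remaining elements, so any two fully-consumed buffers compare equal even when they are different objects, which is why checkArgument(buffer.equals(byteBuffer)) let the bug through where an identity check (==) would have caught it. A small standalone demonstration (not the Ozone BufferPool code):

```java
import java.nio.ByteBuffer;

public class BufferIdentity {
    public static void main(String[] args) {
        ByteBuffer a = ByteBuffer.allocate(4);
        ByteBuffer b = ByteBuffer.allocate(8);
        a.position(a.limit()); // remaining() == 0 for both buffers
        b.position(b.limit());
        // equals() sees zero remaining elements on both sides, so it
        // reports equality even for distinct, differently-sized buffers.
        System.out.println(a.equals(b)); // true
        System.out.println(a == b);      // false: different objects
    }
}
```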
[jira] [Updated] (HDDS-2493) Sonar: Locking on a parameter in NetUtils.removeOutscope
[ https://issues.apache.org/jira/browse/HDDS-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2493: Affects Version/s: (was: 0.5.0) 0.4.1 > Sonar: Locking on a parameter in NetUtils.removeOutscope > > > Key: HDDS-2493 > URL: https://issues.apache.org/jira/browse/HDDS-2493 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.4.1 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2hKcVY8lQ4ZsNd=false=BUG -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
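Sonar flags synchronized blocks on method parameters because the monitor is chosen by the caller: two call sites passing different lock objects serialize nothing between them. A sketch of the anti-pattern and the usual fix — the method names and signatures here are illustrative, not the real NetUtils.removeOutscope:

```java
import java.util.ArrayList;
import java.util.List;

public class LockChoice {
    // Anti-pattern: the monitor is whatever the caller passes in,
    // so threads holding different "lock" objects can mutate the
    // same list concurrently.
    static void removeOutOfScopeBad(List<String> nodes, Object lock) {
        synchronized (lock) {
            nodes.removeIf(n -> n.startsWith("/out"));
        }
    }

    // Fix: synchronize on a monitor the class owns, so every caller
    // contends on the same lock.
    private static final Object LOCK = new Object();

    static void removeOutOfScope(List<String> nodes) {
        synchronized (LOCK) {
            nodes.removeIf(n -> n.startsWith("/out"));
        }
    }

    public static void main(String[] args) {
        List<String> nodes = new ArrayList<>(List.of("/rack1/n1", "/out/n2"));
        removeOutOfScope(nodes);
        System.out.println(nodes); // [/rack1/n1]
    }
}
```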
[jira] [Resolved] (HDDS-2498) Sonar: Fix issues found in StorageContainerManager class
[ https://issues.apache.org/jira/browse/HDDS-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2498. - Target Version/s: 0.5.0 Resolution: Fixed Thanks [~swagle] for the contribution. When filing new Jira, let's keep `Fixed Version` empty. > Sonar: Fix issues found in StorageContainerManager class > > > Key: HDDS-2498 > URL: https://issues.apache.org/jira/browse/HDDS-2498 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?fileUuids=AW5md-HfKcVY8lQ4ZrcG=hadoop-ozone=AW5md-tIKcVY8lQ4ZsEr=false -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2580) Ensure resources are closed in Get/PutKeyHandler
[ https://issues.apache.org/jira/browse/HDDS-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2580: Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Ensure resources are closed in Get/PutKeyHandler > > > Key: HDDS-2580 > URL: https://issues.apache.org/jira/browse/HDDS-2580 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Dinesh Chitlangia >Assignee: Attila Doroszlai >Priority: Major > Labels: newbie, pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Use try-with-resources or close this "FileOutputStream" in a "finally" clause. > GetKeyHandler: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6HHKTfdBVcJdcVFsvC=AW6HHKTfdBVcJdcVFsvC] > > Use try-with-resources or close this "OzoneOutputStream" in a "finally" > clause. > PutKeyHandler: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6HHKRodBVcJdcVFsvB=AW6HHKRodBVcJdcVFsvB] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
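The fix Sonar asks for is the standard try-with-resources pattern. The handlers' actual stream types (FileOutputStream in GetKeyHandler, OzoneOutputStream in PutKeyHandler) are stood in for by a plain FileOutputStream so the sketch is self-contained:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseOnAllPaths {
    // Leak-prone: if write() throws, close() never runs and the
    // file descriptor is held until the stream is finalized.
    static void writeLeaky(Path path, byte[] data) throws IOException {
        OutputStream out = new FileOutputStream(path.toFile());
        out.write(data);
        out.close();
    }

    // try-with-resources closes the stream on success and on exception.
    static void writeSafe(Path path, byte[] data) throws IOException {
        try (OutputStream out = new FileOutputStream(path.toFile())) {
            out.write(data);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("key", ".bin");
        writeSafe(tmp, new byte[]{1, 2, 3});
        System.out.println(Files.size(tmp)); // 3
        Files.deleteIfExists(tmp);
    }
}
```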
[jira] [Resolved] (HDDS-2493) Sonar: Locking on a parameter in NetUtils.removeOutscope
[ https://issues.apache.org/jira/browse/HDDS-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2493. - Resolution: Fixed > Sonar: Locking on a parameter in NetUtils.removeOutscope > > > Key: HDDS-2493 > URL: https://issues.apache.org/jira/browse/HDDS-2493 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2hKcVY8lQ4ZsNd=false=BUG -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2581) Use Java Configs for OM HA
[ https://issues.apache.org/jira/browse/HDDS-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2581: Summary: Use Java Configs for OM HA (was: Make OM Ha config to use Java Configs) > Use Java Configs for OM HA > -- > > Key: HDDS-2581 > URL: https://issues.apache.org/jira/browse/HDDS-2581 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Priority: Major > Labels: newbie > > This Jira was created based on comments from [~aengineer] during the HDDS-2536 > review. > Can we please use Java Configs instead of the old-style config when adding a > config? > > This Jira is to migrate all OM HA configs to the new style (Java-config-based > approach). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2467) Allow running Freon validators with limited memory
[ https://issues.apache.org/jira/browse/HDDS-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2467: Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Allow running Freon validators with limited memory > -- > > Key: HDDS-2467 > URL: https://issues.apache.org/jira/browse/HDDS-2467 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: freon >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Freon validators read each item to be validated completely into a {{byte[]}} > buffer. This allows timing only the read (and buffer allocation), but not > the subsequent digest calculation. However, it also means that memory > required for running the validators is proportional to key size. > I propose to add a command-line flag to allow calculating the digest while > reading the input stream. This changes timing results a bit, since values > will include the time required for digest calculation. On the other hand, > Freon will be able to validate huge keys with limited memory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
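The proposed flag maps onto a standard JDK facility: java.security.DigestInputStream updates a digest as bytes stream through, so memory stays constant regardless of key size. A sketch of the technique under that assumption — this is not Freon's actual implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class StreamingDigest {
    // O(1) memory: the digest accumulates as bytes stream through,
    // instead of buffering the whole key into a byte[].
    static byte[] digestWhileReading(InputStream in)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (DigestInputStream dis = new DigestInputStream(in, md)) {
            byte[] chunk = new byte[4096];
            while (dis.read(chunk) != -1) {
                // discard the bytes; only the digest is kept
            }
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[1 << 20]; // 1 MiB stand-in for a huge key
        byte[] streamed = digestWhileReading(new ByteArrayInputStream(key));
        byte[] buffered = MessageDigest.getInstance("MD5").digest(key);
        System.out.println(MessageDigest.isEqual(streamed, buffered)); // true
    }
}
```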
[jira] [Updated] (HDDS-2516) Code cleanup in EventQueue
[ https://issues.apache.org/jira/browse/HDDS-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2516: Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Code cleanup in EventQueue > -- > > Key: HDDS-2516 > URL: https://issues.apache.org/jira/browse/HDDS-2516 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > https://sonarcloud.io/project/issues?fileUuids=AW5md-HgKcVY8lQ4ZrfB=hadoop-ozone=false -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2535) TestOzoneManagerDoubleBufferWithOMResponse is flaky
[ https://issues.apache.org/jira/browse/HDDS-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2535: Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) > TestOzoneManagerDoubleBufferWithOMResponse is flaky > --- > > Key: HDDS-2535 > URL: https://issues.apache.org/jira/browse/HDDS-2535 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Marton Elek >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Flakiness can be reproduced locally. Usually it passes, but when I started to > run it 100 times parallel with high cpu load it failed with the 3rd attempt > (timed out) > {code:java} > --- > Test set: > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse > --- > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 503.297 s <<< > FAILURE! - in > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse > testDoubleBuffer(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse) > Time elapsed: 500.122 s <<< ERROR! 
> java.lang.Exception: test timed out after 50 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382) > at > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:385) > at > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:129) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} > Independent of the flakiness, I think a test whose timeout is 8 minutes > and that starts 1000 threads to insert 500 buckets each (500,000 buckets altogether) > is more of an integration test, and it would be better to move the slowest > part to the integration-test project. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2535) TestOzoneManagerDoubleBufferWithOMResponse is flaky
[ https://issues.apache.org/jira/browse/HDDS-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978078#comment-16978078 ] Dinesh Chitlangia commented on HDDS-2535: - [~elek] Thanks for reporting the flaky test, [~bharat] Thanks for the contribution. > TestOzoneManagerDoubleBufferWithOMResponse is flaky > --- > > Key: HDDS-2535 > URL: https://issues.apache.org/jira/browse/HDDS-2535 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Marton Elek >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Flakiness can be reproduced locally. Usually it passes, but when I started to > run it 100 times parallel with high cpu load it failed with the 3rd attempt > (timed out) > {code:java} > --- > Test set: > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse > --- > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 503.297 s <<< > FAILURE! - in > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse > testDoubleBuffer(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse) > Time elapsed: 500.122 s <<< ERROR! 
> java.lang.Exception: test timed out after 50 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382) > at > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:385) > at > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:129) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} > Independent of the flakiness, I think a test whose timeout is 8 minutes > and that starts 1000 threads to insert 500 buckets each (500,000 buckets altogether) > is more of an integration test, and it would be better to move the slowest > part to the integration-test project. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-2580) Sonar: Close resources in xxxKeyHandler
Dinesh Chitlangia created HDDS-2580: --- Summary: Sonar: Close resources in xxxKeyHandler Key: HDDS-2580 URL: https://issues.apache.org/jira/browse/HDDS-2580 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Dinesh Chitlangia Use try-with-resources or close this "FileOutputStream" in a "finally" clause. GetKeyHandler: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6HHKTfdBVcJdcVFsvC=AW6HHKTfdBVcJdcVFsvC] Use try-with-resources or close this "OzoneOutputStream" in a "finally" clause. PutKeyHandler: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6HHKRodBVcJdcVFsvB=AW6HHKRodBVcJdcVFsvB] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2578) Handle InterruptedException in Freon package
[ https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2578: Description: BaseFreonGenerator: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D] RandomKeyGenerator: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f] ProgressBar: 3 instances listed below [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p] was: BaseFreonGenerator: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D] ProgressBar: 3 instances listed below [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p] > Handle InterruptedException in Freon package > > > Key: HDDS-2578 > URL: https://issues.apache.org/jira/browse/HDDS-2578 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > BaseFreonGenerator: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D] > > RandomKeyGenerator: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f] > > ProgressBar: 3 instances listed below > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o] > > 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2504) Handle InterruptedException properly
[ https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2504: Description: {quote}Either re-interrupt or rethrow the {{InterruptedException}} {quote} in several files (42 issues) [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG] was: {quote}Either re-interrupt or rethrow the {{InterruptedException}} {quote} in several files (42 issues) [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG] Feel free to create sub-tasks if needed. > Handle InterruptedException properly > > > Key: HDDS-2504 > URL: https://issues.apache.org/jira/browse/HDDS-2504 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Attila Doroszlai >Priority: Major > Labels: newbie, sonar > > {quote}Either re-interrupt or rethrow the {{InterruptedException}} > {quote} > in several files (42 issues) > [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
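The Sonar rule cited here (squid:S2142) accepts two remedies: rethrow the InterruptedException, or restore the interrupt flag before continuing, because catching it silently clears the thread's interrupted status and hides the cancellation from callers. A minimal sketch of the restore pattern:

```java
public class InterruptHandling {
    // Bad: swallowing the exception clears the interrupt status,
    // so code further up the stack never learns the thread was
    // asked to stop.
    static void sleepSwallow(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }

    // Good: re-set the flag so the caller can observe and react.
    static void sleepReinterrupt(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> {
            sleepReinterrupt(60_000);
            // The flag survived the catch block:
            System.out.println(Thread.currentThread().isInterrupted()); // true
        });
        t.start();
        t.interrupt();
        t.join();
    }
}
```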
[jira] [Updated] (HDDS-2578) Handle InterruptedException in Freon package
[ https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2578: Description: BaseFreonGenerator: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D] ProgressBar: 3 instances listed below [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p] was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l > Handle InterruptedException in Freon package > > > Key: HDDS-2578 > URL: https://issues.apache.org/jira/browse/HDDS-2578 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > BaseFreonGenerator: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D] > > ProgressBar: 3 instances listed below > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-2578) Handle InterruptedException in Freon package
Dinesh Chitlangia created HDDS-2578: --- Summary: Handle InterruptedException in Freon package Key: HDDS-2578 URL: https://issues.apache.org/jira/browse/HDDS-2578 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-2577) Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB
Dinesh Chitlangia created HDDS-2577: --- Summary: Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB Key: HDDS-2577 URL: https://issues.apache.org/jira/browse/HDDS-2577 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia OzoneManagerDoubleBuffer: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu] OzoneManagerRatisClient: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2577) Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB
[ https://issues.apache.org/jira/browse/HDDS-2577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2577: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l (was: OzoneManagerDoubleBuffer: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu] OzoneManagerRatisClient: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf] ) > Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB > - > > Key: HDDS-2577 > URL: https://issues.apache.org/jira/browse/HDDS-2577 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l
[jira] [Updated] (HDDS-2576) Handle InterruptedException in ratis related files
[ https://issues.apache.org/jira/browse/HDDS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2576: Description: OzoneManagerDoubleBuffer: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu] OzoneManagerRatisClient: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf] was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH > Handle InterruptedException in ratis related files > -- > > Key: HDDS-2576 > URL: https://issues.apache.org/jira/browse/HDDS-2576 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > OzoneManagerDoubleBuffer: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu] > OzoneManagerRatisClient: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf] >
[jira] [Created] (HDDS-2576) Handle InterruptedException in ratis related files
Dinesh Chitlangia created HDDS-2576: --- Summary: Handle InterruptedException in ratis related files Key: HDDS-2576 URL: https://issues.apache.org/jira/browse/HDDS-2576 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH
[jira] [Updated] (HDDS-2575) Handle InterruptedException in LogSubcommand
[ https://issues.apache.org/jira/browse/HDDS-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2575: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH (was: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67] ) > Handle InterruptedException in LogSubcommand > > > Key: HDDS-2575 > URL: https://issues.apache.org/jira/browse/HDDS-2575 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH
[jira] [Created] (HDDS-2575) Handle InterruptedException in LogSubcommand
Dinesh Chitlangia created HDDS-2575: --- Summary: Handle InterruptedException in LogSubcommand Key: HDDS-2575 URL: https://issues.apache.org/jira/browse/HDDS-2575 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67]
[jira] [Updated] (HDDS-2574) Handle InterruptedException in OzoneDelegationTokenSecretManager
[ https://issues.apache.org/jira/browse/HDDS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2574: Description: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67] was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc > Handle InterruptedException in OzoneDelegationTokenSecretManager > > > Key: HDDS-2574 > URL: https://issues.apache.org/jira/browse/HDDS-2574 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67] >
[jira] [Created] (HDDS-2574) Handle InterruptedException in OzoneDelegationTokenSecretManager
Dinesh Chitlangia created HDDS-2574: --- Summary: Handle InterruptedException in OzoneDelegationTokenSecretManager Key: HDDS-2574 URL: https://issues.apache.org/jira/browse/HDDS-2574 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc
[jira] [Updated] (HDDS-2573) Handle InterruptedException in KeyOutputStream
[ https://issues.apache.org/jira/browse/HDDS-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2573: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc (was: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi] ) > Handle InterruptedException in KeyOutputStream > -- > > Key: HDDS-2573 > URL: https://issues.apache.org/jira/browse/HDDS-2573 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc
[jira] [Created] (HDDS-2573) Handle InterruptedException in KeyOutputStream
Dinesh Chitlangia created HDDS-2573: --- Summary: Handle InterruptedException in KeyOutputStream Key: HDDS-2573 URL: https://issues.apache.org/jira/browse/HDDS-2573 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi]
[jira] [Updated] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer
[ https://issues.apache.org/jira/browse/HDDS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2572: Description: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi] was: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh] > Handle InterruptedException in SCMSecurityProtocolServer > > > Key: HDDS-2572 > URL: https://issues.apache.org/jira/browse/HDDS-2572 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi] > >
[jira] [Created] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer
Dinesh Chitlangia created HDDS-2572: --- Summary: Handle InterruptedException in SCMSecurityProtocolServer Key: HDDS-2572 URL: https://issues.apache.org/jira/browse/HDDS-2572 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh]
[jira] [Created] (HDDS-2571) Handle InterruptedException in SCMPipelineManager
Dinesh Chitlangia created HDDS-2571: --- Summary: Handle InterruptedException in SCMPipelineManager Key: HDDS-2571 URL: https://issues.apache.org/jira/browse/HDDS-2571 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ
[jira] [Updated] (HDDS-2571) Handle InterruptedException in SCMPipelineManager
[ https://issues.apache.org/jira/browse/HDDS-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2571: Description: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh] was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ > Handle InterruptedException in SCMPipelineManager > - > > Key: HDDS-2571 > URL: https://issues.apache.org/jira/browse/HDDS-2571 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh] >
[jira] [Updated] (HDDS-2569) Handle InterruptedException in LogStreamServlet
[ https://issues.apache.org/jira/browse/HDDS-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2569: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf (was: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh ) > Handle InterruptedException in LogStreamServlet > --- > > Key: HDDS-2569 > URL: https://issues.apache.org/jira/browse/HDDS-2569 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf
[jira] [Created] (HDDS-2570) Handle InterruptedException in ProfileServlet
Dinesh Chitlangia created HDDS-2570: --- Summary: Handle InterruptedException in ProfileServlet Key: HDDS-2570 URL: https://issues.apache.org/jira/browse/HDDS-2570 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf
[jira] [Updated] (HDDS-2570) Handle InterruptedException in ProfileServlet
[ https://issues.apache.org/jira/browse/HDDS-2570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2570: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ (was: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf) > Handle InterruptedException in ProfileServlet > - > > Key: HDDS-2570 > URL: https://issues.apache.org/jira/browse/HDDS-2570 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ
[jira] [Created] (HDDS-2568) Handle InterruptedException in OzoneContainer
Dinesh Chitlangia created HDDS-2568: --- Summary: Handle InterruptedException in OzoneContainer Key: HDDS-2568 URL: https://issues.apache.org/jira/browse/HDDS-2568 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix 2 instances: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk]
[jira] [Created] (HDDS-2569) Handle InterruptedException in LogStreamServlet
Dinesh Chitlangia created HDDS-2569: --- Summary: Handle InterruptedException in LogStreamServlet Key: HDDS-2569 URL: https://issues.apache.org/jira/browse/HDDS-2569 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh
[jira] [Updated] (HDDS-2568) Handle InterruptedException in OzoneContainer
[ https://issues.apache.org/jira/browse/HDDS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2568: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh was: Fix 2 instances: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk] > Handle InterruptedException in OzoneContainer > - > > Key: HDDS-2568 > URL: https://issues.apache.org/jira/browse/HDDS-2568 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh >
[jira] [Updated] (HDDS-2567) Handle InterruptedException in ContainerMetadataScanner
[ https://issues.apache.org/jira/browse/HDDS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2567: Description: Fix 2 instances: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk] was: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb] > Handle InterruptedException in ContainerMetadataScanner > --- > > Key: HDDS-2567 > URL: https://issues.apache.org/jira/browse/HDDS-2567 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk] > > > >
[jira] [Updated] (HDDS-2566) Handle InterruptedException in ContainerDataScanner
[ https://issues.apache.org/jira/browse/HDDS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2566: Description: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ] was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9 > Handle InterruptedException in ContainerDataScanner > --- > > Key: HDDS-2566 > URL: https://issues.apache.org/jira/browse/HDDS-2566 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ] > >
[jira] [Created] (HDDS-2567) Handle InterruptedException in ContainerMetadataScanner
Dinesh Chitlangia created HDDS-2567: --- Summary: Handle InterruptedException in ContainerMetadataScanner Key: HDDS-2567 URL: https://issues.apache.org/jira/browse/HDDS-2567 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb]
[jira] [Updated] (HDDS-2566) Handle InterruptedException in ContainerDataScanner
[ https://issues.apache.org/jira/browse/HDDS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2566: Description: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb] was: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ] > Handle InterruptedException in ContainerDataScanner > --- > > Key: HDDS-2566 > URL: https://issues.apache.org/jira/browse/HDDS-2566 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb] > > >
[jira] [Created] (HDDS-2566) Handle InterruptedException in ContainerDataScanner
Dinesh Chitlangia created HDDS-2566: --- Summary: Handle InterruptedException in ContainerDataScanner Key: HDDS-2566 URL: https://issues.apache.org/jira/browse/HDDS-2566 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9
[jira] [Updated] (HDDS-2564) Handle InterruptedException in ContainerStateMachine
[ https://issues.apache.org/jira/browse/HDDS-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2564: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-65KcVY8lQ4ZsRV=AW5md-65KcVY8lQ4ZsRV was: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC] > Handle InterruptedException in ContainerStateMachine > > > Key: HDDS-2564 > URL: https://issues.apache.org/jira/browse/HDDS-2564 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-65KcVY8lQ4ZsRV=AW5md-65KcVY8lQ4ZsRV > >
[jira] [Created] (HDDS-2565) Handle InterruptedException in VolumeSet
Dinesh Chitlangia created HDDS-2565: --- Summary: Handle InterruptedException in VolumeSet Key: HDDS-2565 URL: https://issues.apache.org/jira/browse/HDDS-2565 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]
[jira] [Updated] (HDDS-2565) Handle InterruptedException in VolumeSet
[ https://issues.apache.org/jira/browse/HDDS-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2565: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9 (was: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC] ) > Handle InterruptedException in VolumeSet > > > Key: HDDS-2565 > URL: https://issues.apache.org/jira/browse/HDDS-2565 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9
[jira] [Created] (HDDS-2564) Handle InterruptedException in ContainerStateMachine
Dinesh Chitlangia created HDDS-2564: --- Summary: Handle InterruptedException in ContainerStateMachine Key: HDDS-2564 URL: https://issues.apache.org/jira/browse/HDDS-2564 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]
[jira] [Updated] (HDDS-2563) Handle InterruptedException in RunningDatanodeState
[ https://issues.apache.org/jira/browse/HDDS-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2563: Description: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC] was: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx] > Handle InterruptedException in RunningDatanodeState > --- > > Key: HDDS-2563 > URL: https://issues.apache.org/jira/browse/HDDS-2563 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC] >
[jira] [Created] (HDDS-2563) Handle InterruptedException in RunningDatanodeState
Dinesh Chitlangia created HDDS-2563: --- Summary: Handle InterruptedException in RunningDatanodeState Key: HDDS-2563 URL: https://issues.apache.org/jira/browse/HDDS-2563 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx]
[jira] [Updated] (HDDS-2562) Handle InterruptedException in DatanodeStateMachine
[ https://issues.apache.org/jira/browse/HDDS-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2562: Description: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx] was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj > Handle InterruptedException in DatanodeStateMachine > --- > > Key: HDDS-2562 > URL: https://issues.apache.org/jira/browse/HDDS-2562 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx] >
[jira] [Created] (HDDS-2562) Handle InterruptedException in DatanodeStateMachine
Dinesh Chitlangia created HDDS-2562: --- Summary: Handle InterruptedException in DatanodeStateMachine Key: HDDS-2562 URL: https://issues.apache.org/jira/browse/HDDS-2562 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj
[jira] [Updated] (HDDS-2561) Handle InterruptedException in LeaseManager
[ https://issues.apache.org/jira/browse/HDDS-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2561: Description: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj (was: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH] ) > Handle InterruptedException in LeaseManager > --- > > Key: HDDS-2561 > URL: https://issues.apache.org/jira/browse/HDDS-2561 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj
[jira] [Created] (HDDS-2561) Handle InterruptedException in LeaseManager
Dinesh Chitlangia created HDDS-2561: --- Summary: Handle InterruptedException in LeaseManager Key: HDDS-2561 URL: https://issues.apache.org/jira/browse/HDDS-2561 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH]
[jira] [Created] (HDDS-2560) Handle InterruptedException in Scheduler
Dinesh Chitlangia created HDDS-2560: --- Summary: Handle InterruptedException in Scheduler Key: HDDS-2560 URL: https://issues.apache.org/jira/browse/HDDS-2560 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV]
[jira] [Updated] (HDDS-2560) Handle InterruptedException in Scheduler
[ https://issues.apache.org/jira/browse/HDDS-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2560: Description: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH] was: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV] > Handle InterruptedException in Scheduler > > > Key: HDDS-2560 > URL: https://issues.apache.org/jira/browse/HDDS-2560 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH] >
[jira] [Created] (HDDS-2559) Handle InterruptedException in BackgroundService
Dinesh Chitlangia created HDDS-2559: --- Summary: Handle InterruptedException in BackgroundService Key: HDDS-2559 URL: https://issues.apache.org/jira/browse/HDDS-2559 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX]
[jira] [Updated] (HDDS-2559) Handle InterruptedException in BackgroundService
[ https://issues.apache.org/jira/browse/HDDS-2559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2559: Description: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV] was: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX] > Handle InterruptedException in BackgroundService > > > Key: HDDS-2559 > URL: https://issues.apache.org/jira/browse/HDDS-2559 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV] >
[jira] [Updated] (HDDS-2558) Handle InterruptedException in XceiverClientSpi
[ https://issues.apache.org/jira/browse/HDDS-2558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2558: Description: Fix 2 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX] was: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw] > Handle InterruptedException in XceiverClientSpi > --- > > Key: HDDS-2558 > URL: https://issues.apache.org/jira/browse/HDDS-2558 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix 2 instances: > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX] >
[jira] [Created] (HDDS-2558) Handle InterruptedException in XceiverClientSpi
Dinesh Chitlangia created HDDS-2558: --- Summary: Handle InterruptedException in XceiverClientSpi Key: HDDS-2558 URL: https://issues.apache.org/jira/browse/HDDS-2558 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw]
[jira] [Updated] (HDDS-2557) Handle InterruptedException in CommitWatcher
[ https://issues.apache.org/jira/browse/HDDS-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2557: Description: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw] was: Fix these 5 instances [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl] > Handle InterruptedException in CommitWatcher > > > Key: HDDS-2557 > URL: https://issues.apache.org/jira/browse/HDDS-2557 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw] >
[jira] [Created] (HDDS-2557) Handle InterruptedException in CommitWatcher
Dinesh Chitlangia created HDDS-2557: --- Summary: Handle InterruptedException in CommitWatcher Key: HDDS-2557 URL: https://issues.apache.org/jira/browse/HDDS-2557 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix these 5 instances [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]
[jira] [Updated] (HDDS-2556) Handle InterruptedException in BlockOutputStream
[ https://issues.apache.org/jira/browse/HDDS-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2556: Description: Fix these 5 instances [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl] was: Fix these 3 instances [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV5=AW5md_AGKcVY8lQ4ZsV5] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV6=AW5md_AGKcVY8lQ4ZsV6] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9] > Handle InterruptedException in BlockOutputStream > > > Key: HDDS-2556 > URL: https://issues.apache.org/jira/browse/HDDS-2556 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Priority: Major > Labels: newbie, sonar > > Fix these 5 instances > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl] >
[jira] [Created] (HDDS-2556) Handle InterruptedException in BlockOutputStream
Dinesh Chitlangia created HDDS-2556: --- Summary: Handle InterruptedException in BlockOutputStream Key: HDDS-2556 URL: https://issues.apache.org/jira/browse/HDDS-2556 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix these 3 instances [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV5=AW5md_AGKcVY8lQ4ZsV5] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV6=AW5md_AGKcVY8lQ4ZsV6] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]
[jira] [Created] (HDDS-2555) Handle InterruptedException in XceiverClientGrpc
Dinesh Chitlangia created HDDS-2555: --- Summary: Handle InterruptedException in XceiverClientGrpc Key: HDDS-2555 URL: https://issues.apache.org/jira/browse/HDDS-2555 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Dinesh Chitlangia Fix these 3 instances [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV5=AW5md_AGKcVY8lQ4ZsV5] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV6=AW5md_AGKcVY8lQ4ZsV6] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]
[jira] [Updated] (HDDS-2504) Handle InterruptedException properly
[ https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2504: Labels: newbie sonar (was: sonar) > Handle InterruptedException properly > > > Key: HDDS-2504 > URL: https://issues.apache.org/jira/browse/HDDS-2504 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Attila Doroszlai >Priority: Major > Labels: newbie, sonar > > {quote}Either re-interrupt or rethrow the {{InterruptedException}} > {quote} > in several files (42 issues) > [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG] > Feel free to create sub-tasks if needed.
[jira] [Updated] (HDDS-2504) Handle InterruptedException properly
[ https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2504: Description: {quote}Either re-interrupt or rethrow the {{InterruptedException}} {quote} in several files (42 issues) [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG] Feel free to create sub-tasks if needed. was: bq. Either re-interrupt or rethrow the {{InterruptedException}} in several files (39 issues) https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG Feel free to create sub-tasks if needed. > Handle InterruptedException properly > > > Key: HDDS-2504 > URL: https://issues.apache.org/jira/browse/HDDS-2504 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Attila Doroszlai >Priority: Major > Labels: sonar > > {quote}Either re-interrupt or rethrow the {{InterruptedException}} > {quote} > in several files (42 issues) > [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG] > Feel free to create sub-tasks if needed.
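The rule behind HDDS-2504 and its sub-tasks (Sonar S2142) is that catching InterruptedException and doing nothing silently discards the thread's interrupt status, so code further up the stack can no longer see that cancellation was requested. A minimal sketch of the broken and the fixed patterns, with illustrative class and method names rather than actual Ozone code:

```java
public class InterruptHandling {

    // Bad: the interrupt status is swallowed; callers never learn the
    // thread was interrupted.
    static boolean sleepQuietlyBad(long millis) {
        try {
            Thread.sleep(millis);
            return true;
        } catch (InterruptedException e) {
            return false; // interrupt flag is lost here
        }
    }

    // Good: restore the interrupt status so callers can still react.
    static boolean sleepQuietlyGood(long millis) {
        try {
            Thread.sleep(millis);
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // re-assert the flag
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> {
            boolean finished = sleepQuietlyGood(60_000);
            // The re-asserted flag is still visible after the catch block.
            System.out.println("finished=" + finished
                + " interrupted=" + Thread.currentThread().isInterrupted());
        });
        t.start();
        t.interrupt();
        t.join(); // prints finished=false interrupted=true
    }
}
```

Rethrowing the exception (declaring `throws InterruptedException`) is equally acceptable; the one thing S2142 forbids is catching it and continuing as if nothing happened.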
[jira] [Created] (HDDS-2554) Sonar: Null pointers should not be dereferenced
Dinesh Chitlangia created HDDS-2554: --- Summary: Sonar: Null pointers should not be dereferenced Key: HDDS-2554 URL: https://issues.apache.org/jira/browse/HDDS-2554 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Dinesh Chitlangia [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuP1m2E_7tGaNiTf=AW6BMuP1m2E_7tGaNiTf]
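The fix for "null pointers should not be dereferenced" findings is usually a guard before the dereference, the same pattern as the fileSys/cluster null checks in HDFS-17443 above. A tiny illustrative sketch (not the flagged Ozone code):

```java
public class NullGuard {

    // Guard before dereferencing instead of assuming the argument is
    // non-null; returning a defined default avoids the NPE entirely.
    static int safeLength(String s) {
        return s == null ? 0 : s.length();
    }
}
```

For resources, the analogous guard is `if (res != null) res.close();` in a finally block, or try-with-resources where possible.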
[jira] [Created] (HDDS-2553) Sonar: Iterator.next() methods should throw NoSuchElementException
Dinesh Chitlangia created HDDS-2553: --- Summary: Sonar: Iterator.next() methods should throw NoSuchElementException Key: HDDS-2553 URL: https://issues.apache.org/jira/browse/HDDS-2553 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Dinesh Chitlangia [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMujFm2E_7tGaNiTl=AW6BMujFm2E_7tGaNiTl]
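This rule enforces the `java.util.Iterator` contract: `next()` must throw NoSuchElementException once the iterator is exhausted, rather than returning null or a stale value. A sketch with a hypothetical iterator (not the flagged Ozone class):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Counts down from a starting value; illustrates the required contract.
public class CountdownIterator implements Iterator<Integer> {
    private int remaining;

    public CountdownIterator(int start) {
        this.remaining = start;
    }

    @Override
    public boolean hasNext() {
        return remaining > 0;
    }

    @Override
    public Integer next() {
        if (!hasNext()) {
            // Required by the java.util.Iterator contract.
            throw new NoSuchElementException("countdown exhausted");
        }
        return remaining--;
    }
}
```

Callers that obey the usual `while (it.hasNext()) it.next();` idiom never see the exception; it only fires on misuse, which is exactly the point.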
[jira] [Created] (HDDS-2552) Sonar: Save and reuse Random object
Dinesh Chitlangia created HDDS-2552: --- Summary: Sonar: Save and reuse Random object Key: HDDS-2552 URL: https://issues.apache.org/jira/browse/HDDS-2552 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Dinesh Chitlangia Assignee: Shweta [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cLKcVY8lQ4Zr2o=AW5md-cLKcVY8lQ4Zr2o]
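The "save and reuse Random" rule flags code that constructs `new Random()` on every call: it is wasteful, and instances seeded from the clock in quick succession can produce correlated sequences. The fix is one stored instance. A sketch with illustrative names (not the flagged Ozone code):

```java
import java.util.Random;

public class RandomHolder {

    // One shared instance instead of new Random() inside every call.
    private static final Random RANDOM = new Random();

    // Example use: pick a port in the ephemeral range 49152..65535.
    static int nextPort() {
        return 49152 + RANDOM.nextInt(65536 - 49152);
    }
}
```

In multi-threaded code, `java.util.concurrent.ThreadLocalRandom.current()` is the usual alternative, since a single shared `Random` contends on its internal seed.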
[jira] [Updated] (HDDS-2551) Sonar: Non-primitive fields should not be volatile
[ https://issues.apache.org/jira/browse/HDDS-2551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2551: Labels: codehealth (was: ) > Sonar: Non-primitive fields should not be volatile > -- > > Key: HDDS-2551 > URL: https://issues.apache.org/jira/browse/HDDS-2551 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Shweta >Priority: Major > Labels: codehealth > > Fix following 8 instances: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVd=AW5md-_2KcVY8lQ4ZsVd] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2nKcVY8lQ4ZsNi=AW5md-2nKcVY8lQ4ZsNi] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2yKcVY8lQ4ZsN4=AW5md-2yKcVY8lQ4ZsN4] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-1FKcVY8lQ4ZsLn=AW5md-1FKcVY8lQ4ZsLn] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md--bKcVY8lQ4ZsU6=AW5md--bKcVY8lQ4ZsU6] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9GKcVY8lQ4ZsT0=AW5md-9GKcVY8lQ4ZsT0] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtj=AW5md-VxKcVY8lQ4Zrtj] > > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtk=AW5md-VxKcVY8lQ4Zrtk] >
[jira] [Created] (HDDS-2551) Sonar: Non-primitive fields should not be volatile
Dinesh Chitlangia created HDDS-2551: --- Summary: Sonar: Non-primitive fields should not be volatile Key: HDDS-2551 URL: https://issues.apache.org/jira/browse/HDDS-2551 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Dinesh Chitlangia Assignee: Shweta Fix following 8 instances: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVd=AW5md-_2KcVY8lQ4ZsVd] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2nKcVY8lQ4ZsNi=AW5md-2nKcVY8lQ4ZsNi] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2yKcVY8lQ4ZsN4=AW5md-2yKcVY8lQ4ZsN4] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-1FKcVY8lQ4ZsLn=AW5md-1FKcVY8lQ4ZsLn] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md--bKcVY8lQ4ZsU6=AW5md--bKcVY8lQ4ZsU6] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9GKcVY8lQ4ZsT0=AW5md-9GKcVY8lQ4ZsT0] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtj=AW5md-VxKcVY8lQ4Zrtj] [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtk=AW5md-VxKcVY8lQ4Zrtk]
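The reasoning behind this rule: `volatile` on a non-primitive field only makes the *reference* assignment visible across threads, not the state inside the referenced object, and compound read-modify-write updates remain racy either way. `AtomicReference` is the usual replacement when atomic swaps are needed. A sketch with illustrative names (not the flagged Ozone fields):

```java
import java.util.concurrent.atomic.AtomicReference;

public class ConfigHolder {

    // Instead of: private volatile String current = "initial";
    private final AtomicReference<String> current =
        new AtomicReference<>("initial");

    String get() {
        return current.get();
    }

    // compareAndSet makes the read-modify-write atomic, which a plain
    // volatile field cannot do.
    boolean replace(String expected, String next) {
        return current.compareAndSet(expected, next);
    }
}
```

Where the referenced object is itself mutable, the deeper fix is to make it immutable and swap whole instances, so readers never observe a half-updated object.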
[jira] [Resolved] (HDDS-2549) Invoke method(s) only conditionally
[ https://issues.apache.org/jira/browse/HDDS-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2549. - Fix Version/s: 0.5.0 Target Version/s: 0.5.0 Resolution: Fixed Thanks [~apurohit] for the contribution & [~aengineer] for the reviews. > Invoke method(s) only conditionally > --- > > Key: HDDS-2549 > URL: https://issues.apache.org/jira/browse/HDDS-2549 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Related to : > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AVKcVY8lQ4ZsWU=AW5md_AVKcVY8lQ4ZsWU
[jira] [Resolved] (HDDS-2545) Remove empty statement
[ https://issues.apache.org/jira/browse/HDDS-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2545. - Fix Version/s: 0.5.0 Target Version/s: 0.5.0 Resolution: Fixed > Remove empty statement > -- > > Key: HDDS-2545 > URL: https://issues.apache.org/jira/browse/HDDS-2545 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Minor > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Related to : > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsWF=AW5md_AGKcVY8lQ4ZsWF
[jira] [Updated] (HDDS-2545) Remove empty statement
[ https://issues.apache.org/jira/browse/HDDS-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2545: Summary: Remove empty statement (was: Remove this empty statement.) > Remove empty statement > -- > > Key: HDDS-2545 > URL: https://issues.apache.org/jira/browse/HDDS-2545 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Abhishek Purohit >Assignee: Abhishek Purohit >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Related to : > https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsWF=AW5md_AGKcVY8lQ4ZsWF
[jira] [Resolved] (HDDS-2550) Sonar: OzoneClient should be closed in GetAclKeyHandler
[ https://issues.apache.org/jira/browse/HDDS-2550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia resolved HDDS-2550. - Resolution: Fixed Thanks [~adoroszlai] & [~bharat] for the reviews. [~ppogde] Thank you for filing the jira and the fix. For future: 1. When filing a Jira, please leave `Fix Versions` empty 2. Please try to keep the PR title the same as the Jira title, along with the Jira# Example: Jira Title {{Sonar: OzoneClient should be closed in GetAclKeyHandler}} PR Title {{HDDS-2550. Sonar: OzoneClient should be closed in GetAclKeyHandler}} > Sonar: OzoneClient should be closed in GetAclKeyHandler > --- > > Key: HDDS-2550 > URL: https://issues.apache.org/jira/browse/HDDS-2550 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone CLI >Reporter: Prashant Pogde >Assignee: Prashant Pogde >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > This is a followup to HDDS-2490 where we need similar changes in > GetAclKeyHandler.
[jira] [Updated] (HDDS-2550) Sonar: OzoneClient should be closed in GetAclKeyHandler
[ https://issues.apache.org/jira/browse/HDDS-2550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-2550: Summary: Sonar: OzoneClient should be closed in GetAclKeyHandler (was: follow up : Ensure OzoneClient is closed in Ozone Shell handlers) > Sonar: OzoneClient should be closed in GetAclKeyHandler > --- > > Key: HDDS-2550 > URL: https://issues.apache.org/jira/browse/HDDS-2550 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone CLI >Reporter: Prashant Pogde >Assignee: Prashant Pogde >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 10m > Remaining Estimate: 0h > > This is a followup to HDDS-2490 where we need similar changes in > GetAclKeyHandler.
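The fix pattern for the HDDS-2550 class of findings is try-with-resources: any client that implements AutoCloseable is opened in the try header, so it is closed on every exit path, including exceptions. A self-contained sketch using a stand-in client, not the actual OzoneClient API:

```java
public class CloseableClientDemo {

    // Stand-in for a closeable client such as OzoneClient (hypothetical).
    static class FakeClient implements AutoCloseable {
        boolean closed = false;

        String getAcl(String key) {
            return "acl-for-" + key;
        }

        @Override
        public void close() {
            closed = true;
        }
    }

    // client.close() runs automatically, even if getAcl throws.
    static String fetchAcl(String key) {
        try (FakeClient client = new FakeClient()) {
            return client.getAcl(key);
        }
    }
}
```

Compared with a manual finally block, this also removes the need for the null check before close that bit TestNameEditsConfigs in HDFS-17443 above, since the resource variable is only in scope when construction succeeded.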