[jira] [Commented] (HDFS-17362) RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally
[ https://issues.apache.org/jira/browse/HDFS-17362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813541#comment-17813541 ]

ASF GitHub Bot commented on HDFS-17362:
----------------------------------------

simbadzina commented on code in PR #6510:
URL: https://github.com/apache/hadoop/pull/6510#discussion_r1475691426

##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RouterObserverReadProxyProvider.java:
##########

@@ -84,7 +84,8 @@ public class RouterObserverReadProxyProvider<T> extends AbstractNNFailoverProxyProvider<T> {
   public RouterObserverReadProxyProvider(Configuration conf, URI uri,
       Class<T> xface, HAProxyFactory<T> factory) {
-    this(conf, uri, xface, factory, new IPFailoverProxyProvider<>(conf, uri, xface, factory));
+    this(conf, uri, xface, factory,
+        new ConfiguredFailoverProxyProvider<>(conf, uri, xface, factory));

Review Comment:
   Actually, `IPFailoverProxyProvider` can still be set on `routerContext.getFileSystemURI().toString()` even when we use the ConfiguredFailoverProxyProvider.

> RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17362
>                 URL: https://issues.apache.org/jira/browse/HDFS-17362
>             Project: Hadoop HDFS
>          Issue Type: Task
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, RouterObserverReadProxyProvider uses IPFailoverProxyProvider, while ObserverReadProxyProvider uses ConfiguredFailoverProxyProvider. If we are to align RouterObserverReadProxyProvider with ObserverReadProxyProvider, RouterObserverReadProxyProvider should internally use ConfiguredFailoverProxyProvider. Moreover, IPFailoverProxyProvider has an issue with resolving HA configurations. (For example, IPFailoverProxyProvider cannot resolve hdfs://router-service.)
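For context, ConfiguredFailoverProxyProvider resolves routers from explicit client-side HA configuration rather than from a single DNS name. A hedged sketch of such a client setup follows; the nameservice name router-fed, the router host names, and the port are illustrative assumptions, not values from this issue:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.namenode.ha.RouterObserverReadProxyProvider;

public class RouterHaClientExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical nameservice "router-fed" with two routers listed
    // explicitly; hosts and the 8888 RPC port are illustrative only.
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.nameservices", "router-fed");
    conf.set("dfs.ha.namenodes.router-fed", "r1,r2");
    conf.set("dfs.namenode.rpc-address.router-fed.r1", "router1.example.com:8888");
    conf.set("dfs.namenode.rpc-address.router-fed.r2", "router2.example.com:8888");
    conf.set("dfs.client.failover.proxy.provider.router-fed",
        RouterObserverReadProxyProvider.class.getName());
    // The client resolves hdfs://router-fed through the configured routers.
    FileSystem fs = FileSystem.get(URI.create("hdfs://router-fed"), conf);
    System.out.println(fs.getUri());
  }
}
{code}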
[jira] [Commented] (HDFS-17362) RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally
[ https://issues.apache.org/jira/browse/HDFS-17362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813538#comment-17813538 ]

ASF GitHub Bot commented on HDFS-17362:
----------------------------------------

simbadzina commented on code in PR #6510:
URL: https://github.com/apache/hadoop/pull/6510#discussion_r1475654924

##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RouterObserverReadProxyProvider.java:
##########

@@ -84,7 +84,8 @@ public class RouterObserverReadProxyProvider<T> extends AbstractNNFailoverProxyProvider<T> {
   public RouterObserverReadProxyProvider(Configuration conf, URI uri,
       Class<T> xface, HAProxyFactory<T> factory) {
-    this(conf, uri, xface, factory, new IPFailoverProxyProvider<>(conf, uri, xface, factory));
+    this(conf, uri, xface, factory,
+        new ConfiguredFailoverProxyProvider<>(conf, uri, xface, factory));

Review Comment:
   In my original code I assumed the routers would all sit behind a single host name and traffic would be split via DNS. The IPFailoverProxyProvider is then needed in case one router is down but its IP is still in the DNS record.
   
   I can see how, in other setups, users may need to list all the routers explicitly in the client configuration. Can we make the proxy provider configurable, to support both use cases?

> RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17362
>                 URL: https://issues.apache.org/jira/browse/HDFS-17362
>             Project: Hadoop HDFS
>          Issue Type: Task
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, RouterObserverReadProxyProvider uses IPFailoverProxyProvider, while ObserverReadProxyProvider uses ConfiguredFailoverProxyProvider. If we are to align RouterObserverReadProxyProvider with ObserverReadProxyProvider, RouterObserverReadProxyProvider should internally use ConfiguredFailoverProxyProvider. Moreover, IPFailoverProxyProvider has an issue with resolving HA configurations. (For example, IPFailoverProxyProvider cannot resolve hdfs://router-service.)
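A hedged sketch of what making the internal provider configurable could look like; the config key name below is hypothetical, not an existing Hadoop key, and this is not code from the PR:

{code:java}
// Hypothetical constructor: pick the internal failover provider from
// configuration. The key "dfs.client.failover.router.internal-proxy-provider"
// and its "ip"/"configured" values are illustrative assumptions.
public RouterObserverReadProxyProvider(Configuration conf, URI uri,
    Class<T> xface, HAProxyFactory<T> factory) {
  this(conf, uri, xface, factory,
      "ip".equals(conf.get(
          "dfs.client.failover.router.internal-proxy-provider", "configured"))
          ? new IPFailoverProxyProvider<>(conf, uri, xface, factory)
          : new ConfiguredFailoverProxyProvider<>(conf, uri, xface, factory));
}
{code}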
[jira] [Commented] (HDFS-17358) EC: infinite lease recovery caused by the length of RWR being zero.
[ https://issues.apache.org/jira/browse/HDFS-17358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813515#comment-17813515 ]

ASF GitHub Bot commented on HDFS-17358:
----------------------------------------

tasanuma commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1923134680

   I haven't looked at the code in detail, but the unit test seems to be failing.

> EC: infinite lease recovery caused by the length of RWR being zero.
> --------------------------------------------------------------------
>
>                 Key: HDFS-17358
>                 URL: https://issues.apache.org/jira/browse/HDFS-17358
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>
> Recently, a strange case happened on our EC production cluster.
> The phenomenon is as described below: the NameNode performs infinite lease recovery for some EC files (~80K+), and those files can never be closed.
>
> After digging into the logs and related code, we found the root cause is the following code in the method `BlockRecoveryWorker$RecoveryTaskStriped#recover`:
> {code:java}
> // we met info.getNumBytes==0 here!
> if (info != null &&
>     info.getGenerationStamp() >= block.getGenerationStamp() &&
>     info.getNumBytes() > 0) {
>   final BlockRecord existing = syncBlocks.get(blockId);
>   if (existing == null ||
>       info.getNumBytes() > existing.rInfo.getNumBytes()) {
>     // if we have >1 replicas for the same internal block, we
>     // simply choose the one with larger length.
>     // TODO: better usage of redundant replicas
>     syncBlocks.put(blockId, new BlockRecord(id, proxyDN, info));
>   }
> }
> // throw exception here!
> checkLocations(syncBlocks.size());
> {code}
> The related logs are as below:
> {code:java}
> java.io.IOException: BP-1157541496-10.104.10.198-1702548776421:blk_-9223372036808032688_2938828 has no enough internal blocks, unable to start recovery. Locations=[...]
> {code}
> {code:java}
> 2024-01-23 12:48:16,171 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: initReplicaRecovery: blk_-9223372036808032686_2938828, recoveryId=27615365, replica=ReplicaUnderRecovery, blk_-9223372036808032686_2938828, RUR
>   getNumBytes()     = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= -1
>   getVolume()       = /data25/hadoop/hdfs/datanode
>   getBlockURI()     = file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-x.x.x.x-1702548776421/current/rbw/blk_-9223372036808032686
>   recoveryId=27529675
>   original=ReplicaWaitingToBeRecovered, blk_-9223372036808032686_2938828, RWR
>   getNumBytes()     = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= -1
>   getVolume()       = /data25/hadoop/hdfs/datanode
>   getBlockURI()     = file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-10.104.10.198-1702548776421/current/rbw/blk_-9223372036808032686
> {code}
> Because the length of the RWR replica is zero, the length of the object returned by the code below is also zero. It is therefore never put into syncBlocks, and the checkLocations method throws the exception above.
> {code:java}
> ReplicaRecoveryInfo info = callInitReplicaRecovery(proxyDN,
>     new RecoveringBlock(internalBlk, null, recoveryId));
> {code}
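For reference, the guard that fires here compares the number of collected internal-block replicas against the EC policy's data-unit count. A simplified sketch of that check (paraphrased, not the verbatim BlockRecoveryWorker code):

{code:java}
// Simplified sketch: with an RS-6-3 policy, recovery needs at least
// ecPolicy.getNumDataUnits() == 6 usable internal blocks. Zero-length RWR
// replicas are filtered out before syncBlocks is counted, so recovery can
// stay below the threshold on every retry, looping forever.
private void checkLocations(int locationCount) throws IOException {
  if (locationCount < ecPolicy.getNumDataUnits()) {
    throw new IOException(block + " has no enough internal blocks"
        + ", unable to start recovery. Locations=" + java.util.Arrays.asList(locs));
  }
}
{code}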
[jira] [Commented] (HDFS-17360) Record the number of times a block is read during a certain time period.
[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813507#comment-17813507 ]

ASF GitHub Bot commented on HDFS-17360:
----------------------------------------

hadoop-yetus commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1923066306

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 6m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 22s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 37s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 0s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 44s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 32s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 28s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 36s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 43s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 38s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 215m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. |
| | | | 308m 34s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
| | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6505 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 75b53f0e1f28 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c430cc09a1662a5eb33cbfae7e6ecb608e93d55f |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/10/testReport/ |
| Max. process+thread count | 5418 (vs. ulimit of 5500) |
[jira] [Commented] (HDFS-17342) Fix DataNode may invalidate normal block causing missing block
[ https://issues.apache.org/jira/browse/HDFS-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813504#comment-17813504 ]

ASF GitHub Bot commented on HDFS-17342:
----------------------------------------

haiyang1987 commented on code in PR #6464:
URL: https://github.com/apache/hadoop/pull/6464#discussion_r1475614254

##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##########

@@ -2011,4 +2011,95 @@ public void tesInvalidateMissingBlock() throws Exception {
       cluster.shutdown();
     }
   }
+
+  @Test
+  public void testCheckFilesWhenInvalidateMissingBlock() throws Exception {
+    long blockSize = 1024;
+    int heartbeatInterval = 1;
+    HdfsConfiguration c = new HdfsConfiguration();
+    c.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, heartbeatInterval);
+    c.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+    MiniDFSCluster cluster = new MiniDFSCluster.Builder(c).
+        numDataNodes(1).build();
+    DataNodeFaultInjector oldDnInjector = DataNodeFaultInjector.get();
+    try {
+      cluster.waitActive();
+      GenericTestUtils.LogCapturer logCapturer = GenericTestUtils.LogCapturer.
+          captureLogs(DataNode.LOG);
+      BlockReaderTestUtil util = new BlockReaderTestUtil(cluster, new
+          HdfsConfiguration(conf));
+      Path path = new Path("/testFile");
+      util.writeFile(path, 1);
+      String bpid = cluster.getNameNode().getNamesystem().getBlockPoolId();
+      DataNode dn = cluster.getDataNodes().get(0);
+      FsDatasetImpl dnFSDataset = (FsDatasetImpl) dn.getFSDataset();
+      List<ReplicaInfo> replicaInfos = dnFSDataset.getFinalizedBlocks(bpid);
+      assertEquals(1, replicaInfos.size());
+      DFSTestUtil.readFile(cluster.getFileSystem(), path);
+      LocatedBlock blk = util.getFileBlocks(path, 512).get(0);
+      ExtendedBlock block = blk.getBlock();
+
+      // Append a new block with an incremented generation stamp.
+      long newGS = block.getGenerationStamp() + 1;
+      dnFSDataset.append(block, newGS, 1024);
+      block.setGenerationStamp(newGS);
+      ReplicaInfo tmpReplicaInfo = dnFSDataset.getReplicaInfo(blk.getBlock());
+
+      DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+        @Override
+        public void delayGetMetaDataInputStream() {
+          try {
+            Thread.sleep(8000);
+          } catch (InterruptedException e) {
+            // Ignore exception.
+          }
+        }
+      };
+      // Delay to getMetaDataInputStream.
+      DataNodeFaultInjector.set(injector);
+
+      ExecutorService executorService = Executors.newFixedThreadPool(2);
+      try {
+        Future<?> blockReaderFuture = executorService.submit(() -> {
+          try {
+            // Submit tasks for reading block.
+            BlockReader blockReader = BlockReaderTestUtil.getBlockReader(
+                cluster.getFileSystem(), blk, 0, 512);
+            blockReader.close();
+          } catch (IOException e) {
+            // Ignore exception.
+          }
+        });
+
+        Future<?> finalizeBlockFuture = executorService.submit(() -> {
+          try {
+            // Submit tasks for finalizing block.
+            Thread.sleep(1000);
+            dnFSDataset.finalizeBlock(block, false);
+          } catch (Exception e) {
+            // Ignore exception
+          }
+        });
+
+        // Wait for both tasks to complete.
+        blockReaderFuture.get();
+        finalizeBlockFuture.get();
+      } finally {
+        executorService.shutdown();
+      }
+
+      // Validate the replica exists.
+      assertNotNull(dnFSDataset.getReplicaInfo(blk.getBlock()));
+
+      // Check DN log for FileNotFoundException.
+      String expectedMsg = String.format("opReadBlock %s received exception " +
+          "java.io.FileNotFoundException: %s (No such file or directory)",
+          blk.getBlock(), tmpReplicaInfo.getMetadataURI().getPath());
+      assertTrue("Expected log message not found in DN log.",
+          logCapturer.getOutput().contains(expectedMsg));

Review Comment:
   Hi @smarthanwang If I understand correctly, the UT here has reproduced the issue.

> Fix DataNode may invalidate normal block causing missing block
> ---------------------------------------------------------------
>
>                 Key: HDFS-17342
>                 URL: https://issues.apache.org/jira/browse/HDFS-17342
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.5.0
>
> When users read an appended file, occasional exceptions may occur, such as org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: xxx.
> This can happen if one thread is reading the block while the writer thread is finalizing it simultaneously.
> *Root cause:*
> # The reader thread obtains a RBW replica
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813477#comment-17813477 ]

ASF GitHub Bot commented on HDFS-17299:
----------------------------------------

hadoop-yetus commented on PR #6513:
URL: https://github.com/apache/hadoop/pull/6513#issuecomment-1922786482

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 6s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 8s | | trunk passed |
| +1 :green_heart: | compile | 3m 16s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 2m 51s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 42s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 8s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 0s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 21s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 51s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 8s | | the patch passed |
| +1 :green_heart: | compile | 3m 17s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 3m 17s | | the patch passed |
| +1 :green_heart: | compile | 2m 54s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 2m 54s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 39s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/7/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 16 new + 243 unchanged - 2 fixed = 259 total (was 245) |
| +1 :green_heart: | mvnsite | 1m 1s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 10s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 37s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 45s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 204m 31s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 27s | | The patch does not generate ASF License warnings. |
| | | | 319m 54s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
| | hadoop.hdfs.TestBlocksScheduledCounter |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
| | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.hdfs.TestDatanodeDeath |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.TestDiskError |
| | hadoop.hdfs.TestGetBlocks |
| | hadoop.hdfs.TestDFSClientExcludedNodes |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.TestReconstructStripedFileWithValidator |
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813476#comment-17813476 ]

ASF GitHub Bot commented on HDFS-17299:
----------------------------------------

hadoop-yetus commented on PR #6513:
URL: https://github.com/apache/hadoop/pull/6513#issuecomment-1922765734

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 56s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 13s | | trunk passed |
| +1 :green_heart: | compile | 2m 58s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 2m 48s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 40s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 11s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 23s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 9s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 52s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 2s | | the patch passed |
| +1 :green_heart: | compile | 2m 58s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 2m 58s | | the patch passed |
| +1 :green_heart: | compile | 2m 47s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 2m 47s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 42s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/6/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 17 new + 242 unchanged - 3 fixed = 259 total (was 245) |
| +1 :green_heart: | mvnsite | 1m 9s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 45s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 34s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 45s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 194m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 308m 14s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
| | hadoop.hdfs.TestBlocksScheduledCounter |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
| | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.hdfs.TestDatanodeDeath |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.TestDiskError |
| | hadoop.hdfs.TestDFSClientExcludedNodes |
| | hadoop.hdfs.TestDFSStripedInputStream |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
[jira] [Commented] (HDFS-17146) Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
[ https://issues.apache.org/jira/browse/HDFS-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813472#comment-17813472 ]

ASF GitHub Bot commented on HDFS-17146:
----------------------------------------

zhtttylz commented on PR #6504:
URL: https://github.com/apache/hadoop/pull/6504#issuecomment-1922732504

   Hi @slfan1989, @virajjasani, @huangzhaobo99, could you please take a look at this PR when you're available? Much appreciated!

> Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17146
>                 URL: https://issues.apache.org/jira/browse/HDFS-17146
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>
> It would be highly advantageous if the *DFSAdmin* command had the ability to perform bulk operations across all decommissioned datanodes.
[jira] [Updated] (HDFS-17366) NameNode Fine-Grained Locking via Namespace Tree
[ https://issues.apache.org/jira/browse/HDFS-17366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZanderXu updated HDFS-17366:
----------------------------
    Description: 
As we all know, the write performance of the NameNode is limited by the global lock. We aim to enable fine-grained locking based on the namespace tree to improve the performance of NameNode write operations. A rough sketch of the idea follows after this description.

There are multiple motivations for creating this ticket:
 * We have implemented this fine-grained locking and gained nearly 7x performance improvements in our prod environment
 * Other companies have made similar improvements based on their internal branches. Internal branches are quite different from the community branch, so there has been little feedback and discussion in the community.
 * The topic of fine-grained locking has been discussed for a very long time, but still without any results.

We implemented this fine-grained locking based on the namespace tree to maximize concurrency for disjoint or independent operations.

  was:
As we all known, the write performance of NameNode is limited by the global lock. We target to enable fine-grained locking based on the Namespace tree to improve the performance of NameNode write operations.
There are multiple motivations for creating this ticket:
 * We have implemented this fine-grained locking and gained nearly 7x performance improvements in our prod environment
 * Other companies made similar improvements based on their internal branch. Internal branches are quite different from the community, so few feedback and discussions in the community.
 * The topic off fine-grained locking has been discussed for a very long time without progress.
We implemented this fine-gained locking based on the namespace tree to maximize the number of concurrency for disjoint or independent operations.

> NameNode Fine-Grained Locking via Namespace Tree
> ------------------------------------------------
>
>                 Key: HDFS-17366
>                 URL: https://issues.apache.org/jira/browse/HDFS-17366
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs, namenode
>            Reporter: ZanderXu
>            Priority: Major
>
> As we all know, the write performance of the NameNode is limited by the global lock. We aim to enable fine-grained locking based on the namespace tree to improve the performance of NameNode write operations.
> There are multiple motivations for creating this ticket:
> * We have implemented this fine-grained locking and gained nearly 7x performance improvements in our prod environment
> * Other companies have made similar improvements based on their internal branches. Internal branches are quite different from the community branch, so there has been little feedback and discussion in the community.
> * The topic of fine-grained locking has been discussed for a very long time, but still without any results.
>
> We implemented this fine-grained locking based on the namespace tree to maximize concurrency for disjoint or independent operations.
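As a rough illustration of the namespace-tree idea (a hedged sketch under assumed lock fields, not the actual HDFS-17366 design): take read locks on the ancestor directories along the resolved path and a write lock only on the final component, so writes in disjoint subtrees no longer serialize on a single global lock.

{code:java}
// Hedged sketch of per-path locking; the per-INode ReentrantReadWriteLock
// fields and the LockedPath helper are illustrative, not NameNode code.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class LockedPath implements AutoCloseable {
  private final List<ReentrantReadWriteLock> readLocked = new ArrayList<>();
  private final ReentrantReadWriteLock target;

  // Acquire in root-to-leaf order to avoid deadlocks between paths.
  LockedPath(List<ReentrantReadWriteLock> ancestors, ReentrantReadWriteLock target) {
    for (ReentrantReadWriteLock lock : ancestors) {
      lock.readLock().lock();
      readLocked.add(lock);
    }
    this.target = target;
    target.writeLock().lock();
  }

  @Override
  public void close() {
    // Release in reverse (leaf-to-root) order.
    target.writeLock().unlock();
    for (int i = readLocked.size() - 1; i >= 0; i--) {
      readLocked.get(i).readLock().unlock();
    }
  }
}
{code}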
[jira] [Commented] (HDFS-17360) Record the number of times a block is read during a certain time period.
[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813451#comment-17813451 ]

ASF GitHub Bot commented on HDFS-17360:
----------------------------------------

huangzhaobo99 commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1922622272

   @slfan1989 The effect is quite promising. Below are the high-concurrency blocks that were retrieved:
   ![c0b394c3-3d5a-499b-99bf-794a33a2f71a](https://github.com/apache/hadoop/assets/63718681/43cdbe15-022f-4cf3-b7cb-b4d720766eb0)

> Record the number of times a block is read during a certain time period.
> -------------------------------------------------------------------------
>
>                 Key: HDFS-17360
>                 URL: https://issues.apache.org/jira/browse/HDFS-17360
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: huangzhaobo
>            Assignee: huangzhaobo
>            Priority: Major
>              Labels: pull-request-available
>
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813447#comment-17813447 ]

ASF GitHub Bot commented on HDFS-17299:
----------------------------------------

hadoop-yetus commented on PR #6513:
URL: https://github.com/apache/hadoop/pull/6513#issuecomment-1922613659

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 18s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 35m 1s | | trunk passed |
| +1 :green_heart: | compile | 6m 1s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 5m 52s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 28s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 18s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 52s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 17s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 5m 54s | | trunk passed |
| +1 :green_heart: | shadedclient | 40m 5s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 0s | | the patch passed |
| +1 :green_heart: | compile | 5m 58s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 5m 58s | | the patch passed |
| +1 :green_heart: | compile | 5m 35s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 5m 35s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 19s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 22 new + 244 unchanged - 0 fixed = 266 total (was 244) |
| +1 :green_heart: | mvnsite | 2m 3s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 5s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 6m 2s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 41s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 26s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 261m 0s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. |
| | | | 448m 54s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6513 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux c63a298f7712 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 99ab2b805acc9ced210a29a59d07ac5e33d1e46d |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions |
[jira] [Commented] (HDFS-17360) Record the number of times a block is read during a certain time period.
[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813434#comment-17813434 ]

ASF GitHub Bot commented on HDFS-17360:
----------------------------------------

huangzhaobo99 commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1922559177

   > > @huangzhaobo99 From my personal perspective, this pr is valuable. However, I don't think it's a good idea to write this information into JMX because JMX is meant for collecting statistical information. If we include detailed information, it might cause some complications.
   >
   > @slfan1989 The idea mainly comes from the DatanodeNetworkCounts metric, which is more complex and does not even perform cleaning operations on the key. The ReadBlockIdCounts metric should not be a problem, and it also adds a switch.
   
   Hi @slfan1989, this metric should be okay. Could you please file a ticket for this? I hope to attract more people's attention; perhaps there is a better way.

> Record the number of times a block is read during a certain time period.
> -------------------------------------------------------------------------
>
>                 Key: HDFS-17360
>                 URL: https://issues.apache.org/jira/browse/HDFS-17360
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: huangzhaobo
>            Assignee: huangzhaobo
>            Priority: Major
>              Labels: pull-request-available
>
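For reference, a minimal sketch of the kind of per-block read counter being discussed, with the config switch and the key cleanup mentioned above; the class and method names here are hypothetical, not the PR's actual code:

{code:java}
// Hypothetical per-block read counter with a windowed snapshot-and-reset;
// not the actual HDFS-17360 implementation.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ReadBlockIdCounter {
  private final ConcurrentHashMap<Long, LongAdder> counts = new ConcurrentHashMap<>();
  private final boolean enabled; // the "switch" mentioned in the discussion

  public ReadBlockIdCounter(boolean enabled) {
    this.enabled = enabled;
  }

  /** Bump the counter from the block read path. */
  public void incrementRead(long blockId) {
    if (enabled) {
      counts.computeIfAbsent(blockId, k -> new LongAdder()).increment();
    }
  }

  /**
   * Snapshot for JMX and reset, so the counts only cover the window since
   * the last call; stale keys are dropped, unlike DatanodeNetworkCounts.
   */
  public Map<Long, Long> snapshotAndReset() {
    Map<Long, Long> snapshot = new HashMap<>();
    counts.forEach((id, adder) -> snapshot.put(id, adder.sumThenReset()));
    counts.keySet().removeIf(id -> snapshot.getOrDefault(id, 0L) == 0L);
    return snapshot;
  }
}
{code}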
[jira] [Commented] (HDFS-17360) Record the number of times a block is read during a certain time period.
[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813402#comment-17813402 ]

ASF GitHub Bot commented on HDFS-17360:
----------------------------------------

hadoop-yetus commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-191614

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 46m 15s | | trunk passed |
| +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 15s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 21s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 19s | | trunk passed |
| +1 :green_heart: | shadedclient | 40m 20s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 9s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 7s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 3s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 13s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 28s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 33s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 264m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 42s | | The patch does not generate ASF License warnings. |
| | | | 417m 28s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6505 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 1547436aff41 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b8f10ae14aa8e5f5a9ad9966df6d2c46b7e65ff4 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/9/testReport/ |
| Max. process+thread count | 2691 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
[jira] [Commented] (HDFS-17360) Record the number of times a block is read during a certain time period.
[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813383#comment-17813383 ]

ASF GitHub Bot commented on HDFS-17360:
----------------------------------------

hadoop-yetus commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1921987044

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 23s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| -1 :x: | mvninstall | 36m 48s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/8/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 40s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 2s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 59s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 42s | | the patch passed |
| +1 :green_heart: | compile | 0m 44s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 44s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 32s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 40s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 56s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 53s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 197m 20s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. |
| | | | 296m 57s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6505 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 2ad1e56fd348 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 3eca3d868ac0777c36d42e258c6da6e823f6cbb8 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/8/testReport/ |
| Max. process+thread count | 4362 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/8/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
[jira] [Commented] (HDFS-17146) Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
[ https://issues.apache.org/jira/browse/HDFS-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813380#comment-17813380 ]

ASF GitHub Bot commented on HDFS-17146:
----------------------------------------

hadoop-yetus commented on PR #6504:
URL: https://github.com/apache/hadoop/pull/6504#issuecomment-1921969339

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 41m 9s | | trunk passed |
| +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 12s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 9s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 21s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 15s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 9s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 7s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 56s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 11s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 31s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 11s | | the patch passed |
| +1 :green_heart: | shadedclient | 38m 9s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 220m 3s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
| | | | 358m 2s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6504 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux e4fb71e17dff 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 67f9b94cbf3ae67ca48c087f5f51618bef297b2d |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/5/testReport/ |
| Max. process+thread count | 3581 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/5/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-17368) HA: Standby should exit safemode when resources recover from low availability
[ https://issues.apache.org/jira/browse/HDFS-17368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813377#comment-17813377 ]

ASF GitHub Bot commented on HDFS-17368:
----------------------------------------

hadoop-yetus commented on PR #6518:
URL: https://github.com/apache/hadoop/pull/6518#issuecomment-1921949946

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 55s | | trunk passed |
| +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 38s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 46s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 34s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 39s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 29s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 35s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 43s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 28s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 199m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6518/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 286m 13s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6518/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6518 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux a7b554722c26 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 13bccda7c70e086d64ce3fc3dab56325c0cbdefa |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6518/1/testReport/ |
| Max. process+thread count | 4188 (vs. ulimit of 5500) |
| modules | C:
[jira] [Commented] (HDFS-17365) EC: Add extra redundancy configuration in checkStreamerFailures to prevent data loss.
[ https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813348#comment-17813348 ]

ASF GitHub Bot commented on HDFS-17365:
----------------------------------------

hadoop-yetus commented on PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#issuecomment-1921831336

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 41s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 19m 18s | | trunk passed |
| +1 :green_heart: | compile | 2m 52s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 2m 44s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 44s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 16s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 5s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 14s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 3s | | the patch passed |
| +1 :green_heart: | compile | 2m 49s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 2m 49s | | the patch passed |
| +1 :green_heart: | compile | 2m 45s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 2m 45s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/3/artifact/out/blanks-eol.txt) | The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | checkstyle | 0m 33s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 0s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 59s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 21s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 53s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 200m 51s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. |
| | | | 305m 34s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6517 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 2cb7ceaf1cb2 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git
[jira] [Commented] (HDFS-17359) EC: recheck failed streamers only after flushing all packets.
[ https://issues.apache.org/jira/browse/HDFS-17359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813306#comment-17813306 ] ASF GitHub Bot commented on HDFS-17359: --- hfutatzhanghb commented on PR #6503: URL: https://github.com/apache/hadoop/pull/6503#issuecomment-1921573095

> Merged. Thanks for your contribution! @hfutatzhanghb

Sir, thanks a lot for reviewing and merging ~

> EC: recheck failed streamers only after flushing all packets.
>
> Key: HDFS-17359
> URL: https://issues.apache.org/jira/browse/HDFS-17359
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ec
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.3.9, 3.4.1, 3.5.0
>
> In method DFSStripedOutputStream#checkStreamerFailures, we have the code below:
> {code:java}
> Set<StripedDataStreamer> newFailed = checkStreamers();
> if (newFailed.size() == 0) {
>   return;
> }
> if (isNeedFlushAllPackets) {
>   // for healthy streamers, wait till all of them have fetched the new block
>   // and flushed out all the enqueued packets.
>   flushAllInternals();
> }
> // recheck failed streamers again after the flush
> newFailed = checkStreamers();
> {code}
> We had better move the re-check logic inside the if condition to avoid a useless invocation.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
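[Editor's note] The refactoring merged in this thread amounts to moving the second `checkStreamers()` call inside the flush branch. The following is a minimal sketch of that idea, assuming the `DFSStripedOutputStream` internals quoted above (`checkStreamers()`, `flushAllInternals()`, and the `isNeedFlushAllPackets` flag); it illustrates the change being discussed, not the exact committed patch:

{code:java}
// Sketch only: the enclosing class is assumed to be DFSStripedOutputStream,
// and the surrounding failure handling is elided.
private void checkStreamerFailures(boolean isNeedFlushAllPackets)
    throws IOException {
  Set<StripedDataStreamer> newFailed = checkStreamers();
  if (newFailed.size() == 0) {
    return;
  }
  if (isNeedFlushAllPackets) {
    // For healthy streamers, wait till all of them have fetched the new
    // block and flushed out all the enqueued packets.
    flushAllInternals();
    // Re-check only here: as the thread argues, the second pass is
    // redundant on the no-flush path, where the first result still stands.
    newFailed = checkStreamers();
  }
  // ... handle newFailed as before ...
}
{code}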
[jira] [Resolved] (HDFS-17359) EC: recheck failed streamers only after flushing all packets.
[ https://issues.apache.org/jira/browse/HDFS-17359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma resolved HDFS-17359.
-
Fix Version/s: 3.3.9
               3.4.1
               3.5.0
   Resolution: Fixed

> EC: recheck failed streamers only after flushing all packets.
>
> Key: HDFS-17359
> URL: https://issues.apache.org/jira/browse/HDFS-17359
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ec
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.3.9, 3.4.1, 3.5.0
>
> In method DFSStripedOutputStream#checkStreamerFailures, we have the code below:
> {code:java}
> Set<StripedDataStreamer> newFailed = checkStreamers();
> if (newFailed.size() == 0) {
>   return;
> }
> if (isNeedFlushAllPackets) {
>   // for healthy streamers, wait till all of them have fetched the new block
>   // and flushed out all the enqueued packets.
>   flushAllInternals();
> }
> // recheck failed streamers again after the flush
> newFailed = checkStreamers();
> {code}
> We had better move the re-check logic inside the if condition to avoid a useless invocation.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17359) EC: recheck failed streamers only after flushing all packets.
[ https://issues.apache.org/jira/browse/HDFS-17359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813269#comment-17813269 ] ASF GitHub Bot commented on HDFS-17359: --- tasanuma merged PR #6503: URL: https://github.com/apache/hadoop/pull/6503

> EC: recheck failed streamers only after flushing all packets.
>
> Key: HDFS-17359
> URL: https://issues.apache.org/jira/browse/HDFS-17359
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ec
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Minor
> Labels: pull-request-available
>
> In method DFSStripedOutputStream#checkStreamerFailures, we have the code below:
> {code:java}
> Set<StripedDataStreamer> newFailed = checkStreamers();
> if (newFailed.size() == 0) {
>   return;
> }
> if (isNeedFlushAllPackets) {
>   // for healthy streamers, wait till all of them have fetched the new block
>   // and flushed out all the enqueued packets.
>   flushAllInternals();
> }
> // recheck failed streamers again after the flush
> newFailed = checkStreamers();
> {code}
> We had better move the re-check logic inside the if condition to avoid a useless invocation.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17359) EC: recheck failed streamers only after flushing all packets.
[ https://issues.apache.org/jira/browse/HDFS-17359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813270#comment-17813270 ] ASF GitHub Bot commented on HDFS-17359: --- tasanuma commented on PR #6503: URL: https://github.com/apache/hadoop/pull/6503#issuecomment-1921517899

Merged. Thanks for your contribution! @hfutatzhanghb

> EC: recheck failed streamers only after flushing all packets.
>
> Key: HDFS-17359
> URL: https://issues.apache.org/jira/browse/HDFS-17359
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ec
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Minor
> Labels: pull-request-available
>
> In method DFSStripedOutputStream#checkStreamerFailures, we have the code below:
> {code:java}
> Set<StripedDataStreamer> newFailed = checkStreamers();
> if (newFailed.size() == 0) {
>   return;
> }
> if (isNeedFlushAllPackets) {
>   // for healthy streamers, wait till all of them have fetched the new block
>   // and flushed out all the enqueued packets.
>   flushAllInternals();
> }
> // recheck failed streamers again after the flush
> newFailed = checkStreamers();
> {code}
> We had better move the re-check logic inside the if condition to avoid a useless invocation.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17368) HA: Standby should exit safemode when resources recover from low availability
[ https://issues.apache.org/jira/browse/HDFS-17368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813232#comment-17813232 ] ASF GitHub Bot commented on HDFS-17368: --- zhuzilong2013 opened a new pull request, #6518: URL: https://github.com/apache/hadoop/pull/6518

### Description of PR
Refer to HDFS-17368. The NameNodeResourceMonitor automatically enters safemode when it detects that resources are not sufficient. The NNRM runs only on the ANN. If both the ANN and the SNN enter safemode due to low resources, and later the SNN's disk space is restored, the SNN will become the ANN and the ANN will become the SNN. However, at this point the new SNN will not exit safemode, even after its disk has recovered.

Consider the following scenario:
- Initially, nn-1 is active and nn-2 is standby. Both nn-1 and nn-2 have insufficient resources in dfs.namenode.name.dir; the NameNodeResourceMonitor detects the resource issue and puts nn-1 into safemode.
- At this point, nn-1 is in safemode (ON) and active, while nn-2 is in safemode (OFF) and standby.
- After a period of time, the resources in nn-2's dfs.namenode.name.dir recover, triggering a failover.
- Now, nn-1 is in safemode (ON) and standby, while nn-2 is in safemode (OFF) and active.
- Afterward, the resources in nn-1's dfs.namenode.name.dir recover.
- However, since nn-1 is standby but in safemode (ON), it is unable to exit safemode automatically.

If the SNN is detected to be in safemode because of low resources, it will exit.

### How was this patch tested?
Tested in a production environment.

### For code changes:
- [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> HA: Standby should exit safemode when resources recover from low availability
>
> Key: HDFS-17368
> URL: https://issues.apache.org/jira/browse/HDFS-17368
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Zilong Zhu
> Priority: Major
>
> The NameNodeResourceMonitor automatically enters safemode when it detects that resources are not sufficient. The NNRM runs only on the ANN. If both the ANN and the SNN enter safemode due to low resources, and later the SNN's disk space is restored, the SNN will become the ANN and the ANN will become the SNN. However, at this point the new SNN will not exit safemode, even after its disk has recovered.
> Consider the following scenario:
> * Initially, nn-1 is active and nn-2 is standby. Both nn-1 and nn-2 have insufficient resources in dfs.namenode.name.dir; the NameNodeResourceMonitor detects the resource issue and puts nn-1 into safemode.
> * At this point, nn-1 is in safemode (ON) and active, while nn-2 is in safemode (OFF) and standby.
> * After a period of time, the resources in nn-2's dfs.namenode.name.dir recover, triggering a failover.
> * Now, nn-1 is in safemode (ON) and standby, while nn-2 is in safemode (OFF) and active.
> * Afterward, the resources in nn-1's dfs.namenode.name.dir recover.
> * However, since nn-1 is standby but in safemode (ON), it is unable to exit safemode automatically.
> There are two possible ways to fix this issue:
> # If the SNN is detected to be in safemode because of low resources, it will exit.
> # Or, since we already have HDFS-17231, we can revert HDFS-2914, bringing the NNRM back to the SNN.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
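[Editor's note] A minimal sketch of option 1 above, i.e. letting the standby leave a low-resource safemode once its disk recovers. `NameNodeResourceChecker#hasAvailableDiskSpace()` and `FSNamesystem#isInSafeMode()`/`leaveSafeMode()` exist in the codebase; the method name, its placement, and the `lowResourcesSafeMode` flag below are illustrative assumptions, not the actual patch:

{code:java}
// Sketch only: assumed to run periodically on the standby NameNode,
// e.g. from a monitor thread comparable to the ANN's NNRM.
private void checkAndExitLowResourceSafeMode() throws IOException {
  if (!isInSafeMode()) {
    return;
  }
  // Only a safemode entered because of low resources may auto-exit;
  // startup safemode and manually entered safemode must be left alone.
  if (lowResourcesSafeMode && nnResourceChecker.hasAvailableDiskSpace()) {
    LOG.info("NameNode disk resources are available again. "
        + "Leaving safe mode on the standby.");
    leaveSafeMode();
  }
}
{code}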
[jira] [Updated] (HDFS-17368) HA: Standby should exit safemode when resources recover from low availability
[ https://issues.apache.org/jira/browse/HDFS-17368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-17368:
--
Labels: pull-request-available (was: )

> HA: Standby should exit safemode when resources recover from low availability
>
> Key: HDFS-17368
> URL: https://issues.apache.org/jira/browse/HDFS-17368
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Zilong Zhu
> Priority: Major
> Labels: pull-request-available
>
> The NameNodeResourceMonitor automatically enters safemode when it detects that resources are not sufficient. The NNRM runs only on the ANN. If both the ANN and the SNN enter safemode due to low resources, and later the SNN's disk space is restored, the SNN will become the ANN and the ANN will become the SNN. However, at this point the new SNN will not exit safemode, even after its disk has recovered.
> Consider the following scenario:
> * Initially, nn-1 is active and nn-2 is standby. Both nn-1 and nn-2 have insufficient resources in dfs.namenode.name.dir; the NameNodeResourceMonitor detects the resource issue and puts nn-1 into safemode.
> * At this point, nn-1 is in safemode (ON) and active, while nn-2 is in safemode (OFF) and standby.
> * After a period of time, the resources in nn-2's dfs.namenode.name.dir recover, triggering a failover.
> * Now, nn-1 is in safemode (ON) and standby, while nn-2 is in safemode (OFF) and active.
> * Afterward, the resources in nn-1's dfs.namenode.name.dir recover.
> * However, since nn-1 is standby but in safemode (ON), it is unable to exit safemode automatically.
> There are two possible ways to fix this issue:
> # If the SNN is detected to be in safemode because of low resources, it will exit.
> # Or, since we already have HDFS-17231, we can revert HDFS-2914, bringing the NNRM back to the SNN.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17342) Fix DataNode may invalidate normal block causing missing block
[ https://issues.apache.org/jira/browse/HDFS-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813204#comment-17813204 ] ASF GitHub Bot commented on HDFS-17342: --- smarthanwang commented on code in PR #6464: URL: https://github.com/apache/hadoop/pull/6464#discussion_r1474440780

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
## @@ -2011,4 +2011,95 @@ public void tesInvalidateMissingBlock() throws Exception {
     cluster.shutdown();
   }
 }
+
+  @Test
+  public void testCheckFilesWhenInvalidateMissingBlock() throws Exception {
+    long blockSize = 1024;
+    int heartbeatInterval = 1;
+    HdfsConfiguration c = new HdfsConfiguration();
+    c.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, heartbeatInterval);
+    c.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+    MiniDFSCluster cluster = new MiniDFSCluster.Builder(c)
+        .numDataNodes(1).build();
+    DataNodeFaultInjector oldDnInjector = DataNodeFaultInjector.get();
+    try {
+      cluster.waitActive();
+      GenericTestUtils.LogCapturer logCapturer =
+          GenericTestUtils.LogCapturer.captureLogs(DataNode.LOG);
+      BlockReaderTestUtil util =
+          new BlockReaderTestUtil(cluster, new HdfsConfiguration(conf));
+      Path path = new Path("/testFile");
+      util.writeFile(path, 1);
+      String bpid = cluster.getNameNode().getNamesystem().getBlockPoolId();
+      DataNode dn = cluster.getDataNodes().get(0);
+      FsDatasetImpl dnFSDataset = (FsDatasetImpl) dn.getFSDataset();
+      List<ReplicaInfo> replicaInfos = dnFSDataset.getFinalizedBlocks(bpid);
+      assertEquals(1, replicaInfos.size());
+      DFSTestUtil.readFile(cluster.getFileSystem(), path);
+      LocatedBlock blk = util.getFileBlocks(path, 512).get(0);
+      ExtendedBlock block = blk.getBlock();
+
+      // Append a new block with an incremented generation stamp.
+      long newGS = block.getGenerationStamp() + 1;
+      dnFSDataset.append(block, newGS, 1024);
+      block.setGenerationStamp(newGS);
+      ReplicaInfo tmpReplicaInfo = dnFSDataset.getReplicaInfo(blk.getBlock());
+
+      DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+        @Override
+        public void delayGetMetaDataInputStream() {
+          try {
+            Thread.sleep(8000);
+          } catch (InterruptedException e) {
+            // Ignore exception.
+          }
+        }
+      };
+      // Delay to getMetaDataInputStream.
+      DataNodeFaultInjector.set(injector);
+
+      ExecutorService executorService = Executors.newFixedThreadPool(2);
+      try {
+        Future<?> blockReaderFuture = executorService.submit(() -> {
+          try {
+            // Submit tasks for reading block.
+            BlockReader blockReader = BlockReaderTestUtil.getBlockReader(
+                cluster.getFileSystem(), blk, 0, 512);
+            blockReader.close();
+          } catch (IOException e) {
+            // Ignore exception.
+          }
+        });
+
+        Future<?> finalizeBlockFuture = executorService.submit(() -> {
+          try {
+            // Submit tasks for finalizing block.
+            Thread.sleep(1000);
+            dnFSDataset.finalizeBlock(block, false);
+          } catch (Exception e) {
+            // Ignore exception.
+          }
+        });
+
+        // Wait for both tasks to complete.
+        blockReaderFuture.get();
+        finalizeBlockFuture.get();
+      } finally {
+        executorService.shutdown();
+      }
+
+      // Validate the replica exists.
+      assertNotNull(dnFSDataset.getReplicaInfo(blk.getBlock()));
+
+      // Check DN log for FileNotFoundException.
+      String expectedMsg = String.format("opReadBlock %s received exception " +
+          "java.io.FileNotFoundException: %s (No such file or directory)",
+          blk.getBlock(), tmpReplicaInfo.getMetadataURI().getPath());
+      assertTrue("Expected log message not found in DN log.",

Review Comment:
> In fact, the current case `block from rbw to finalized` will throw `FNE` anyway.

OK, can we reproduce it?
> Fix DataNode may invalidate normal block causing missing block
>
> Key: HDFS-17342
> URL: https://issues.apache.org/jira/browse/HDFS-17342
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Haiyang Hu
> Assignee: Haiyang Hu
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.5.0
>
> When users read an append file, occasional exceptions may occur, such as org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: xxx. This can happen if one thread is reading the block while the writer thread is finalizing it simultaneously.
> *Root cause:*
> # The
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813188#comment-17813188 ] ASF GitHub Bot commented on HDFS-17299: --- hadoop-yetus commented on PR #6513: URL: https://github.com/apache/hadoop/pull/6513#issuecomment-1921215841

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 47s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 1s | | The patch appears to include 3 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 25s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 35m 12s | | trunk passed |
| +1 :green_heart: | compile | 6m 11s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 5m 51s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 19s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 52s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 17s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 5m 52s | | trunk passed |
| +1 :green_heart: | shadedclient | 40m 8s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 58s | | the patch passed |
| +1 :green_heart: | compile | 5m 55s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 5m 55s | | the patch passed |
| +1 :green_heart: | compile | 5m 46s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 5m 46s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 19s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 22 new + 244 unchanged - 0 fixed = 266 total (was 244) |
| +1 :green_heart: | mvnsite | 2m 3s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 5s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 6m 3s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 17s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 24s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 263m 10s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. |
| | | | 451m 8s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6513/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6513 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux ea52e28102f7 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b1169ef10af88b7a4f071cad298d2a663b5f4801 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions |
[jira] [Created] (HDFS-17368) HA: Standby should exit safemode when resources recover from low availability
Zilong Zhu created HDFS-17368:
-
Summary: HA: Standby should exit safemode when resources recover from low availability
Key: HDFS-17368
URL: https://issues.apache.org/jira/browse/HDFS-17368
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Zilong Zhu

The NameNodeResourceMonitor automatically enters safemode when it detects that resources are not sufficient. The NNRM runs only on the ANN. If both the ANN and the SNN enter safemode due to low resources, and later the SNN's disk space is restored, the SNN will become the ANN and the ANN will become the SNN. However, at this point the new SNN will not exit safemode, even after its disk has recovered.

Consider the following scenario:
* Initially, nn-1 is active and nn-2 is standby. Both nn-1 and nn-2 have insufficient resources in dfs.namenode.name.dir; the NameNodeResourceMonitor detects the resource issue and puts nn-1 into safemode.
* At this point, nn-1 is in safemode (ON) and active, while nn-2 is in safemode (OFF) and standby.
* After a period of time, the resources in nn-2's dfs.namenode.name.dir recover, triggering a failover.
* Now, nn-1 is in safemode (ON) and standby, while nn-2 is in safemode (OFF) and active.
* Afterward, the resources in nn-1's dfs.namenode.name.dir recover.
* However, since nn-1 is standby but in safemode (ON), it is unable to exit safemode automatically.

There are two possible ways to fix this issue:
# If the SNN is detected to be in safemode because of low resources, it will exit.
# Or, since we already have HDFS-17231, we can revert HDFS-2914, bringing the NNRM back to the SNN.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17146) Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
[ https://issues.apache.org/jira/browse/HDFS-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813170#comment-17813170 ] ASF GitHub Bot commented on HDFS-17146: --- hadoop-yetus commented on PR #6504: URL: https://github.com/apache/hadoop/pull/6504#issuecomment-1921097299

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 32s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 37s | | trunk passed |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 32s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 49s | | trunk passed |
| -1 :x: | shadedclient | 42m 13s | | branch has errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 28s | | the patch passed |
| +1 :green_heart: | compile | 1m 40s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 40s | | the patch passed |
| +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 23s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 14s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 27s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 48s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 17s | | the patch passed |
| -1 :x: | shadedclient | 5m 13s | | patch has errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 0m 24s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +0 :ok: | asflicense | 0m 23s | | ASF License check generated no output? |
| | | | 117m 8s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6504 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux f7594edf787d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 67f9b94cbf3ae67ca48c087f5f51618bef297b2d |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/4/testReport/ |
| Max. process+thread count | 89 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
|
[jira] [Updated] (HDFS-17367) Add PercentUsed for Different StorageTypes in JMX
[ https://issues.apache.org/jira/browse/HDFS-17367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17367:
-
Target Version/s: 3.5.0

> Add PercentUsed for Different StorageTypes in JMX
>
> Key: HDFS-17367
> URL: https://issues.apache.org/jira/browse/HDFS-17367
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: metrics, namenode
> Affects Versions: 3.5.0
> Reporter: Hualong Zhang
> Assignee: Hualong Zhang
> Priority: Major
>
> Currently, the NameNode only displays PercentUsed for the entire cluster. We plan to add corresponding PercentUsed metrics for different StorageTypes.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17367) Add PercentUsed for Different StorageTypes in JMX
[ https://issues.apache.org/jira/browse/HDFS-17367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17367:
-
Component/s: metrics

> Add PercentUsed for Different StorageTypes in JMX
>
> Key: HDFS-17367
> URL: https://issues.apache.org/jira/browse/HDFS-17367
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: metrics, namenode
> Affects Versions: 3.5.0
> Reporter: Hualong Zhang
> Assignee: Hualong Zhang
> Priority: Major
>
> Currently, the NameNode only displays PercentUsed for the entire cluster. We plan to add corresponding PercentUsed metrics for different StorageTypes.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17365) EC: Add extra redundancy configuration in checkStreamerFailures to prevent data loss.
[ https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813122#comment-17813122 ] ASF GitHub Bot commented on HDFS-17365: --- hadoop-yetus commented on PR #6517: URL: https://github.com/apache/hadoop/pull/6517#issuecomment-1920857801

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 23s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 23s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 40s | | trunk passed |
| +1 :green_heart: | compile | 3m 11s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 3m 3s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 44s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 15s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 58s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 0s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 19s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 3m 14s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 3m 14s | | the patch passed |
| +1 :green_heart: | compile | 2m 58s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 2m 58s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/2/artifact/out/blanks-eol.txt) | The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 0m 36s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/2/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) |
| +1 :green_heart: | mvnsite | 1m 1s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 9s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 3s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 54s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 204m 46s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 319m 14s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
| | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6517 |
| Optional Tests | dupname asflicense compile javac javadoc
[jira] [Created] (HDFS-17367) Add PercentUsed for Different StorageTypes in JMX
Hualong Zhang created HDFS-17367:
-
Summary: Add PercentUsed for Different StorageTypes in JMX
Key: HDFS-17367
URL: https://issues.apache.org/jira/browse/HDFS-17367
Project: Hadoop HDFS
Issue Type: Improvement
Components: namenode
Affects Versions: 3.5.0
Reporter: Hualong Zhang
Assignee: Hualong Zhang

Currently, the NameNode only displays PercentUsed for the entire cluster. We plan to add corresponding PercentUsed metrics for different StorageTypes.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
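[Editor's note] For context on the metric being proposed: the NameNode already tracks per-type capacity and usage in `StorageTypeStats` (org.apache.hadoop.hdfs.server.blockmanagement), so a per-type PercentUsed can mirror the cluster-wide formula, used * 100 / capacity. The helper below is an illustrative sketch under that assumption, not the actual patch; the class name and the source of the stats map are hypothetical.

{code:java}
import java.util.Map;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.server.blockmanagement.StorageTypeStats;

public final class StorageTypePercentUsed {
  private StorageTypePercentUsed() {}

  /** PercentUsed per type: used * 100 / capacity, or 0 when capacity is 0. */
  public static float percentUsed(StorageTypeStats stats) {
    long capacity = stats.getCapacityTotal();
    return capacity == 0 ? 0.0f
        : stats.getCapacityUsed() * 100.0f / capacity;
  }

  /** Convenience: log every entry of a StorageType -> stats map. */
  public static void print(Map<StorageType, StorageTypeStats> statsMap) {
    statsMap.forEach((type, stats) ->
        System.out.printf("%s: %.2f%%%n", type, percentUsed(stats)));
  }
}
{code}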
[jira] [Commented] (HDFS-17146) Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
[ https://issues.apache.org/jira/browse/HDFS-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813121#comment-17813121 ] ASF GitHub Bot commented on HDFS-17146: --- hadoop-yetus commented on PR #6504: URL: https://github.com/apache/hadoop/pull/6504#issuecomment-1920846066

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 11m 53s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 42m 5s | | trunk passed |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 11s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 18s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 14s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 48s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 10s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 4s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 4s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/3/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | checkstyle | 0m 55s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 11s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 53s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 13s | | the patch passed |
| +1 :green_heart: | shadedclient | 34m 29s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 226m 11s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | | 370m 16s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6504 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux b6a0c9ef9ec1 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 62249f5699cb3a132fb05222c1f53e0cbd05ca84 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/3/testReport/ |
| Max. process+thread count | 3644 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output |
[jira] [Commented] (HDFS-17360) Record the number of times a block is read during a certain time period.
[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813117#comment-17813117 ] ASF GitHub Bot commented on HDFS-17360: --- huangzhaobo99 commented on PR #6505: URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1920822366

> @huangzhaobo99 From my personal perspective, this PR is valuable. However, I don't think it's a good idea to write this information into JMX, because JMX is meant for collecting statistical information. If we include detailed information, it might cause some complications.

@slfan1989 The idea mainly comes from the DatanodeNetworkCounts metric, which is more complex and does not even perform cleaning operations on its keys. It should not be a problem, and this PR also adds a switch.

> Record the number of times a block is read during a certain time period.
>
> Key: HDFS-17360
> URL: https://issues.apache.org/jira/browse/HDFS-17360
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: huangzhaobo
> Assignee: huangzhaobo
> Priority: Major
> Labels: pull-request-available

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17365) EC: Add extra redundancy configuration in checkStreamerFailures to prevent data loss.
[ https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813114#comment-17813114 ] ASF GitHub Bot commented on HDFS-17365: --- hadoop-yetus commented on PR #6517: URL: https://github.com/apache/hadoop/pull/6517#issuecomment-1920818666

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 17s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 18s | | trunk passed |
| +1 :green_heart: | compile | 2m 57s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 2m 47s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 41s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 9s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 0s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 31s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 53s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 0s | | the patch passed |
| +1 :green_heart: | compile | 2m 48s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 2m 48s | | the patch passed |
| +1 :green_heart: | compile | 2m 50s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 2m 50s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/1/artifact/out/blanks-eol.txt) | The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 0m 39s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/1/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) |
| +1 :green_heart: | mvnsite | 1m 4s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 44s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 4s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 41s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 196m 45s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. |
| | | | 308m 58s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6517 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell
[jira] [Commented] (HDFS-17360) Record the number of times a block is read during a certain time period.
[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813104#comment-17813104 ] ASF GitHub Bot commented on HDFS-17360: --- slfan1989 commented on PR #6505: URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1920766819

@huangzhaobo99 From my personal perspective, this PR is valuable. However, I don't think it's a good idea to write this information into JMX, because JMX is meant for collecting statistical information. If we include detailed information, it might cause some complications.

> Record the number of times a block is read during a certain time period.
>
> Key: HDFS-17360
> URL: https://issues.apache.org/jira/browse/HDFS-17360
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: huangzhaobo
> Assignee: huangzhaobo
> Priority: Major
> Labels: pull-request-available

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
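[Editor's note] The pattern debated in this thread is a switch-guarded, bounded per-block read counter, in the spirit of the DatanodeNetworkCounts cache mentioned above. The sketch below is illustrative only; the class name, constructor parameters, and any config key that would feed them are assumptions, not the PR's actual code.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class BlockReadCounter {
  private final boolean enabled;        // config switch, off by default
  private final int maxTrackedBlocks;   // bound the map so JMX output stays small
  private final ConcurrentHashMap<Long, LongAdder> readsPerBlock =
      new ConcurrentHashMap<>();

  public BlockReadCounter(boolean enabled, int maxTrackedBlocks) {
    this.enabled = enabled;
    this.maxTrackedBlocks = maxTrackedBlocks;
  }

  /** Called from the read path with the blockId that was served. */
  public void incrRead(long blockId) {
    if (!enabled) {
      return;
    }
    if (readsPerBlock.size() >= maxTrackedBlocks
        && !readsPerBlock.containsKey(blockId)) {
      return; // drop new keys rather than growing without bound
    }
    readsPerBlock.computeIfAbsent(blockId, k -> new LongAdder()).increment();
  }

  /** Reset at the end of each reporting window ("certain time period"). */
  public void clear() {
    readsPerBlock.clear();
  }
}
{code}

Bounding the map and clearing it per window is one way to address the JMX-bloat concern raised above while keeping the per-block counts the feature asks for.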