[jira] [Resolved] (HDFS-17457) [FGL] UTs support fine-grained locking

2024-04-22 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei resolved HDFS-17457.

Resolution: Fixed

> [FGL] UTs support fine-grained locking
> --
>
> Key: HDFS-17457
> URL: https://issues.apache.org/jira/browse/HDFS-17457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> [FGL] UTs support fine-grained locking



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2024-04-22 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1567/

No changes




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-httpfs 
   Redundant nullcheck of xAttrs, which is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:[line 1373] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may 
return null, but is declared @Nonnull At ServiceScheduler.java:is declared 
@Nonnull At ServiceScheduler.java:[line 555] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-rbf 
   Redundant nullcheck of dns, which is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:[line 1093] 

spotbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of xAttrs, which is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:[line 1373] 
   Redundant nullcheck of dns, which is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:[line 1093] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may 
return null, but is declared @Nonnull At ServiceScheduler.java:is declared 
@Nonnull At ServiceScheduler.java:[line 555] 

spotbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services
 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may 
return null, but is declared @Nonnull At ServiceScheduler.java:is declared 
@Nonnull At ServiceScheduler.java:[line 555] 

spotbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may 
return null, but is declared @Nonnull At ServiceScheduler.java:is declared 
@Nonnull At ServiceScheduler.java:[line 555] 

spotbugs :

   module:hadoop-yarn-project 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may 
return null, but is declared @Nonnull At ServiceScheduler.java:is declared 
@Nonnull At ServiceScheduler.java:[line 555] 

spotbugs :

   module:root 
   Redundant nullcheck of xAttrs, which is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:is known to be non-null in 

[jira] [Created] (HDFS-17495) Change FSNamesystem.digest to use a configurable algorithm.

2024-04-22 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created HDFS-17495:
-

 Summary: Change FSNamesystem.digest to use a configurable 
algorithm.
 Key: HDFS-17495
 URL: https://issues.apache.org/jira/browse/HDFS-17495
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Tsz-wo Sze
Assignee: Tsz-wo Sze


FSNamesystem.digest is currently hardcoded to use the MD5 algorithm.  This Jira 
is to make the algorithm configurable.
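Since java.security.MessageDigest already takes the algorithm name as a string, the change can be sketched as reading that name from configuration. The class below is an illustrative assumption, not the actual patch; the real change would read the algorithm name from an HDFS configuration key.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch; names here are not from the HDFS codebase.
class ConfigurableDigest {
    static final String DEFAULT_ALGORITHM = "MD5"; // current hardcoded behavior

    static String digest(String data, String algorithm) {
        try {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            byte[] hash = md.digest(data.getBytes(StandardCharsets.UTF_8));
            // Zero-padded lowercase hex, same width as the raw digest.
            return String.format("%0" + (hash.length * 2) + "x",
                new BigInteger(1, hash));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalArgumentException(
                "Unsupported digest algorithm: " + algorithm, e);
        }
    }
}
```

Any algorithm name supported by the JDK provider ("MD5", "SHA-256", ...) then works without further code changes.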






Re: [ANNOUNCE] New Hadoop Committer - Haiyang Hu

2024-04-22 Thread haiyang hu
Thanks!  Looking forward to participating more in the Apache Hadoop
community!

Yuanbo Liu wrote on Mon, Apr 22, 2024 at 15:26:

> Congratulations
>
> On Mon, Apr 22, 2024 at 12:14 PM Ayush Saxena  wrote:
>
>> Congratulations Haiyang!!!
>>
>> -Ayush
>>
>> > On 22 Apr 2024, at 9:41 AM, Xiaoqiao He  wrote:
>> >
>> > I am pleased to announce that Haiyang Hu has been elected as
>> > a committer on the Apache Hadoop project. We appreciate all of
>> > Haiyang's work, and look forward to her/his continued contributions.
>> >
>> > Congratulations and Welcome, Haiyang!
>> >
>> > Best Regards,
>> > - He Xiaoqiao
>> > (On behalf of the Apache Hadoop PMC)
>> >
>> > -
>> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>>
>>


[jira] [Resolved] (HDFS-17485) Fix SpotBug in RouterRpcServer.java

2024-04-22 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu resolved HDFS-17485.
-
Resolution: Duplicate

> Fix SpotBug in RouterRpcServer.java
> ---
>
> Key: HDFS-17485
> URL: https://issues.apache.org/jira/browse/HDFS-17485
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-04-22-15-02-33-725.png
>
>
> !image-2024-04-22-15-02-33-725.png|width=1566,height=265!






[jira] [Created] (HDFS-17494) [FGL] GetFileInfo supports fine-grained locking

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17494:
---

 Summary: [FGL] GetFileInfo supports fine-grained locking
 Key: HDFS-17494
 URL: https://issues.apache.org/jira/browse/HDFS-17494
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu
Assignee: ZanderXu


[FGL] GetFileInfo supports fine-grained locking






[jira] [Created] (HDFS-17493) [FGL] Make INodeMap thread safe

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17493:
---

 Summary: [FGL] Make INodeMap thread safe
 Key: HDFS-17493
 URL: https://issues.apache.org/jira/browse/HDFS-17493
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu
Assignee: ZanderXu


Make INodeMap thread safe, since it may be accessed or updated concurrently.
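A minimal sketch of one way to make such a map thread safe is to back it with a ConcurrentHashMap. The names below are illustrative; the real INodeMap stores HDFS INode objects keyed by iNode id and has a richer API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only, not the actual HDFS INodeMap.
class ThreadSafeINodeMap {
    static final class INode {
        final long id;
        final String name;
        INode(long id, String name) { this.id = id; this.name = name; }
    }

    private final ConcurrentMap<Long, INode> map = new ConcurrentHashMap<>();

    void put(INode inode) { map.put(inode.id, inode); } // safe under concurrent updates
    INode get(long id)    { return map.get(id); }       // safe under concurrent reads
    void remove(long id)  { map.remove(id); }
    int size()            { return map.size(); }
}
```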






[jira] [Created] (HDFS-17492) [FGL] Abstract an INodeLockManager to manage acquiring and releasing locks in the directory-tree

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17492:
---

 Summary: [FGL] Abstract an INodeLockManager to manage acquiring and 
releasing locks in the directory-tree
 Key: HDFS-17492
 URL: https://issues.apache.org/jira/browse/HDFS-17492
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu
Assignee: ZanderXu


Abstract an INodeLockManager to manage acquiring and releasing locks in the 
directory tree.
 # Abstract a lock type to cover all cases in the NameNode
 # Acquire the full-path lock for the input path based on the input lock type
 # Acquire the full-path lock for the input iNodeId based on the input lock type
 # Acquire the full-path locks for multiple input paths, such as for rename and concat

 

INodeLockManager should return an IIP (INodesInPath) that contains both the iNodes and the locks.
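One plausible shape for such a manager is sketched below; the class, enum, and method names are entirely assumptions, not the design HDFS-17492 will actually adopt. It acquires read locks on every ancestor and a read or write lock on the final component, returning a closeable handle.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only.
class INodeLockManagerSketch {
    enum LockType { READ, WRITE }

    /** Closeable handle whose close() releases all held locks (no checked exception). */
    interface LockedPath extends AutoCloseable {
        @Override void close();
    }

    private final ConcurrentMap<String, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    int cachedLocks() { return locks.size(); }

    /**
     * Lock every ancestor of an absolute path (e.g. "/a/b/c") for read,
     * and the final component for read or write depending on the lock type.
     */
    LockedPath lockFullPath(String path, LockType type) {
        String[] parts = path.substring(1).split("/");
        Deque<Lock> held = new ArrayDeque<>();
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            prefix.append('/').append(parts[i]);
            ReentrantReadWriteLock rw = locks.computeIfAbsent(
                prefix.toString(), k -> new ReentrantReadWriteLock());
            Lock l = (i == parts.length - 1 && type == LockType.WRITE)
                ? rw.writeLock() : rw.readLock();
            l.lock();
            held.push(l); // leaf ends up on top, so close() releases leaf first
        }
        return () -> { while (!held.isEmpty()) held.pop().unlock(); };
    }
}
```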






[jira] [Created] (HDFS-17491) [FGL] Make getFullPathName in INode.java thread safe

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17491:
---

 Summary: [FGL] Make getFullPathName in INode.java thread safe
 Key: HDFS-17491
 URL: https://issues.apache.org/jira/browse/HDFS-17491
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu


Make getFullPathName in INode.java thread safe, so that we can safely get the 
full path of an iNode.






[jira] [Created] (HDFS-17490) [FGL] Make INodesInPath closeable

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17490:
---

 Summary: [FGL] Make INodesInPath closeable
 Key: HDFS-17490
 URL: https://issues.apache.org/jira/browse/HDFS-17490
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu
Assignee: ZanderXu


Add an array to store the locks corresponding to each iNode in INodesInPath, 
and make INodesInPath closeable.
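The idea can be sketched as a small AutoCloseable holder; the class and field names are illustrative, not the actual INodesInPath change.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the idea.
class CloseableIIP implements AutoCloseable {
    final String[] components; // stands in for the resolved iNodes
    final Lock[] locks;        // one slot per component; null = not locked

    CloseableIIP(String[] components, Lock[] locks) {
        this.components = components;
        this.locks = locks;
    }

    @Override
    public void close() {
        // Release in reverse acquisition order.
        for (int i = locks.length - 1; i >= 0; i--) {
            if (locks[i] != null) {
                locks[i].unlock();
            }
        }
    }
}
```

Callers could then hold and release the per-iNode locks with try-with-resources, e.g. `try (CloseableIIP iip = resolveAndLock(path)) { ... }`, where resolveAndLock is a hypothetical helper.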






[jira] [Created] (HDFS-17489) [FGL] Implement a LockPool

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17489:
---

 Summary: [FGL] Implement a LockPool
 Key: HDFS-17489
 URL: https://issues.apache.org/jira/browse/HDFS-17489
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu
Assignee: ZanderXu


A LockPool to manage all locks.

When acquiring a lock, the pool allocates and caches it if it is not already 
cached; when releasing a lock, the pool evicts it from memory once it is no 
longer in use.

It should also support keeping certain locks resident in memory.
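One way to get this allocate-on-acquire / evict-on-release behavior is a reference-counted pool. The sketch below is an assumption about the intended design, not the actual implementation.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of a reference-counted lock pool.
class LockPool<K> {
    private static final class Entry {
        final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        final AtomicInteger refs = new AtomicInteger();
    }

    private final ConcurrentMap<K, Entry> pool = new ConcurrentHashMap<>();

    /** Allocate (or reuse) the lock for key, bumping its reference count. */
    ReentrantReadWriteLock acquire(K key) {
        Entry e = pool.compute(key, (k, v) -> {
            if (v == null) v = new Entry();
            v.refs.incrementAndGet();
            return v;
        });
        return e.lock;
    }

    /** Drop one reference; uncache the lock once it is no longer in use. */
    void release(K key) {
        pool.computeIfPresent(key, (k, v) ->
            v.refs.decrementAndGet() == 0 ? null : v);
    }

    int size() { return pool.size(); }
}
```

A real implementation would also need to pin "persistent" locks (e.g. by never letting their count reach zero) and guard the window between acquire() returning and the caller actually locking.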






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2024-04-22 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.TestDFSInotifyEventInputStream 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.yarn.sls.TestSLSRunner 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-mvnsite-root.txt
  [572K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-javadoc-root.txt
  [36K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [452K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1370/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [28K]
   

[jira] [Created] (HDFS-17488) DN can fail IBRs with NPE when a volume is removed

2024-04-22 Thread Felix N (Jira)
Felix N created HDFS-17488:
--

 Summary: DN can fail IBRs with NPE when a volume is removed
 Key: HDFS-17488
 URL: https://issues.apache.org/jira/browse/HDFS-17488
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Felix N
Assignee: Felix N


 

Error logs
{code:java}
2024-04-22 15:46:33,422 [BP-1842952724-10.22.68.249-1713771988830 heartbeating 
to localhost/127.0.0.1:64977] ERROR datanode.DataNode 
(BPServiceActor.java:run(922)) - Exception in BPOfferService for Block pool 
BP-1842952724-10.22.68.249-1713771988830 (Datanode Uuid 
1659ffaf-1a80-4a8e-a542-643f6bd97ed4) service to localhost/127.0.0.1:64977
java.lang.NullPointerException
    at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReceivedAndDeleted(DatanodeProtocolClientSideTranslatorPB.java:246)
    at 
org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.sendIBRs(IncrementalBlockReportManager.java:218)
    at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:749)
    at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:920)
    at java.lang.Thread.run(Thread.java:748) {code}
The root cause is in BPOfferService#notifyNamenodeBlock: it can be called for a 
block belonging to a volume that was already removed, in which case the storage 
lookup returns null:

 
{code:java}
private void notifyNamenodeBlock(ExtendedBlock block, BlockStatus status,
String delHint, String storageUuid, boolean isOnTransientStorage) {
  checkBlock(block);
  final ReceivedDeletedBlockInfo info = new ReceivedDeletedBlockInfo(
  block.getLocalBlock(), status, delHint);
  final DatanodeStorage storage = dn.getFSDataset().getStorage(storageUuid);
  
  // storage == null here because it's already removed earlier.

  for (BPServiceActor actor : bpServices) {
actor.getIbrManager().notifyNamenodeBlock(info, storage,
isOnTransientStorage);
  }
} {code}
As a result, IBRs with a null storage end up queued, and sending them later fails with the NPE above.

The reason notifyNamenodeBlock can be triggered for such blocks lies further up, 
in DirectoryScanner#reconcile:
{code:java}
  public void reconcile() throws IOException {
    LOG.debug("reconcile start DirectoryScanning");
    scan();

// If a volume is removed here after scan() already finished running,
// diffs is stale and checkAndUpdate will run on a removed volume

    // HDFS-14476: run checkAndUpdate with batch to avoid holding the lock too
    // long
    int loopCount = 0;
    synchronized (diffs) {
      for (final Map.Entry entry : diffs.getEntries()) {
        dataset.checkAndUpdate(entry.getKey(), entry.getValue());        
    ...
  } {code}
Inside checkAndUpdate, memBlockInfo is null because all of the block metadata in 
memory was removed during the volume removal, but diskFile still exists. 
DataNode#notifyNamenodeDeletedBlock (and further down the line, 
notifyNamenodeBlock) is then called for this block.
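One plausible guard, shown here as a self-contained simulation rather than the actual HDFS fix, is to skip the notification when the storage lookup returns null. The map below stands in for the DataNode's storage state; all names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simulation of the missing null check; the committed fix for
// HDFS-17488 may look different.
class IbrGuardDemo {
    static final Map<String, String> storages = new ConcurrentHashMap<>();

    /**
     * Returns true if the IBR would be queued, false if it is skipped
     * because the storage (volume) was already removed.
     */
    static boolean notifyNamenodeBlock(String storageUuid) {
        String storage = storages.get(storageUuid);
        if (storage == null) {
            // Volume removed between scan() and checkAndUpdate():
            // skip instead of queueing an IBR with a null storage.
            return false;
        }
        return true; // the real code would enqueue the ReceivedDeletedBlockInfo here
    }
}
```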

 






[jira] [Created] (HDFS-17487) [FGL] Make rollEdits thread safe

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17487:
---

 Summary: [FGL] Make rollEdits thread safe
 Key: HDFS-17487
 URL: https://issues.apache.org/jira/browse/HDFS-17487
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu
Assignee: ZanderXu


rollEdits is a very commonly used RPC. It is not thread-safe, so it still needs 
to hold the global write lock, which has a big impact on performance.

 

We need to make it thread-safe so that it only needs to hold the global read 
lock, improving performance.

 






Re: [ANNOUNCE] New Hadoop Committer - Haiyang Hu

2024-04-22 Thread Yuanbo Liu
Congratulations

On Mon, Apr 22, 2024 at 12:14 PM Ayush Saxena  wrote:

> Congratulations Haiyang!!!
>
> -Ayush
>
> > On 22 Apr 2024, at 9:41 AM, Xiaoqiao He  wrote:
> >
> > I am pleased to announce that Haiyang Hu has been elected as
> > a committer on the Apache Hadoop project. We appreciate all of
> > Haiyang's work, and look forward to her/his continued contributions.
> >
> > Congratulations and Welcome, Haiyang!
> >
> > Best Regards,
> > - He Xiaoqiao
> > (On behalf of the Apache Hadoop PMC)
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDFS-17486) VIO: dumpXattrs logic optimization

2024-04-22 Thread wangzhihui (Jira)
wangzhihui created HDFS-17486:
-

 Summary: VIO: dumpXattrs logic optimization
 Key: HDFS-17486
 URL: https://issues.apache.org/jira/browse/HDFS-17486
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.3.3, 3.2.0
Reporter: wangzhihui


The dumpXattrs logic in VIO should use FSImageFormatPBINode.Loader.loadXAttrs() 
to load the XAttrs attribute, for easier maintenance.






[jira] [Created] (HDFS-17485) Fix SpotBug in RouterRpcServer.java

2024-04-22 Thread ZanderXu (Jira)
ZanderXu created HDFS-17485:
---

 Summary: Fix SpotBug in RouterRpcServer.java
 Key: HDFS-17485
 URL: https://issues.apache.org/jira/browse/HDFS-17485
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: ZanderXu
Assignee: ZanderXu
 Attachments: image-2024-04-22-15-02-33-725.png

!image-2024-04-22-15-02-33-725.png|width=1566,height=265!






[jira] [Created] (HDFS-17484) Introduce redundancy.considerLoad.minLoad to avoid excluding nodes that are not actually busy

2024-04-22 Thread farmmamba (Jira)
farmmamba created HDFS-17484:


 Summary: Introduce redundancy.considerLoad.minLoad to avoid 
excluding nodes that are not actually busy
 Key: HDFS-17484
 URL: https://issues.apache.org/jira/browse/HDFS-17484
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.4.0
Reporter: farmmamba
Assignee: farmmamba


Currently, `dfs.namenode.redundancy.considerLoad` is true by default, and 
`dfs.namenode.redundancy.considerLoad.factor` is 2.0 by default.

Consider the following situation: during a stress test, we may deploy the HDFS 
client on a datanode, so the client prefers to write to its local datanode and 
increases that machine's load. Suppose we have 3 datanodes with loads of 5.0, 
0.2, and 0.3.

 

The node with load 5.0 will be excluded when choosing datanodes for a block. 
But a load of 5.0 does not actually make a machine with 80 CPU cores a slow 
node.

 

So we should add a new configuration entry, 
`dfs.namenode.redundancy.considerLoad.minLoad`, to indicate the minimum load at 
which considerLoad takes effect.
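The proposed check could look like the sketch below. The minLoad semantics assumed here (a node is excluded only when its load also exceeds minLoad) are an interpretation of the description, not committed behavior.

```java
// Illustrative sketch of the proposed exclusion rule.
class ConsiderLoadPolicy {
    final double factor;  // dfs.namenode.redundancy.considerLoad.factor
    final double minLoad; // proposed dfs.namenode.redundancy.considerLoad.minLoad

    ConsiderLoadPolicy(double factor, double minLoad) {
        this.factor = factor;
        this.minLoad = minLoad;
    }

    /** Exclude a node only if it is loaded relative to the cluster AND in absolute terms. */
    boolean exclude(double nodeLoad, double avgLoad) {
        return nodeLoad > factor * avgLoad && nodeLoad > minLoad;
    }
}
```

With loads 5.0/0.2/0.3 the average is about 1.83, so the existing rule (factor 2.0, no minLoad) excludes the 5.0 node, while a minLoad of, say, 8.0 on an 80-core machine would keep it eligible.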


