tomscut commented on a change in pull request #4009:
URL: https://github.com/apache/hadoop/pull/4009#discussion_r835865303
##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/sps/TestExternalStoragePolicySatisfier.java
##########
@@ -441,6 +442,8 @@ private void doTestWhenStoragePolicySetToCOLD() throws Exception {
hdfsCluster.triggerHeartbeats();
dfs.satisfyStoragePolicy(new Path(FILE));
+ // Assert metrics.
+ assertEquals(1, hdfsCluster.getNamesystem().getPendingSPSPaths());
// Wait till namenode notified about the block location details
DFSTestUtil.waitExpectedStorageType(FILE, StorageType.ARCHIVE, 3, 35000,
dfs);
Review comment:
> ```
> // Wait till namenode notified about the block location details
> DFSTestUtil.waitExpectedStorageType(FILE, StorageType.ARCHIVE, 3, 35000,
>     dfs);
> ```
>
> Here you are waiting for SPS to process the path and move the blocks to the correct place. Once this is done, will `getPendingSPSPaths` still return 1? I suppose not, right? The path got processed, so the count should reduce to 0.
>
> So, my take is that you don't have control over `DFSTestUtil.waitExpectedStorageType(FILE, StorageType.ARCHIVE, 3, 35000, dfs);`; if by chance SPS processes that path before your assertion, then the test will fail.
>
> I haven't gone through the code, but that is what I felt in my initial pass. If it doesn't work this way, do let me know.
Thank you @ayushtkn for the detailed explanation. You are right, I made a mistake. I'll add a new unit test that asserts the metrics without running SPS.
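The flakiness described above can be reduced to a minimal, Hadoop-free sketch. This is not the SPS implementation; the class, the `AtomicInteger` gauge standing in for `getPendingSPSPaths`, and the latch-gated worker are all hypothetical, used only to show why asserting the pending count is deterministic before the asynchronous mover runs and racy after it has been allowed to run:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class PendingPathRaceSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the pending-SPS-paths gauge.
        AtomicInteger pendingPaths = new AtomicInteger(0);
        // Gate so the background worker cannot run before we allow it;
        // this is what makes the first assertion deterministic.
        CountDownLatch workerMayRun = new CountDownLatch(1);

        // Enqueue one path for satisfaction (analogous to satisfyStoragePolicy).
        pendingPaths.incrementAndGet();

        // Background "mover" that processes the path and decrements the gauge.
        Thread mover = new Thread(() -> {
            try {
                workerMayRun.await();
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
            pendingPaths.decrementAndGet(); // path processed
        });
        mover.start();

        // Deterministic: the worker has not run yet, so the gauge must be 1.
        System.out.println("before worker: " + pendingPaths.get());

        // Release the worker and wait for it; the gauge drops to 0, so an
        // assertEquals(1, ...) placed here (or later) could intermittently fail
        // in the real test, where there is no latch controlling the worker.
        workerMayRun.countDown();
        mover.join();
        System.out.println("after worker: " + pendingPaths.get());
    }
}
```

This mirrors the fix the reply above proposes: assert the metric in a test where the asynchronous processing is not running (or not yet released), rather than between triggering it and waiting for its effect.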
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]