sodonnel commented on pull request #1111:
URL: https://github.com/apache/hadoop-ozone/pull/1111#issuecomment-648797267


   Good catch @runzhiwang 
   
   It seems to me that the iterator fails to work correctly whenever the filter excludes any of the blocks. The first time a block is filtered out, hasNext() does make recursive calls, but it discards their result and still returns false at the top level, so the iterator stops even though more blocks are available.
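   
   Just to spell out the pattern, here is a minimal toy sketch (this is not the actual KeyValueBlockIterator code; the class and method names are made up purely for illustration). The recursive call advances past the filtered entry, but its result is thrown away, so the caller is told there is nothing left:
   
   ```
   import java.util.Iterator;
   import java.util.List;
   import java.util.NoSuchElementException;
   import java.util.function.Predicate;

   // Toy filtered iterator that mimics the shape of the bug: when an entry
   // is rejected by the filter, hasNext() recurses to look further, but the
   // result of the recursive call is ignored and false is returned.
   public class FilteredIteratorSketch<T> {
     private final Iterator<T> underlying;
     private final Predicate<T> filter;
     private T next;

     public FilteredIteratorSketch(List<T> entries, Predicate<T> filter) {
       this.underlying = entries.iterator();
       this.filter = filter;
     }

     public boolean hasNext() {
       if (next != null) {
         return true;
       }
       if (underlying.hasNext()) {
         T candidate = underlying.next();
         if (!filter.test(candidate)) {
           // BUG: the recursive call may find and cache a later matching
           // entry, but its return value is discarded, so the caller is
           // told there is nothing left. The fix is: return hasNext();
           hasNext();
           return false;
         }
         next = candidate;
         return true;
       }
       return false;
     }

     public T nextEntry() {
       if (!hasNext()) {
         throw new NoSuchElementException();
       }
       T result = next;
       next = null;
       return result;
     }

     public static void main(String[] args) {
       // Ten entries; the filter keeps only values >= 5, analogous to a
       // deleting-key filter that rejects the normal blocks.
       FilteredIteratorSketch<Integer> it = new FilteredIteratorSketch<>(
           List.of(0, 1, 2, 3, 4, 5, 6, 7, 8, 9), v -> v >= 5);
       // With the buggy hasNext() above this prints nothing: entry 0 is
       // filtered out and hasNext() immediately reports false, even though
       // entries 5..9 would match.
       while (it.hasNext()) {
         System.out.println(it.nextEntry());
       }
     }
   }
   ```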
   
   I was surprised that no unit test caught this, so I checked TestKeyValueBlockIterator#testKeyValueBlockIteratorWithFilter, and it turns out it does not cover this case:
   
   ```
     @Test
     public void testKeyValueBlockIteratorWithFilter() throws Exception {
       long containerId = 103L;
       int deletedBlocks = 5;
       int normalBlocks = 5;
       createContainerWithBlocks(containerId, normalBlocks, deletedBlocks);
       String containerPath = new File(containerData.getMetadataPath())
           .getParent();
       try(KeyValueBlockIterator keyValueBlockIterator = new KeyValueBlockIterator(
           containerId, new File(containerPath), MetadataKeyFilters
           .getDeletingKeyFilter())) {
   
         int counter = 5;
         while (keyValueBlockIterator.hasNext()) {
           BlockData blockData = keyValueBlockIterator.nextBlock();
           assertEquals(blockData.getLocalID(), counter++);
         }
       }
     }
   ```
   The reason the test misses the bug is that it never checks that `counter` is incremented from 5 to 10, even though it expects to iterate over the 5 "deleting" blocks. If we add an `assertEquals(10, counter)` at the end of the test, it would catch this issue. However, there is another problem in the test: the createContainerWithBlocks method never actually creates the "deleting" blocks.
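   
   To make that second point concrete, here is a standalone toy snippet (the class name is just a placeholder) plugging in the values the test uses, normalBlocks = 5 and deletedBlocks = 5, which shows the original loop never runs:
   
   ```
   public class LoopBoundsSketch {
     public static void main(String[] args) {
       int normalBlocks = 5;
       int deletedBlocks = 5;

       // Original bounds: i starts at 5 and the condition 5 < 5 is already
       // false, so the body executes zero times and no "deleting" blocks
       // are ever written.
       int iterations = 0;
       for (int i = normalBlocks; i < deletedBlocks; i++) {
         iterations++;
       }
       System.out.println("original loop iterations: " + iterations); // 0

       // Corrected bounds (as in the diff below): 5 iterations, covering
       // block IDs 5..9 via i + normalBlocks.
       for (int i = 0; i < deletedBlocks; i++) {
         System.out.println("would create deleting block with ID "
             + (i + normalBlocks));
       }
     }
   }
   ```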
   
   With the following change to the test I can reproduce the issue and your 
change fixes it:
   
   ```
   diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueBlockIterator.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueBlockIterator.java
   index ccd3227fa..1f1423264 100644
   --- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueBlockIterator.java
   +++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueBlockIterator.java
   @@ -223,6 +223,7 @@ public void testKeyValueBlockIteratorWithFilter() throws Exception {
            BlockData blockData = keyValueBlockIterator.nextBlock();
            assertEquals(blockData.getLocalID(), counter++);
          }
   +      assertEquals(10, counter);
        }
      }
    
   @@ -276,8 +277,8 @@ private void createContainerWithBlocks(long containerId, int
                .getProtoBufMessage().toByteArray());
          }
    
   -      for (int i = normalBlocks; i < deletedBlocks; i++) {
   -        BlockID blockID = new BlockID(containerId, i);
   +      for (int i = 0; i < deletedBlocks; i++) {
   +        BlockID blockID = new BlockID(containerId, i + normalBlocks);
            BlockData blockData = new BlockData(blockID);
            blockData.setChunks(chunkList);
            metadataStore.getStore().put(StringUtils.string2Bytes(OzoneConsts
   ```
   
   Could you make that change to the tests? Then we are good to commit this change. Thanks.
   
   I was also wondering: how did you find this? Did you hit a problem at runtime on your cluster that led you to investigate?

