DomGarguilo commented on pull request #166:
URL: https://github.com/apache/accumulo-testing/pull/166#issuecomment-973161204


   > Have you done any manual testing of this?
   
   I have done some testing. Configuring continuous ingest to write a full set of 1,000,000 
nodes at depth 25 allows the deletes to occur. All of the written nodes appear to 
be deleted once a compaction happens (I compacted manually).
   
   Something I found while doing that testing: it is not always possible to have 
all written entries deleted. For example, if I set the entries property to 
**25,000,000**, the code exits the loop 
([here](https://github.com/apache/accumulo-testing/blob/ae207b1bd7a855a8abec2fe42c2559d2bf26405b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java#L197-L198))
 before reaching the portion that initiates the deletes. If I set the entries 
to **25,000,001**, the deletes **will** happen, but another 1M entries 
are written before the next check 
([here](https://github.com/apache/accumulo-testing/blob/ae207b1bd7a855a8abec2fe42c2559d2bf26405b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java#L179-L180))
 triggers an exit, leaving the total entries at 1M. I don't think this is a 
big deal; I just thought it was interesting, and I also don't see a clean way to 
avoid it.
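   The interaction above can be sketched as a simplified model of the loop (class, 
method, and constant names here are illustrative, not the actual 
`ContinuousIngest` fields, and the two real checks are collapsed into one 
iteration for brevity): the exit check only fires at 1M-entry flush boundaries, 
so it can either fire before the delete phase is reached, or let a whole extra 
batch land after the deletes.

   ```java
   // Hypothetical sketch of the flush-boundary checks, not the real implementation.
   public class CiLoopSketch {
       static final long BATCH = 1_000_000L;       // entries written per flush
       static final long DELETE_AT = 25_000_000L;  // illustrative delete threshold

       /** Live (undeleted) entries left in the table when the loop exits. */
       static long liveEntriesAfterRun(long numEntries) {
           long written = 0, live = 0;
           while (true) {
               written += BATCH;                   // write one full batch, then flush
               live += BATCH;
               if (written >= numEntries) break;   // exit check at the flush boundary
               if (written >= DELETE_AT) live = 0; // delete phase wipes what was written
           }
           return live;
       }

       public static void main(String[] args) {
           // 25,000,000 exactly: the exit check fires first, deletes never run
           System.out.println(liveEntriesAfterRun(25_000_000L)); // 25000000
           // 25,000,001: deletes run, then one extra 1M batch lands before exit
           System.out.println(liveEntriesAfterRun(25_000_001L)); // 1000000
       }
   }
   ```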


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
