cshannon opened a new pull request, #4811: URL: https://github.com/apache/accumulo/pull/4811
Update a few ITs that were stopping a normal server instance and then starting a test implementation, by taking advantage of the changes in #4643 that allow setting the server instance type before startup. This simplifies the tests and also avoids the odd behavior caused by servers starting up and then being shut down almost immediately. I looked through the other tests and did not see many that could be updated to use the new method, since some need to keep the original servers around, but these were the ones I found.

This closes #4644

**Note:** Another nice side effect is that these changes fix the hanging `ExternalCompaction_2_IT` test I was having issues with and mentioned in the gRPC prototype in this [comment](https://github.com/apache/accumulo/pull/4715#issuecomment-2241699447). The hang was not specific to gRPC; it came from starting and then immediately stopping the first compactor processes, combined with how the new async future and long polling work in the compaction queue. Before these changes, the test would kill the original compactors and start new ones of type `ExternalDoNothingCompactor`. Because of the long polling change, there is a race condition: a job is assigned to a future when the original compactor requests one, and when that compactor dies, the new compactor that starts up never gets a job assigned because the previous future never times out for the dead compactor, even though the job itself was cleaned up by the dead compaction detector. Making sure the future times out still needs to be fixed separately, but the test change here addresses the root cause by only ever starting the `ExternalDoNothingCompactor` in the first place.
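
To illustrate the shape of the test simplification, here is a rough sketch of configuring the test compactor implementation before the cluster starts. This is not the actual diff: the `setServerClass(...)` hook and the test class name are assumed placeholders standing in for whatever configuration method #4643 actually added, and the real API may differ.

```java
import org.apache.accumulo.harness.SharedMiniClusterBase;
import org.apache.accumulo.minicluster.ServerType;
import org.apache.accumulo.test.compaction.ExternalDoNothingCompactor;
import org.junit.jupiter.api.BeforeAll;

// Sketch only: run the compactors as ExternalDoNothingCompactor from the start,
// instead of killing the real compactors and starting test ones afterwards.
// setServerClass(...) is an assumed name for the hook added in #4643.
public class ExternalCompactionSketchIT extends SharedMiniClusterBase {

  @BeforeAll
  public static void setup() throws Exception {
    SharedMiniClusterBase.startMiniClusterWithConfig((cfg, coreSite) -> {
      // Assumed hook: set the compactor server class before startup so no standard
      // compactor process is ever started and then immediately shut down.
      cfg.setServerClass(ServerType.COMPACTOR, ExternalDoNothingCompactor.class);
    });
  }
}
```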

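For the remaining future-timeout issue, here is a minimal, non-Accumulo sketch of the race and the kind of guard that would prevent it, using plain `CompletableFuture.orTimeout`. The compaction queue code may need to handle this differently; this only shows why a reservation held by a dead compactor should expire.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Sketch of the race described above: a job future handed to a compactor that later
// dies never completes, so the queue keeps the job reserved and the replacement
// compactor never receives it. Attaching a timeout lets the reservation expire so the
// job can be re-queued for the next poller.
public class LongPollTimeoutSketch {
  public static void main(String[] args) throws Exception {
    CompletableFuture<String> pendingJob = new CompletableFuture<>();

    pendingJob.orTimeout(2, TimeUnit.SECONDS)
        .whenComplete((job, err) -> {
          if (err != null) {
            // The original compactor died and never completed the future; the timeout
            // fires and the job can be handed to the next compactor that polls.
            System.out.println("reservation expired, re-queueing job");
          } else {
            System.out.println("job assigned: " + job);
          }
        });

    // Simulate the dead compactor: the future is never completed normally.
    Thread.sleep(3000);
  }
}
```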