keith-turner commented on code in PR #3653:
URL: https://github.com/apache/accumulo/pull/3653#discussion_r1273863706
##########
server/compactor/src/main/java/org/apache/accumulo/compactor/Compactor.java:
##########
@@ -426,8 +426,12 @@ protected void updateCompactionCompleted(TExternalCompactionJob job, TCompaction
    * @throws RetriesExceededException thrown when retries have been exceeded
    */
   protected TExternalCompactionJob getNextJob(Supplier<UUID> uuid) throws RetriesExceededException {
+    final long startingWaitTime =
+        getConfiguration().getTimeInMillis(Property.COMPACTOR_JOB_WAIT_TIME);
+    final long maxWaitTime =
+        getConfiguration().getTimeInMillis(Property.COMPACTOR_MAX_JOB_WAIT_TIME);
Review Comment:
Not sure if this check is needed; it may be redundant, since
RetryableThriftCall or Retry may already be doing it.
```suggestion
        getConfiguration().getTimeInMillis(Property.COMPACTOR_MAX_JOB_WAIT_TIME);
    Preconditions.checkArgument(startingWaitTime <= maxWaitTime,
        "TODO message including property names");
```
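For illustration, one way the TODO message could be filled in. This is a
sketch only, not part of the PR; it assumes Guava's Preconditions and the
existing Property.getKey() accessor are available here:
```java
import com.google.common.base.Preconditions;

// Sketch: validate the configured bounds with a message that names both
// properties. Guava substitutes each %s with the matching trailing argument.
Preconditions.checkArgument(startingWaitTime <= maxWaitTime,
    "%s (%s ms) must be less than or equal to %s (%s ms)",
    Property.COMPACTOR_JOB_WAIT_TIME.getKey(), startingWaitTime,
    Property.COMPACTOR_MAX_JOB_WAIT_TIME.getKey(), maxWaitTime);
```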
##########
core/src/main/java/org/apache/accumulo/core/conf/Property.java:
##########
@@ -1098,6 +1098,12 @@ public enum Property {
   @Experimental
   COMPACTOR_CLIENTPORT("compactor.port.client", "9133", PropertyType.PORT,
       "The port used for handling client connections on the compactor servers", "2.1.0"),
+  COMPACTOR_JOB_WAIT_TIME("compactor.wait.time.job", "1s", PropertyType.TIMEDURATION,
+      "The amount of time to wait between checks for the next compaction job", "4.0.0"),
Review Comment:
If we have a max, then making this prop a min seems more consistent.
```suggestion
  COMPACTOR_JOB_WAIT_TIME_MIN("compactor.wait.time.job.min", "1s", PropertyType.TIMEDURATION,
      "The minimum amount of time to wait between checks for the next compaction job,"
          + " growing up to the max set by compactor.wait.time.job.max.", "4.0.0"),
```
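For context, a min/max pair like this usually drives a capped backoff
between checks. The sketch below is illustrative only; the doubling policy
and the method names are assumptions, not the PR's actual implementation:
```java
import java.util.concurrent.TimeUnit;

// Illustrative capped backoff: start at the configured minimum wait and
// double after each empty check, never exceeding the configured maximum.
public class JobWaitSketch {
  static void waitForJob(long minWaitMillis, long maxWaitMillis) throws InterruptedException {
    long wait = minWaitMillis;
    while (!jobAvailable()) { // hypothetical check for a queued compaction job
      TimeUnit.MILLISECONDS.sleep(wait);
      wait = Math.min(wait * 2, maxWaitMillis); // grow toward the max, then hold
    }
  }

  static boolean jobAvailable() {
    return true; // placeholder; a real compactor would ask the coordinator
  }
}
```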
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]