mcvsubbu commented on a change in pull request #7267:
URL: https://github.com/apache/pinot/pull/7267#discussion_r695144401
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/manager/realtime/LLRealtimeSegmentDataManager.java
##########
@@ -748,11 +749,22 @@ public long getLastConsumedTimestamp() {
return _lastLogTime;
}
- @VisibleForTesting
- protected StreamPartitionMsgOffset getCurrentOffset() {
+ public StreamPartitionMsgOffset getCurrentOffset() {
return _currentOffset;
}
+ public StreamPartitionMsgOffset fetchLatestStreamOffset() {
+ try (StreamMetadataProvider metadataProvider = _streamConsumerFactory
+ .createPartitionMetadataProvider(_clientId, _partitionGroupId)) {
+ return metadataProvider
+ .fetchStreamPartitionOffset(OffsetCriteria.LARGEST_OFFSET_CRITERIA, /*maxWaitTimeMs=*/5000);
Review comment:
It is better to take the max wait time as a parameter, so that the
caller can decide on this. Please javadoc the error cases, etc. (do you want to
throw a timeout exception?)
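The reviewer's suggestion could be sketched as below. `MetadataProvider` and the `Long` offset are hypothetical stand-ins for Pinot's `StreamMetadataProvider` and `StreamPartitionMsgOffset`, not the real interfaces; the javadoc shows one way to document the error contract (here: return null on timeout instead of throwing).

```java
import java.util.concurrent.TimeoutException;

public class FetchOffsetSketch {
  // Hypothetical stand-in for Pinot's StreamMetadataProvider; only here to make
  // the reshaped signature concrete.
  public interface MetadataProvider extends AutoCloseable {
    long fetchLatestOffset(long maxWaitTimeMs) throws TimeoutException;
    default void close() { }
  }

  private final MetadataProvider _provider;

  public FetchOffsetSketch(MetadataProvider provider) {
    _provider = provider;
  }

  /**
   * Fetches the latest stream offset, blocking for at most {@code maxWaitTimeMs}.
   *
   * @param maxWaitTimeMs maximum time to wait, chosen by the caller
   * @return the latest offset, or null if it could not be fetched in time
   */
  public Long fetchLatestStreamOffset(long maxWaitTimeMs) {
    try (MetadataProvider provider = _provider) {
      return provider.fetchLatestOffset(maxWaitTimeMs);
    } catch (TimeoutException e) {
      return null; // documented error case: null asks the caller to retry later
    }
  }
}
```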
##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.pinot.server.starter.helix;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Supplier;
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.model.IdealState;
+import org.apache.pinot.common.utils.LLCSegmentName;
+import org.apache.pinot.core.data.manager.InstanceDataManager;
+import org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.SegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.TableDataManager;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.stream.StreamPartitionMsgOffset;
+import org.apache.pinot.spi.utils.CommonConstants;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * This class is used at startup time to have a more accurate estimate of the catchup period in which no query execution
+ * happens and consumers try to catch up to the latest messages available in streams.
+ * To achieve this, every time status check is called, {@link #haveAllConsumingSegmentsReachedStreamLatestOffset},
+ * list of consuming segments is gathered and then for each segment, we check if segment's latest ingested offset has
+ * reached the latest stream offset. To prevent chasing a moving target, once the latest stream offset is fetched, it
+ * will not be fetched again and subsequent status check calls compare latest ingested offset with the already fetched
+ * stream offset.
+ */
+public class OffsetBasedConsumptionStatusChecker {
+ private static final Logger LOGGER = LoggerFactory.getLogger(OffsetBasedConsumptionStatusChecker.class);
+
+ private final InstanceDataManager _instanceDataManager;
+ private Supplier<Set<String>> _consumingSegmentFinder;
+
+ private Set<String> _alreadyProcessedSegments = new HashSet<>();
Review comment:
Looks like this holds the segments that have reached the target offset. Can we name it appropriately? Maybe `targetOffsetReachedSegments`, `_caughtUpSegments`, `readySegments`, or `_segmentsReadyForQueries`?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/manager/realtime/LLRealtimeSegmentDataManager.java
##########
@@ -748,11 +749,22 @@ public long getLastConsumedTimestamp() {
return _lastLogTime;
}
- @VisibleForTesting
- protected StreamPartitionMsgOffset getCurrentOffset() {
+ public StreamPartitionMsgOffset getCurrentOffset() {
return _currentOffset;
}
+ public StreamPartitionMsgOffset fetchLatestStreamOffset() {
Review comment:
You may want to add a comment here that this creates a new partition
metadata provider each time, so that whoever calls this (public) method is
aware.
##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+public class OffsetBasedConsumptionStatusChecker {
+  private static final Logger LOGGER = LoggerFactory.getLogger(OffsetBasedConsumptionStatusChecker.class);
+
+  private final InstanceDataManager _instanceDataManager;
+  private Supplier<Set<String>> _consumingSegmentFinder;
+
+  private Set<String> _alreadyProcessedSegments = new HashSet<>();
+  private Map<String, StreamPartitionMsgOffset> _segmentNameToLatestStreamOffset = new HashMap<>();
+
+  public OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager, HelixAdmin helixAdmin,
+      String helixClusterName, String instanceId) {
+    this(instanceDataManager, () -> findConsumingSegments(helixAdmin, helixClusterName, instanceId));
+  }
+
+  @VisibleForTesting
+  OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager,
+      Supplier<Set<String>> consumingSegmentFinder) {
+    _instanceDataManager = instanceDataManager;
+    _consumingSegmentFinder = consumingSegmentFinder;
+  }
+
+  public boolean haveAllConsumingSegmentsReachedStreamLatestOffset() {
+    boolean allSegsReachedLatest = true;
+    Set<String> consumingSegmentNames = _consumingSegmentFinder.get();
+    for (String segName : consumingSegmentNames) {
+      if (_alreadyProcessedSegments.contains(segName)) {
+        continue;
+      }
+      TableDataManager tableDataManager = getTableDataManager(segName);
+      if (tableDataManager == null) {
+        LOGGER.info("TableDataManager is not yet setup for segment {}. Will check consumption status later", segName);
+        return false;
+      }
+      SegmentDataManager segmentDataManager = tableDataManager.acquireSegment(segName);
+      if (segmentDataManager == null) {
+        LOGGER.info("SegmentDataManager is not yet setup for segment {}. Will check consumption status later", segName);
+        return false;
+      }
+      if (!(segmentDataManager instanceof LLRealtimeSegmentDataManager)) {
+        // There's a small chance that after getting the list of consuming segment names at the beginning of this method
+        // up to this point, a consuming segment gets converted to a committed segment. In that case status check is
+        // returned as false and in the next round the new consuming segment will be used for fetching offsets.
+        LOGGER.info("Segment {} is already committed. Will check consumption status later", segName);
+        tableDataManager.releaseSegment(segmentDataManager);
Review comment:
You should release with a try/catch/finally.
Also, can it happen that after the release the segment moves over to an `OfflineSegmentDataManager`, so that we then see a wrong-cast exception? So, grab the offset while you still hold the segment.
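The acquire/try/finally/release shape the reviewer asks for might look like this. The interfaces are hypothetical stand-ins for Pinot's managers, and offsets are reduced to `long`s; the key points are the `finally` release and reading both offsets while the segment is still held.

```java
public class ReleaseSketch {
  // Hypothetical stand-ins for Pinot's segment/table data managers, only to
  // illustrate the acquire/try/finally/release pattern.
  public interface SegmentDataManager { }
  public interface RealtimeSegmentDataManager extends SegmentDataManager {
    long getCurrentOffset();
    long fetchLatestStreamOffset();
  }
  public interface TableDataManager {
    SegmentDataManager acquireSegment(String segName);
    void releaseSegment(SegmentDataManager sdm);
  }

  /**
   * Reads {latestIngested, latestStream} while the segment is held, releasing it
   * in a finally block; returns null if the segment is no longer consuming.
   */
  public static long[] grabOffsets(TableDataManager tableDataManager, String segName) {
    SegmentDataManager sdm = tableDataManager.acquireSegment(segName);
    if (sdm == null) {
      return null;
    }
    try {
      if (!(sdm instanceof RealtimeSegmentDataManager)) {
        return null; // committed (or otherwise replaced) between listing and acquiring
      }
      RealtimeSegmentDataManager rt = (RealtimeSegmentDataManager) sdm;
      // Read both offsets while we still hold the segment, so the manager cannot
      // be swapped for an offline one between the type check and the reads.
      return new long[]{rt.getCurrentOffset(), rt.fetchLatestStreamOffset()};
    } finally {
      tableDataManager.releaseSegment(sdm); // runs even if a read throws
    }
  }
}
```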
##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+      if (!(segmentDataManager instanceof LLRealtimeSegmentDataManager)) {
+        // There's a small chance that after getting the list of consuming segment names at the beginning of this method
+        // up to this point, a consuming segment gets converted to a committed segment. In that case status check is
+        // returned as false and in the next round the new consuming segment will be used for fetching offsets.
+        LOGGER.info("Segment {} is already committed. Will check consumption status later", segName);
Review comment:
```suggestion
        LOGGER.info("Segment {} is already committed. Will check consumption status later on the next segment", segName);
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/manager/realtime/LLRealtimeSegmentDataManager.java
##########
@@ -227,7 +228,7 @@ public void deleteSegmentFile() {
private final String _metricKeyName;
private final ServerMetrics _serverMetrics;
private final MutableSegmentImpl _realtimeSegment;
- private StreamPartitionMsgOffset _currentOffset;
+ private volatile StreamPartitionMsgOffset _currentOffset;
Review comment:
Good catch. This was missed before, not sure how. I think the intention
was to do something like
`_currentOffset.setFrom(someOtherOffset)` and have a volatile inside the
offset object, but now we are actually copying the offset. This is a bug, and
perhaps you should check this in independently (or explore other patterns)
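For illustration, a minimal sketch of the copy-and-publish pattern the code now uses, with a `long[]` standing in for the mutable offset object. The `volatile` on the field is what makes each freshly copied offset visible to the status-checker thread; the alternative the reviewer alludes to would keep one offset object and make its internal state volatile via something like `setFrom`.

```java
public class OffsetPublication {
  // Stand-in for StreamPartitionMsgOffset; a one-element array plays the role
  // of a mutable offset object. The field is volatile because the consumer
  // thread replaces the whole object rather than mutating it in place.
  private volatile long[] _currentOffset;

  /** Pattern the PR ends up with: copy the offset and swap the reference. */
  public void updateByCopy(long newOffset) {
    _currentOffset = new long[]{newOffset}; // fresh object, published via volatile write
  }

  /** Returns a consistent snapshot via a single volatile read. */
  public Long read() {
    long[] snapshot = _currentOffset;
    return snapshot == null ? null : snapshot[0];
  }
}
```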
##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+      LLRealtimeSegmentDataManager rtSegmentDataManager = (LLRealtimeSegmentDataManager) segmentDataManager;
+      StreamPartitionMsgOffset latestIngestedOffset = rtSegmentDataManager.getCurrentOffset();
+      StreamPartitionMsgOffset latestStreamOffset = _segmentNameToLatestStreamOffset.containsKey(segName)
+          ? _segmentNameToLatestStreamOffset.get(segName)
+          : rtSegmentDataManager.fetchLatestStreamOffset();
+      tableDataManager.releaseSegment(segmentDataManager);
+      if (latestStreamOffset == null || latestIngestedOffset == null) {
+        LOGGER.info("Null offset found for segment {} - latest stream offset: {}, latest ingested offset: {}. "
+            + "Will check consumption status later", segName, latestStreamOffset, latestIngestedOffset);
+        return false;
+      }
+      if (latestIngestedOffset.compareTo(latestStreamOffset) < 0) {
+        LOGGER.info("Latest ingested offset {} in segment {} is smaller than stream latest available offset {} ",
+            latestIngestedOffset, segName, latestStreamOffset);
+        _segmentNameToLatestStreamOffset.put(segName, latestStreamOffset);
+        allSegsReachedLatest = false;
+        continue;
+      }
+      LOGGER.info("Segment {} with latest ingested offset {} has caught up to the latest stream offset {}", segName,
+          latestIngestedOffset, latestStreamOffset);
+      _alreadyProcessedSegments.add(segName);
+    }
+    return allSegsReachedLatest;
+  }
+
+  private TableDataManager getTableDataManager(String segmentName) {
+    LLCSegmentName llcSegmentName = new LLCSegmentName(segmentName);
+    String tableName = llcSegmentName.getTableName();
+    String tableNameWithType = TableNameBuilder.forType(TableType.REALTIME).tableNameWithType(tableName);
+    return _instanceDataManager.getTableDataManager(tableNameWithType);
+  }
+
+  private static Set<String> findConsumingSegments(HelixAdmin helixAdmin, String helixClusterName, String instanceId) {
Review comment:
Why don't we query the local `RealtimeTableDataManager` instead of
getting all the tables and filtering out only those that have this instance for
some of the partitions?
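The reviewer's alternative could look roughly like this. `SegmentManager` and `TableManager` are hypothetical stand-ins; the real `InstanceDataManager`/`RealtimeTableDataManager` APIs have different names and signatures, so this only sketches the shape of a local lookup that avoids scanning Helix ideal states.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LocalConsumingSegmentFinder {
  // Hypothetical local data-manager interfaces; the real Pinot types differ.
  public interface SegmentManager {
    String getSegmentName();
    boolean isConsuming();
  }
  public interface TableManager {
    List<SegmentManager> getSegments();
  }

  /** Collects CONSUMING segment names from local managers, with no Helix round-trip. */
  public static Set<String> findConsumingSegments(List<TableManager> localTables) {
    Set<String> consuming = new HashSet<>();
    for (TableManager table : localTables) {
      for (SegmentManager seg : table.getSegments()) {
        if (seg.isConsuming()) {
          consuming.add(seg.getSegmentName());
        }
      }
    }
    return consuming;
  }
}
```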
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/manager/realtime/LLRealtimeSegmentDataManager.java
##########
@@ -748,11 +749,22 @@ public long getLastConsumedTimestamp() {
return _lastLogTime;
}
- @VisibleForTesting
- protected StreamPartitionMsgOffset getCurrentOffset() {
+ public StreamPartitionMsgOffset getCurrentOffset() {
return _currentOffset;
}
+ public StreamPartitionMsgOffset fetchLatestStreamOffset() {
+ try (StreamMetadataProvider metadataProvider = _streamConsumerFactory
+ .createPartitionMetadataProvider(_clientId, _partitionGroupId)) {
+ return metadataProvider
+ .fetchStreamPartitionOffset(OffsetCriteria.LARGEST_OFFSET_CRITERIA, /*maxWaitTimeMs=*/5000);
Review comment:
Another way to do this is to make it a private method that gets the offset when the consumer starts (or in another thread that retries a few times until we get a value). That way, the waiting period is not very important. This public method can then return whatever we have obtained as the high-water-mark before (or null if we don't have anything). I prefer this, so that no external caller expects any waiting time here.
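The pattern the reviewer prefers, background retries that cache a high-water-mark so the public getter never blocks, can be sketched with standard concurrency utilities. All names here are hypothetical; `pollOnce` is factored out so the caching logic can be exercised without the scheduler.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class HighWaterMarkCache {
  private final AtomicReference<Long> _latestOffset = new AtomicReference<>();
  private final ScheduledExecutorService _executor = Executors.newSingleThreadScheduledExecutor();

  /** One fetch attempt; a failed fetch (null) leaves the last good value in place. */
  public void pollOnce(Supplier<Long> fetcher) {
    Long offset = fetcher.get(); // may be null on timeout or transient error
    if (offset != null) {
      _latestOffset.set(offset); // keep the last successfully fetched high-water-mark
    }
  }

  /** Retries in the background until a value is obtained, then keeps refreshing it. */
  public void start(Supplier<Long> fetcher, long periodMs) {
    _executor.scheduleWithFixedDelay(() -> pollOnce(fetcher), 0, periodMs, TimeUnit.MILLISECONDS);
  }

  /** Non-blocking: the last high-water-mark obtained so far, or null if none yet. */
  public Long fetchLatestStreamOffset() {
    return _latestOffset.get();
  }

  public void stop() {
    _executor.shutdownNow();
  }
}
```

With this shape, callers of `fetchLatestStreamOffset()` never wait on the stream; they only see the cached value or null.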
##########
File path: pinot-common/src/main/java/org/apache/pinot/common/utils/ServiceStatus.java
##########
@@ -232,13 +236,21 @@ public synchronized Status getServiceStatus() {
return _serviceStatus;
}
long now = System.currentTimeMillis();
-    if (now < _endWaitTime) {
-      _statusDescription =
-          String.format("Waiting for consuming segments to catchup, timeRemaining=%dms", _endWaitTime - now);
-      return Status.STARTING;
+    if (now >= _endWaitTime) {
+      _statusDescription = String.format("Consuming segments status GOOD since %dms", _endWaitTime);
+      return Status.GOOD;
     }
-    _statusDescription = String.format("Consuming segments status GOOD since %dms", _endWaitTime);
-    return Status.GOOD;
+    if (_allConsumingSegmentsHaveReachedLatestOffset.get()) {
+      // TODO: Once the performance of offset based consumption checker is validated:
+      //  - remove the log line
+      //  - uncomment the status & statusDescription lines
+      LOGGER.info("All consuming segments have reached their latest offsets!");
Review comment:
```suggestion
      LOGGER.info("All consuming segments have reached their latest offsets. End time is {} in the {}",
          Math.abs(now - _endWaitTime), now > _endWaitTime ? "past" : "future");
```
This way, we can easily look for how far off we are with the new algorithm.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]