FrankChen021 commented on code in PR #18687:
URL: https://github.com/apache/druid/pull/18687#discussion_r2454589989


##########
server/src/main/java/org/apache/druid/server/metrics/SegmentStatsMonitor.java:
##########
@@ -63,7 +64,7 @@ public SegmentStatsMonitor(
       SegmentLoaderConfig segmentLoaderConfig
   )
   {
-    if (segmentLoaderConfig.isLazyLoadOnStart()) {
+    if (segmentLoaderConfig.getStartupCacheLoadStrategy().equals(LoadAllLazilyStrategy.STRATEGY_NAME)) {

Review Comment:
   flip the `equals` so the constant comes first; this avoids an NPE if the configured strategy is null
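A minimal stand-alone sketch of the suggestion (the class and constant value here are illustrative stand-ins, not Druid's actual API): invoking `equals` on the constant is null-safe, while invoking it on the configured value would throw if that value were null.

```java
// Illustrative sketch of the "flip the equals" suggestion.
public class EqualsOrderSketch {
    // Assumed value; the real constant lives in LoadAllLazilyStrategy.
    static final String STRATEGY_NAME = "loadAllLazily";

    static boolean isLazyStrategy(String configured) {
        // Constant-first equals: returns false instead of throwing a
        // NullPointerException when `configured` is null.
        return STRATEGY_NAME.equals(configured);
    }

    public static void main(String[] args) {
        System.out.println(isLazyStrategy("loadAllLazily")); // true
        System.out.println(isLazyStrategy(null));            // false, no NPE
    }
}
```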



##########
server/src/main/java/org/apache/druid/segment/loading/SegmentLocalCacheManager.java:
##########
@@ -853,7 +858,8 @@ public void mount(StorageLocation mountLocation) throws SegmentLoadingException
           final SegmentizerFactory factory = getSegmentFactory(storageDir);
 
           @SuppressWarnings("ObjectEquality")
-          final boolean lazy = config.isLazyLoadOnStart() && lazyLoadCallback != SegmentLazyLoadFailCallback.NOOP;
+          final boolean lazy = loadStrategy.shouldLoadLazily(dataSegment)
+                               && lazyLoadCallback != SegmentLazyLoadFailCallback.NOOP;

Review Comment:
   put the `shouldLoadLazily` condition last, so the cheap callback check runs first
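A stand-alone sketch of why operand order matters here (method and field names below are illustrative stand-ins for the PR's code): Java's `&&` short-circuits, so putting the cheap reference comparison first means the potentially costlier strategy call is skipped entirely when the callback is the NOOP.

```java
// Sketch of && short-circuit evaluation: a cheap first operand can
// skip the costlier second operand altogether.
public class ShortCircuitSketch {
    static int strategyCalls = 0;

    // Stand-in for loadStrategy.shouldLoadLazily(dataSegment).
    static boolean shouldLoadLazily() {
        strategyCalls++;
        return true;
    }

    public static void main(String[] args) {
        boolean callbackIsNoop = true; // stand-in for lazyLoadCallback == NOOP

        // Cheap check first: when the callback is NOOP, the strategy
        // method is never invoked.
        boolean lazy = !callbackIsNoop && shouldLoadLazily();

        System.out.println(lazy);          // false
        System.out.println(strategyCalls); // 0
    }
}
```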



##########
docs/configuration/index.md:
##########
@@ -1585,7 +1585,9 @@ These Historical configurations can be defined in the `historical/runtime.proper
 |`druid.segmentCache.announceIntervalMillis`|How frequently to announce segments while segments are loading from cache. Set this value to zero to wait for all segments to be loaded before announcing.|5000 (5 seconds)|
 |`druid.segmentCache.numLoadingThreads`|How many segments to drop or load concurrently from deep storage. Note that the work of loading segments involves downloading segments from deep storage, decompressing them and loading them to a memory mapped location. So the work is not all I/O Bound. Depending on CPU and network load, one could possibly increase this config to a higher value.|max(1,Number of cores / 6)|
 |`druid.segmentCache.numBootstrapThreads`|How many segments to load concurrently during historical startup.|`druid.segmentCache.numLoadingThreads`|
-|`druid.segmentCache.lazyLoadOnStart`|Whether or not to load segment columns metadata lazily during historical startup. When set to true, Historical startup time will be dramatically improved by deferring segment loading until the first time that segment takes part in a query, which will incur this cost instead.|false|
+|`druid.segmentCache.lazyLoadOnStart`|_DEPRECATED_ Use `druid.segmentCache.startupLoadStrategy` instead. Whether or not to load segment columns metadata lazily during historical startup. When set to true, Historical startup time will be dramatically improved by deferring segment loading until the first time that segment takes part in a query, which will incur this cost instead.|false|
+|`druid.segmentCache.startupLoadStrategy`|Selects the segment column metadata loading strategy during historical startup. Possible values are `loadAllEagerly`, `loadAllLazily`, and `loadEagerlyBeforePeriod`. More details on each strategy below.|`loadAllEagerly`|
+|`druid.segmentCache.startupLoadPeriod`| Used only when startup load strategy `loadEagerlyBeforePeriod` is configured. Suppose timestamp `t` is when the Historical started up. Any segment metadata with interval that does not overlap with the interval of `[t - startupLoadPeriod, t]` will be lazily loaded.|`P7D`|

Review Comment:
   ```suggestion
   |`druid.segmentCache.startupLoadPeriod`| Used only when startup load strategy `loadEagerlyBeforePeriod` is configured. Suppose timestamp `t` is when the Historical started up. Any segment metadata with interval that falls in the interval of `[t - startupLoadPeriod, t]` will be loaded.|`P7D`|
   ```
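For reference, a hypothetical `historical/runtime.properties` fragment using the new settings from the docs table above (property names come from the diff; the chosen values are illustrative):

```properties
# Hypothetical Historical runtime.properties fragment (values illustrative).

# Load segment column metadata eagerly only for recent segments.
druid.segmentCache.startupLoadStrategy=loadEagerlyBeforePeriod

# ISO-8601 period: segments outside the recent window are loaded lazily.
druid.segmentCache.startupLoadPeriod=P7D
```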



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

