cryptoe commented on code in PR #16676:
URL: https://github.com/apache/druid/pull/16676#discussion_r1666227570


##########
server/src/main/java/org/apache/druid/segment/metadata/AbstractSegmentMetadataCache.java:
##########
@@ -199,8 +199,9 @@ public abstract class AbstractSegmentMetadataCache<T extends DataSourceInformati
   /**
    * Map of datasource and generic object extending DataSourceInformation.
    * This structure can be accessed by {@link #cacheExec} and {@link #callbackExec} threads.
+   * It contains the schema for datasources with at least one available segment.
    */
-  protected final ConcurrentMap<String, T> tables = new ConcurrentHashMap<>();
+  protected final ConcurrentHashMap<String, T> tables = new ConcurrentHashMap<>();

Review Comment:
   Nit: Just wondering which specific ConcurrentHashMap methods you are using that required this change.
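
   For illustration, one plausible reason (an assumption, not confirmed by the PR) to narrow the declared type from `ConcurrentMap` to `ConcurrentHashMap` is to use the bulk operations that exist only on the concrete class, such as `searchValues` and `reduceValues`. A minimal sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcreteTypeDemo {
    // Declared as the concrete class rather than the ConcurrentMap interface,
    // so ConcurrentHashMap-only bulk methods (searchValues, reduceValues,
    // forEachValue, ...) are available at the call site.
    static final ConcurrentHashMap<String, Integer> tables = new ConcurrentHashMap<>();

    // reduceValues exists on ConcurrentHashMap but not on the ConcurrentMap interface.
    static int totalSegments(ConcurrentHashMap<String, Integer> map) {
        Integer total = map.reduceValues(1, Integer::sum);
        return total == null ? 0 : total;
    }

    public static void main(String[] args) {
        tables.put("wiki", 3);
        tables.put("logs", 7);
        // searchValues is likewise specific to the concrete class; it returns
        // the first non-null mapping result (here the single value > 5).
        Integer big = tables.searchValues(1, v -> v > 5 ? v : null);
        System.out.println(big);                  // prints 7
        System.out.println(totalSegments(tables)); // prints 10
    }
}
```

   If none of these methods are actually used, the interface type would arguably be the better declaration.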



##########
server/src/main/java/org/apache/druid/client/coordinator/CoordinatorClient.java:
##########
@@ -69,4 +69,9 @@ public interface CoordinatorClient
   * Returns a new instance backed by a ServiceClient which follows the provided retryPolicy
    */
   CoordinatorClient withRetryPolicy(ServiceRetryPolicy retryPolicy);
+
+  /**
+   * Retrieves list of used datasources.
+   */
+  ListenableFuture<Set<String>> fetchUsedDataSources();

Review Comment:
   Please add the definition of used data sources here. 
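
   For illustration, a hypothetical documented variant of the method (the Javadoc wording is an assumption based on Druid's usual metadata-store meaning of "used"; `CompletableFuture` stands in for Guava's `ListenableFuture` to keep the sketch dependency-free):

```java
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.CompletableFuture;

public class UsedDataSourcesSketch {
    interface Client {
        /**
         * Retrieves the names of all datasources that have at least one used
         * segment, i.e. a segment whose used flag is true in the metadata
         * store (not marked unused by drop/kill).
         */
        CompletableFuture<Set<String>> fetchUsedDataSources();
    }

    public static void main(String[] args) {
        // A stub client returning a fixed answer, just to exercise the shape.
        Client client = () -> CompletableFuture.completedFuture(Set.of("logs", "wiki"));
        System.out.println(new TreeSet<>(client.fetchUsedDataSources().join())); // prints [logs, wiki]
    }
}
```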



##########
server/src/main/java/org/apache/druid/segment/metadata/CoordinatorSegmentMetadataCache.java:
##########
@@ -181,6 +220,12 @@ public void onLeaderStart()
     try {
       segmentSchemaBackfillQueue.onLeaderStart();
       cacheExecFuture = cacheExec.submit(this::cacheExecLoop);
+      coldSchemaExecFuture = coldScehmaExec.schedule(
+          this::coldDatasourceSchemaExec,
+          coldSchemaExecPeriodMillis,

Review Comment:
   Is there a specific reason these properties are undocumented?
   Do we have any metrics which tell us the performance of this executor service, in terms of the number of cold segments backfilled?
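
   For illustration, a minimal sketch of what such a metric could look like (names hypothetical, not from the PR): a counter bumped per cold segment processed, with the task run periodically. Note also that a bare `schedule(...)` fires only once, so a periodic refresh would either use `scheduleWithFixedDelay` or reschedule itself from within the task:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ColdSchemaMetricsSketch {
    // Hypothetical counter one could emit (e.g. via a ServiceEmitter) to
    // track cold-schema backfill throughput.
    static final AtomicLong coldSegmentsProcessed = new AtomicLong();

    static void coldDatasourceSchemaExec() {
        // ... scan cold segments; bump the counter per segment handled.
        coldSegmentsProcessed.addAndGet(5);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        // scheduleWithFixedDelay re-runs the task after each completion;
        // a one-shot schedule(...) would run it exactly once.
        exec.scheduleWithFixedDelay(
            ColdSchemaMetricsSketch::coldDatasourceSchemaExec, 0, 50, TimeUnit.MILLISECONDS);
        Thread.sleep(120);
        exec.shutdownNow();
        System.out.println(coldSegmentsProcessed.get() >= 5);
    }
}
```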



##########
server/src/main/java/org/apache/druid/segment/metadata/CoordinatorSegmentMetadataCache.java:
##########
@@ -419,6 +502,98 @@ private Set<SegmentId> filterSegmentWithCachedSchema(Set<SegmentId> segmentIds)
     return cachedSegments;
   }
 
+  @VisibleForTesting
+  protected void coldDatasourceSchemaExec()
+  {
+    Collection<ImmutableDruidDataSource> immutableDataSources =
+        sqlSegmentsMetadataManager.getImmutableDataSourcesWithAllUsedSegments();
+
+    final Map<String, ColumnType> columnTypes = new LinkedHashMap<>();
+
+    Set<String> dataSources = new HashSet<>();
+
+    for (ImmutableDruidDataSource dataSource : immutableDataSources) {
+      String dataSourceName = dataSource.getName();
+      dataSources.add(dataSourceName);
+      Collection<DataSegment> dataSegments = dataSource.getSegments();
+
+      for (DataSegment segment : dataSegments) {
+        Integer replicationFactor = segmentReplicationStatusManager.getReplicationFactor(segment.getId());
+        if (replicationFactor != null && replicationFactor != 0) {
+          continue;
+        }
+        Optional<SchemaPayloadPlus> optionalSchema = segmentSchemaCache.getSchemaForSegment(segment.getId());
+        if (optionalSchema.isPresent()) {
+          RowSignature rowSignature = optionalSchema.get().getSchemaPayload().getRowSignature();
+          for (String column : rowSignature.getColumnNames()) {
+            final ColumnType columnType =
+                rowSignature.getColumnType(column)
+                            .orElseThrow(() -> new ISE("Encountered null type for column [%s]", column));
+
+            columnTypes.compute(column, (c, existingType) -> columnTypeMergePolicy.merge(existingType, columnType));
+          }
+        }
+      }
+
+      final RowSignature.Builder builder = RowSignature.builder();
+      columnTypes.forEach(builder::add);
+
+      RowSignature coldSignature = builder.build();
+
+      log.debug("[%s] signature from cold segments is [%s]", dataSourceName, coldSignature);
+
+      coldSchemaTable.put(dataSourceName, new DataSourceInformation(dataSourceName, coldSignature));
+
+      // update tables map with merged schema, if signature doesn't exist we do not add entry in this table
+      // schema for entirely cold datasource is maintained separately
+      tables.computeIfPresent(
+          dataSourceName,
+          (ds, info) -> {
+            RowSignature mergedSignature = mergeHotAndColdSchema(info.getRowSignature(), coldSignature);
+
+            if (!info.getRowSignature().equals(mergedSignature)) {
+              log.info(
+                  "[%s] has new merged signature: %s. hot signature [%s], cold signature [%s].",
+                  ds, mergedSignature, info.getRowSignature(), coldSignature
+              );
+            } else {
+              log.debug("[%s] merged signature is unchanged.", ds);
+            }
+
+            return new DataSourceInformation(ds, mergedSignature);
+          }
+      );
+    }
+
+    // remove any stale datasource from the map
+    coldSchemaTable.keySet().retainAll(dataSources);
+  }
+
+  private RowSignature mergeHotAndColdSchema(RowSignature hot, RowSignature cold)

Review Comment:
   I am very surprised you need a new method here. There should be existing logic which does this, no?
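
   For illustration, the merge itself is essentially "union the columns, resolve type conflicts via the merge policy". A simplified sketch using plain maps in place of `RowSignature` (all names here are hypothetical stand-ins, not Druid APIs):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

public class SchemaMergeSketch {
    // Stand-in for merging a hot and a cold signature: start from the hot
    // columns, then fold in cold columns, letting the policy decide when a
    // column appears on both sides.
    static Map<String, String> merge(Map<String, String> hot, Map<String, String> cold,
                                     BinaryOperator<String> policy) {
        Map<String, String> merged = new LinkedHashMap<>(hot);
        cold.forEach((col, type) -> merged.merge(col, type, policy));
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> hot = new LinkedHashMap<>();
        hot.put("__time", "LONG");
        hot.put("page", "STRING");
        Map<String, String> cold = new LinkedHashMap<>();
        cold.put("page", "STRING");
        cold.put("city", "STRING");

        // Policy here: prefer the existing (hot) type on conflict.
        Map<String, String> merged = merge(hot, cold, (existing, incoming) -> existing);
        System.out.println(merged); // prints {__time=LONG, page=STRING, city=STRING}
    }
}
```

   If `ColumnTypeMergePolicy` already covers the per-column part, the only genuinely new logic is the column-set union, which may indeed exist elsewhere.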



##########
server/src/main/java/org/apache/druid/segment/metadata/CoordinatorSegmentMetadataCache.java:
##########
@@ -92,14 +114,27 @@ public CoordinatorSegmentMetadataCache(
       InternalQueryConfig internalQueryConfig,
       ServiceEmitter emitter,
       SegmentSchemaCache segmentSchemaCache,
-      SegmentSchemaBackFillQueue segmentSchemaBackfillQueue
+      SegmentSchemaBackFillQueue segmentSchemaBackfillQueue,
+      SqlSegmentsMetadataManager sqlSegmentsMetadataManager,
+      SegmentReplicationStatusManager segmentReplicationStatusManager,
+      Supplier<SegmentsMetadataManagerConfig> segmentsMetadataManagerConfigSupplier
   )
   {
     super(queryLifecycleFactory, config, escalator, internalQueryConfig, emitter);
     this.config = config;
     this.columnTypeMergePolicy = config.getMetadataColumnTypeMergePolicy();
     this.segmentSchemaCache = segmentSchemaCache;
     this.segmentSchemaBackfillQueue = segmentSchemaBackfillQueue;
+    this.sqlSegmentsMetadataManager = sqlSegmentsMetadataManager;
+    this.segmentReplicationStatusManager = segmentReplicationStatusManager;
+    this.coldSchemaExecPeriodMillis =
+        segmentsMetadataManagerConfigSupplier.get().getPollDuration().getMillis();
+    coldScehmaExec = Executors.newSingleThreadScheduledExecutor(
+        new ThreadFactoryBuilder()
+            .setNameFormat("DruidColdSchema-ScheduledExecutor-%d")
+            .setDaemon(true)

Review Comment:
   Why is this a daemon thread?
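
   For context, the practical effect of `setDaemon(true)` is that the executor's worker thread will not keep the JVM alive on its own; a non-daemon scheduled executor keeps the process running until `shutdown()` is called explicitly. A minimal sketch (thread names hypothetical):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

public class DaemonDemo {
    // Factory mirroring the PR's setDaemon(true): threads it creates will
    // not block JVM shutdown.
    static final ThreadFactory DAEMON_FACTORY = r -> {
        Thread t = new Thread(r, "cold-schema-demo");
        t.setDaemon(true);
        return t;
    };

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor(DAEMON_FACTORY);
        // With a daemon worker, the JVM could exit even if shutdown() were
        // never called; here we shut down cleanly so the task still runs.
        exec.schedule(() -> System.out.println("tick"), 1, TimeUnit.MILLISECONDS);
        exec.shutdown();
        exec.awaitTermination(1, TimeUnit.SECONDS); // prints "tick" before exiting
    }
}
```

   The trade-off is that an in-flight cold-schema refresh can be killed mid-run at shutdown, which is usually acceptable for a periodic cache rebuild.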



##########
server/src/main/java/org/apache/druid/segment/metadata/CoordinatorSegmentMetadataCache.java:
##########
@@ -419,6 +502,98 @@ private Set<SegmentId> filterSegmentWithCachedSchema(Set<SegmentId> segmentIds)
     return cachedSegments;
   }
 
+  @VisibleForTesting
+  protected void coldDatasourceSchemaExec()
+  {
+    Collection<ImmutableDruidDataSource> immutableDataSources =
+        sqlSegmentsMetadataManager.getImmutableDataSourcesWithAllUsedSegments();
+
+    final Map<String, ColumnType> columnTypes = new LinkedHashMap<>();
+
+    Set<String> dataSources = new HashSet<>();
+
+    for (ImmutableDruidDataSource dataSource : immutableDataSources) {
+      String dataSourceName = dataSource.getName();
+      dataSources.add(dataSourceName);
+      Collection<DataSegment> dataSegments = dataSource.getSegments();
+
+      for (DataSegment segment : dataSegments) {
+        Integer replicationFactor = segmentReplicationStatusManager.getReplicationFactor(segment.getId());
+        if (replicationFactor != null && replicationFactor != 0) {
+          continue;
+        }
+        Optional<SchemaPayloadPlus> optionalSchema = segmentSchemaCache.getSchemaForSegment(segment.getId());
+        if (optionalSchema.isPresent()) {
+          RowSignature rowSignature = optionalSchema.get().getSchemaPayload().getRowSignature();
+          for (String column : rowSignature.getColumnNames()) {
+            final ColumnType columnType =
+                rowSignature.getColumnType(column)
+                            .orElseThrow(() -> new ISE("Encountered null type for column [%s]", column));
+
+            columnTypes.compute(column, (c, existingType) -> columnTypeMergePolicy.merge(existingType, columnType));
+          }
+        }
+      }
+
+      final RowSignature.Builder builder = RowSignature.builder();
+      columnTypes.forEach(builder::add);
+
+      RowSignature coldSignature = builder.build();
+
+      log.debug("[%s] signature from cold segments is [%s]", dataSourceName, coldSignature);
+
+      coldSchemaTable.put(dataSourceName, new DataSourceInformation(dataSourceName, coldSignature));
+
+      // update tables map with merged schema, if signature doesn't exist we do not add entry in this table
+      // schema for entirely cold datasource is maintained separately
+      tables.computeIfPresent(
+          dataSourceName,
+          (ds, info) -> {
+            RowSignature mergedSignature = mergeHotAndColdSchema(info.getRowSignature(), coldSignature);
+
+            if (!info.getRowSignature().equals(mergedSignature)) {
+              log.info(
+                  "[%s] has new merged signature: %s. hot signature [%s], cold signature [%s].",
+                  ds, mergedSignature, info.getRowSignature(), coldSignature
+              );
+            } else {
+              log.debug("[%s] merged signature is unchanged.", ds);
+            }
+
+            return new DataSourceInformation(ds, mergedSignature);
+          }
+      );
+    }
+
+    // remove any stale datasource from the map
+    coldSchemaTable.keySet().retainAll(dataSources);

Review Comment:
   Do we have a test case for this?
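
   For illustration, the behavior under question is that `keySet()` on a `ConcurrentHashMap` is a live view, so `retainAll` mutates the backing map and drops stale datasources. A minimal sketch of what such a test could assert (names hypothetical):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class StaleCleanupDemo {
    // Drops entries whose datasource was not seen in the latest poll.
    // keySet() is a live view of the map, so retainAll mutates it in place.
    static void removeStale(ConcurrentHashMap<String, String> coldSchemaTable, Set<String> seen) {
        coldSchemaTable.keySet().retainAll(seen);
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, String> coldSchemaTable = new ConcurrentHashMap<>();
        coldSchemaTable.put("wiki", "schema-a");
        coldSchemaTable.put("dropped_ds", "schema-b");

        // "wiki" was seen in the current poll; "dropped_ds" is stale.
        removeStale(coldSchemaTable, Set.of("wiki"));
        System.out.println(coldSchemaTable.keySet()); // prints [wiki]
    }
}
```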



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
