kfaraz commented on code in PR #15817:
URL: https://github.com/apache/druid/pull/15817#discussion_r1568214127


##########
server/src/main/java/org/apache/druid/segment/metadata/KillUnreferencedSegmentSchemas.java:
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.metadata;
+
+import com.google.inject.Inject;
+import org.apache.druid.guice.LazySingleton;
+import org.apache.druid.java.util.emitter.EmittingLogger;
+import org.apache.druid.metadata.SegmentsMetadataManager;
+
+import java.util.List;
+
+/**
+ * This class deals with cleaning schema which is not referenced by any used segment.
+ * <p>
+ * <ol>
+ * <li>If a schema is not referenced, UPDATE schemas SET used = false, used_status_last_updated = now</li>
+ * <li>DELETE FROM schemas WHERE used = false AND used_status_last_updated < 6 hours ago</li>
+ * <li>When creating a new segment, try to find schema for the fingerprint of the segment.</li>
+ *    <ol type="a">
+ *    <li> If no record found, create a new one.</li>
+ *    <li> If record found which has used = true, reuse this schema_id.</li>
+ *    <li> If record found which has used = false, UPDATE SET used = true, used_status_last_updated = now</li>
+ *    </ol>
+ * </ol>
+ * </p>
+ * <p>
+ * Possible race conditions:
+ *    <ol type="a">
+ *    <li> Between ops 1 and 3b: In other words, we might end up with a segment that points to a schema that has just been marked as unused. This can be repaired by the coordinator duty. </li>
+ *    <li> Between 2 and 3c: This can be handled. Either 2 will fail to update any rows (good case) or 3c will fail to update any rows and thus return 0 (bad case). In the bad case, we need to recreate the schema, same as step 3a. </li>
+ *    </ol>
+ * </p>
+ */
+@LazySingleton
+public class KillUnreferencedSegmentSchemas

Review Comment:
   Why can't this be merged into `KillUnreferencedSegmentSchemasDuty` itself? 
Why do we need two classes?
   
   `SegmentSchemaManager` can be passed into the constructor of the duty by the 
`DruidCoordinator`.
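   For reference, the step-3 find-or-create flow described in the class javadoc could be sketched roughly as below. This is a simplified, self-contained model using an in-memory map; the real logic would run against the metadata store via `SegmentSchemaManager`, and all names here (`SchemaReuseDemo`, `SchemaRecord`, `findOrCreate`) are hypothetical, not the actual Druid classes.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Hypothetical, simplified model of the javadoc's find-or-create flow.
   public class SchemaReuseDemo
   {
     static class SchemaRecord
     {
       boolean used;

       SchemaRecord(boolean used)
       {
         this.used = used;
       }
     }

     private final Map<String, SchemaRecord> schemas = new HashMap<>();

     /** Returns which branch was taken: "created", "reused" or "revived". */
     public String findOrCreate(String fingerprint)
     {
       SchemaRecord record = schemas.get(fingerprint);
       if (record == null) {
         // 3a: no record found, create a new one (also the fallback when a
         // concurrent delete in step 2 causes the 3c UPDATE to affect 0 rows)
         schemas.put(fingerprint, new SchemaRecord(true));
         return "created";
       } else if (record.used) {
         // 3b: record is used, reuse this schema_id
         return "reused";
       } else {
         // 3c: mark the unused record as used again
         record.used = true;
         return "revived";
       }
     }

     /** Step 1: mark an unreferenced schema as unused. */
     public void markUnused(String fingerprint)
     {
       SchemaRecord record = schemas.get(fingerprint);
       if (record != null) {
         record.used = false;
       }
     }
   }
   ```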



##########
server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java:
##########
@@ -185,7 +193,10 @@ public DruidCoordinator(
       BalancerStrategyFactory balancerStrategyFactory,
       LookupCoordinatorManager lookupCoordinatorManager,
       @Coordinator DruidLeaderSelector coordLeaderSelector,
-      CompactionSegmentSearchPolicy compactionSegmentSearchPolicy
+      CompactionSegmentSearchPolicy compactionSegmentSearchPolicy,
+      KillUnreferencedSegmentSchemas killUnreferencedSegmentSchemas,

Review Comment:
   We shouldn't need to pass this here.



##########
server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java:
##########
@@ -167,4 +167,15 @@ public int getHttpLoadQueuePeonBatchSize()
     return 1;
   }
 
+  @Config("druid.coordinator.kill.segmentSchema.on")
+  @Default("true")
+  public abstract boolean isSegmentSchemaKillEnabled();
+
+  @Config("druid.coordinator.kill.segmentSchema.period")
+  @Default("PT1H")
+  public abstract Duration getCoordinatorSegmentSchemaKillPeriod();
+
+  @Config("druid.coordinator.kill.segmentSchema.durationToRetain")
+  @Default("PT6H")
+  public abstract Duration getCoordinatorSegmentSchemaKillDurationToRetain();

Review Comment:
   The `coordinator` substring is redundant.
   
   ```suggestion
     public abstract Duration getSegmentSchemaKillDurationToRetain();
   ```



##########
server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java:
##########
@@ -167,4 +167,15 @@ public int getHttpLoadQueuePeonBatchSize()
     return 1;
   }
 
+  @Config("druid.coordinator.kill.segmentSchema.on")
+  @Default("true")
+  public abstract boolean isSegmentSchemaKillEnabled();
+
+  @Config("druid.coordinator.kill.segmentSchema.period")
+  @Default("PT1H")
+  public abstract Duration getCoordinatorSegmentSchemaKillPeriod();

Review Comment:
   ```suggestion
     public abstract Duration getSegmentSchemaKillPeriod();
   ```



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()
+  {
+    return segmentIdToMetadataMap;
+  }
+
+  @JsonProperty
+  public Map<String, SchemaPayload> getSchemaFingerprintToPayloadMap()
+  {
+    return schemaFingerprintToPayloadMap;
+  }
+
+  @JsonProperty
+  public String getSchemaVersion()
+  {
+    return schemaVersion;
+  }
+
+  public boolean isNonEmpty()
+  {
+    return segmentIdToMetadataMap.size() > 0;
+  }
+
+  /**
+   * Add schema information for the segment.
+   */
+  public void addSchema(
+      String segmentId,
+      String fingerprint,
+      long numRows,
+      SchemaPayload schemaPayload

Review Comment:
   How about accepting a `SchemaPayloadPlus` instead? All call sites seem to be 
using it already.



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.

Review Comment:
   Redundant comments.



##########
services/src/main/java/org/apache/druid/cli/CliHistorical.java:
##########
@@ -101,6 +101,8 @@ protected List<? extends Module> getModules()
         new JoinableFactoryModule(),
         new HistoricalServiceModule(),
         binder -> {
+          CliCoordinator.validateCentralizedDatasourceSchemaConfig(getProperties());

Review Comment:
   Is this validation needed here?



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()
+  {
+    return segmentIdToMetadataMap;
+  }
+
+  @JsonProperty
+  public Map<String, SchemaPayload> getSchemaFingerprintToPayloadMap()
+  {
+    return schemaFingerprintToPayloadMap;
+  }
+
+  @JsonProperty
+  public String getSchemaVersion()
+  {
+    return schemaVersion;
+  }
+
+  public boolean isNonEmpty()
+  {
+    return segmentIdToMetadataMap.size() > 0;
+  }
+
+  /**
+   * Add schema information for the segment.
+   */
+  public void addSchema(
+      String segmentId,
+      String fingerprint,
+      long numRows,
+      SchemaPayload schemaPayload
+  )

Review Comment:
   So overall:
   
   ```suggestion
     public void addMetadata(
         SegmentId segmentId,
         SchemaPayloadPlus schemaPayloadPlus,
         FingerprintGenerator fingerprintGenerator
     )
   ```



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()
+  {
+    return segmentIdToMetadataMap;
+  }
+
+  @JsonProperty
+  public Map<String, SchemaPayload> getSchemaFingerprintToPayloadMap()
+  {
+    return schemaFingerprintToPayloadMap;
+  }
+
+  @JsonProperty
+  public String getSchemaVersion()
+  {
+    return schemaVersion;
+  }
+
+  public boolean isNonEmpty()
+  {
+    return segmentIdToMetadataMap.size() > 0;
+  }
+
+  /**
+   * Add schema information for the segment.
+   */
+  public void addSchema(
+      String segmentId,
+      String fingerprint,
+      long numRows,
+      SchemaPayload schemaPayload
+  )
+  {
+    segmentIdToMetadataMap.put(segmentId, new SegmentStats(numRows, fingerprint));
+    schemaFingerprintToPayloadMap.put(fingerprint, schemaPayload);
+  }
+
+  /**
+   * Merge with another instance.
+   */
+  public void merge(MinimalSegmentSchemas other)
+  {
+    this.segmentIdToMetadataMap.putAll(other.getSegmentIdToMetadataMap());
+    this.schemaFingerprintToPayloadMap.putAll(other.getSchemaFingerprintToPayloadMap());
+  }
+
+  public int size()
+  {
+    return schemaFingerprintToPayloadMap.size();
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+    MinimalSegmentSchemas that = (MinimalSegmentSchemas) o;
+    return Objects.equals(segmentIdToMetadataMap, that.segmentIdToMetadataMap)
+           && Objects.equals(schemaFingerprintToPayloadMap, that.schemaFingerprintToPayloadMap);
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return Objects.hash(segmentIdToMetadataMap, schemaFingerprintToPayloadMap);
+  }
+
+  @Override
+  public String toString()
+  {
+    return "MinimalSegmentSchemas{" +
+           "segmentIdToMetadataMap=" + segmentIdToMetadataMap +
+           ", schemaFingerprintToPayloadMap=" + schemaFingerprintToPayloadMap +
+           ", version='" + schemaVersion + '\'' +
+           '}';
+  }
+
+  /**
+   * Encapsulates segment level information like numRows, schema fingerprint.
+   */
+  public static class SegmentStats

Review Comment:
   This should be a top level class. After all, it is the class that represents 
the feature itself. 🙂



##########
server/src/main/java/org/apache/druid/server/http/MetadataResource.java:
##########
@@ -156,37 +158,46 @@ public Response getAllUsedSegments(
      @QueryParam("includeRealtimeSegments") final @Nullable String includeRealtimeSegments
   )
   {
-    // realtime segments can be requested only when {@code includeOverShadowedStatus} is set
-    if (includeOvershadowedStatus == null && includeRealtimeSegments != null) {
-      return Response.status(Response.Status.BAD_REQUEST).build();
-    }
+    try {
+      // realtime segments can be requested only when {@code includeOverShadowedStatus} is set

Review Comment:
   Plain comment is not rendered as a javadoc
   ```suggestion
      // realtime segments can be requested only when includeOverShadowedStatus is set
   ```



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()
+  {
+    return segmentIdToMetadataMap;
+  }
+
+  @JsonProperty
+  public Map<String, SchemaPayload> getSchemaFingerprintToPayloadMap()
+  {
+    return schemaFingerprintToPayloadMap;
+  }
+
+  @JsonProperty
+  public String getSchemaVersion()
+  {
+    return schemaVersion;
+  }
+
+  public boolean isNonEmpty()
+  {
+    return segmentIdToMetadataMap.size() > 0;
+  }
+
+  /**
+   * Add schema information for the segment.
+   */
+  public void addSchema(

Review Comment:
   All usages of this method generate a fingerprint just before calling this 
method.
   It would be better to just pass the fingerprint generator here rather than 
having to pass both the `SchemaPayload` and its fingerprint. It would also have 
the advantage of having just one call site for 
`fingerprintGenerator.generate()`.
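   A rough, self-contained sketch of that refactor is below. It uses a plain `Function<String, String>` as a stand-in for Druid's `FingerprintGenerator`, and all class and method names here are illustrative, not the actual PR code; the point is that the fingerprint is generated in exactly one place, inside the add method.

   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.function.Function;

   // Illustrative sketch: callers pass only the payload; the mapping class
   // computes the fingerprint itself, so there is a single call site for
   // fingerprint generation.
   public class SchemaMappingSketch
   {
     private final Map<String, String> segmentIdToFingerprint = new HashMap<>();
     private final Map<String, String> fingerprintToPayload = new HashMap<>();
     private final Function<String, String> fingerprintGenerator;

     public SchemaMappingSketch(Function<String, String> fingerprintGenerator)
     {
       this.fingerprintGenerator = fingerprintGenerator;
     }

     public void addSchema(String segmentId, String payload)
     {
       // the only place the fingerprint is ever generated
       String fingerprint = fingerprintGenerator.apply(payload);
       segmentIdToFingerprint.put(segmentId, fingerprint);
       fingerprintToPayload.put(fingerprint, payload);
     }

     public int schemaCount()
     {
       return fingerprintToPayload.size();
     }
   }
   ```

   Two segments with identical payloads then collapse to one stored schema without any caller having to compute or thread through a fingerprint.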



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()
+  {
+    return segmentIdToMetadataMap;
+  }
+
+  @JsonProperty
+  public Map<String, SchemaPayload> getSchemaFingerprintToPayloadMap()
+  {
+    return schemaFingerprintToPayloadMap;
+  }
+
+  @JsonProperty
+  public String getSchemaVersion()
+  {
+    return schemaVersion;
+  }
+
+  public boolean isNonEmpty()
+  {
+    return segmentIdToMetadataMap.size() > 0;
+  }
+
+  /**
+   * Add schema information for the segment.
+   */
+  public void addSchema(
+      String segmentId,
+      String fingerprint,
+      long numRows,
+      SchemaPayload schemaPayload
+  )
+  {
+    segmentIdToMetadataMap.put(segmentId, new SegmentStats(numRows, fingerprint));
+    schemaFingerprintToPayloadMap.put(fingerprint, schemaPayload);
+  }
+
+  /**
+   * Merge with another instance.
+   */
+  public void merge(MinimalSegmentSchemas other)
+  {
+    this.segmentIdToMetadataMap.putAll(other.getSegmentIdToMetadataMap());
+    this.schemaFingerprintToPayloadMap.putAll(other.getSchemaFingerprintToPayloadMap());
+  }
+
+  public int size()
+  {
+    return schemaFingerprintToPayloadMap.size();
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+    MinimalSegmentSchemas that = (MinimalSegmentSchemas) o;
+    return Objects.equals(segmentIdToMetadataMap, that.segmentIdToMetadataMap)
+           && Objects.equals(schemaFingerprintToPayloadMap, that.schemaFingerprintToPayloadMap);
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return Objects.hash(segmentIdToMetadataMap, schemaFingerprintToPayloadMap);
+  }
+
+  @Override
+  public String toString()
+  {
+    return "MinimalSegmentSchemas{" +
+           "segmentIdToMetadataMap=" + segmentIdToMetadataMap +
+           ", schemaFingerprintToPayloadMap=" + schemaFingerprintToPayloadMap +
+           ", version='" + schemaVersion + '\'' +
+           '}';
+  }
+
+  /**
+   * Encapsulates segment level information like numRows, schema fingerprint.
+   */
+  public static class SegmentStats

Review Comment:
   Rename to `SegmentMetadata`.



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.

Review Comment:
   Not needed.



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas

Review Comment:
   `MinimalSegmentSchemas` is a weird name and implies that there is another 
version of this datastructure which is not minimal.
   
   Please rename this class and the variable names wherever this is used to 
something more appropriate which represents the contents inside, e.g. 
`SegmentSchemaMapping` or `SegmentMetadataMapping`.



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()
+  {
+    return segmentIdToMetadataMap;
+  }
+
+  @JsonProperty
+  public Map<String, SchemaPayload> getSchemaFingerprintToPayloadMap()
+  {
+    return schemaFingerprintToPayloadMap;
+  }
+
+  @JsonProperty
+  public String getSchemaVersion()
+  {
+    return schemaVersion;
+  }
+
+  public boolean isNonEmpty()
+  {
+    return segmentIdToMetadataMap.size() > 0;
+  }
+
+  /**
+   * Add schema information for the segment.
+   */
+  public void addSchema(
+      String segmentId,
+      String fingerprint,
+      long numRows,
+      SchemaPayload schemaPayload
+  )
+  {
+    segmentIdToMetadataMap.put(segmentId, new SegmentStats(numRows, fingerprint));
+    schemaFingerprintToPayloadMap.put(fingerprint, schemaPayload);
+  }
+
+  /**
+   * Merge with another instance.
+   */
+  public void merge(MinimalSegmentSchemas other)
+  {
+    this.segmentIdToMetadataMap.putAll(other.getSegmentIdToMetadataMap());
+    this.schemaFingerprintToPayloadMap.putAll(other.getSchemaFingerprintToPayloadMap());
+  }
+
+  public int size()
+  {
+    return schemaFingerprintToPayloadMap.size();

Review Comment:
   For non-emptiness, we check one map (`segmentIdToMetadata`), but for `size()` we check the other (`schemaFingerprintToPayload`). We should either rename the methods appropriately or use the same map to determine both size and emptiness.


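   One way to make the two methods consistent (a minimal sketch under the assumption that the segment map is the authoritative measure; class and method names here are illustrative, not the actual Druid API) is to derive both emptiness and size from the same map:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Simplified stand-in for MinimalSegmentSchemas, for illustration only.
   public class SchemaBatch
   {
     private final Map<String, String> segmentIdToFingerprint = new HashMap<>();
     private final Map<String, String> fingerprintToPayload = new HashMap<>();

     public void add(String segmentId, String fingerprint, String payload)
     {
       segmentIdToFingerprint.put(segmentId, fingerprint);
       fingerprintToPayload.put(fingerprint, payload);
     }

     // Both emptiness and size are derived from the segment map, so two
     // segments sharing one fingerprint still report a count of 2.
     public boolean isNonEmpty()
     {
       return !segmentIdToFingerprint.isEmpty();
     }

     public int getSegmentCount()
     {
       return segmentIdToFingerprint.size();
     }
   }
   ```

   Renaming `size()` to something like `getSegmentCount()` also makes the unit of measurement explicit at the call site.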

##########
server/src/test/java/org/apache/druid/segment/metadata/SegmentSchemaTestUtils.java:
##########
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.metadata;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.druid.java.util.common.DateTimes;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.Pair;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.jackson.JacksonUtils;
+import org.apache.druid.metadata.TestDerbyConnector;
+import org.apache.druid.metadata.storage.derby.DerbyConnector;
+import org.apache.druid.segment.SchemaPayload;
+import org.apache.druid.timeline.DataSegment;
+import org.apache.druid.timeline.partition.NoneShardSpec;
+import org.junit.Assert;
+import org.skife.jdbi.v2.PreparedBatch;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
+public class SegmentSchemaTestUtils
+{
+  private final TestDerbyConnector.DerbyConnectorRule derbyConnectorRule;
+  private final DerbyConnector derbyConnector;
+  private final ObjectMapper mapper;
+
+  public SegmentSchemaTestUtils(
+      TestDerbyConnector.DerbyConnectorRule derbyConnectorRule,
+      DerbyConnector derbyConnector,
+      ObjectMapper mapper
+  )
+  {
+    this.derbyConnectorRule = derbyConnectorRule;
+    this.derbyConnector = derbyConnector;
+    this.mapper = mapper;
+  }
+
+  public Boolean insertUsedSegments(Set<DataSegment> dataSegments, Map<String, Pair<Long, Long>> segmentStats)
+  {
+    if (!segmentStats.isEmpty()) {
+      final String table = derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable();
+      return derbyConnector.retryWithHandle(
+          handle -> {
+            PreparedBatch preparedBatch = handle.prepareBatch(
+                StringUtils.format(
+                    "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated, schema_id, num_rows) "
+                    + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated, :schema_id, :num_rows)",
+                    table,
+                    derbyConnector.getQuoteString()
+                )
+            );
+            for (DataSegment segment : dataSegments) {
+              String id = segment.getId().toString();
+              preparedBatch.add()
+                           .bind("id", id)
+                           .bind("dataSource", segment.getDataSource())
+                           .bind("created_date", DateTimes.nowUtc().toString())
+                           .bind("start", segment.getInterval().getStart().toString())
+                           .bind("end", segment.getInterval().getEnd().toString())
+                           .bind("partitioned", !(segment.getShardSpec() instanceof NoneShardSpec))
+                           .bind("version", segment.getVersion())
+                           .bind("used", true)
+                           .bind("payload", mapper.writeValueAsBytes(segment))
+                           .bind("used_status_last_updated", DateTimes.nowUtc().toString())
+                           .bind("schema_id", segmentStats.containsKey(id) ? segmentStats.get(id).lhs : null)
+                           .bind("num_rows", segmentStats.containsKey(id) ? segmentStats.get(id).rhs : null);
+            }
+
+            final int[] affectedRows = preparedBatch.execute();
+            final boolean succeeded = Arrays.stream(affectedRows).allMatch(eachAffectedRows -> eachAffectedRows == 1);
+            if (!succeeded) {
+              throw new ISE("Failed to publish segments to DB");
+            }
+            return true;
+          }
+      );
+    } else {
+      final String table = derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable();
+      return derbyConnector.retryWithHandle(
+          handle -> {
+            PreparedBatch preparedBatch = handle.prepareBatch(
+                StringUtils.format(
+                    "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+                    + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
+                    table,
+                    derbyConnector.getQuoteString()
+                )
+            );
+            for (DataSegment segment : dataSegments) {
+              String id = segment.getId().toString();
+              preparedBatch.add()
+                           .bind("id", id)
+                           .bind("dataSource", segment.getDataSource())
+                           .bind("created_date", DateTimes.nowUtc().toString())
+                           .bind("start", segment.getInterval().getStart().toString())
+                           .bind("end", segment.getInterval().getEnd().toString())
+                           .bind("partitioned", !(segment.getShardSpec() instanceof NoneShardSpec))
+                           .bind("version", segment.getVersion())
+                           .bind("used", true)
+                           .bind("payload", mapper.writeValueAsBytes(segment))
+                           .bind("used_status_last_updated", DateTimes.nowUtc().toString());
+            }
+
+            final int[] affectedRows = preparedBatch.execute();
+            final boolean succeeded = Arrays.stream(affectedRows).allMatch(eachAffectedRows -> eachAffectedRows == 1);
+            if (!succeeded) {
+              throw new ISE("Failed to publish segments to DB");
+            }
+            return true;
+          }
+      );
+    }
+  }
+
+  public Map<String, Long> insertSegmentSchema(
+      String dataSource,
+      Map<String, SchemaPayload> schemaPayloadMap,
+      Set<String> usedFingerprints
+  )
+  {
+    final String table = derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentSchemasTable();
+    derbyConnector.retryWithHandle(
+        handle -> {
+          PreparedBatch preparedBatch = handle.prepareBatch(
+              StringUtils.format(
+                  "INSERT INTO %1$s (created_date, datasource, fingerprint, payload, used, used_status_last_updated, version) "
+                  + "VALUES (:created_date, :datasource, :fingerprint, :payload, :used, :used_status_last_updated, :version)",
+                  table
+              )
+          );
+
+          for (Map.Entry<String, SchemaPayload> entry : schemaPayloadMap.entrySet()) {
+            String fingerprint = entry.getKey();
+            SchemaPayload payload = entry.getValue();
+            String now = DateTimes.nowUtc().toString();
+            preparedBatch.add()
+                         .bind("created_date", now)
+                         .bind("datasource", dataSource)
+                         .bind("fingerprint", fingerprint)
+                         .bind("payload", mapper.writeValueAsBytes(payload))
+                         .bind("used", usedFingerprints.contains(fingerprint))
+                         .bind("used_status_last_updated", now)
+                         .bind("version", CentralizedDatasourceSchemaConfig.SCHEMA_VERSION);
+          }
+
+          final int[] affectedRows = preparedBatch.execute();
+          final boolean succeeded = Arrays.stream(affectedRows).allMatch(eachAffectedRows -> eachAffectedRows == 1);
+          if (!succeeded) {
+            throw new ISE("Failed to publish segments to DB");
+          }
+          return true;
+        }
+    );
+
+    Map<String, Long> fingerprintSchemaIdMap = new HashMap<>();
+    derbyConnector.retryWithHandle(
+        handle ->
+            handle.createQuery("SELECT fingerprint, id FROM " + table)
+                  .map((index, result, context) -> fingerprintSchemaIdMap.put(result.getString(1), result.getLong(2)))
+                  .list()
+    );
+    return fingerprintSchemaIdMap;
+  }
+
+  public void verifySegmentSchema(Map<String, Pair<SchemaPayload, Integer>> segmentIdSchemaMap)
+  {
+    final String segmentsTable = derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable();
+    // segmentId -> schemaId, numRows
+    Map<String, Pair<Long, Long>> segmentStats = new HashMap<>();
+
+    derbyConnector.retryWithHandle(
+        handle -> handle.createQuery("SELECT id, schema_id, num_rows FROM " + segmentsTable + " WHERE used = true ORDER BY id")
+                        .map((index, result, context) -> segmentStats.put(result.getString(1), Pair.of(result.getLong(2), result.getLong(3))))
+                        .list()
+    );
+
+    // schemaId -> schema details
+    Map<Long, SegmentSchemaRepresentation> schemaRepresentationMap = new HashMap<>();
+
+    final String schemaTable = derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentSchemasTable();
+
+    derbyConnector.retryWithHandle(
+        handle -> handle.createQuery("SELECT id, fingerprint, payload, created_date, used, version FROM "
+                                     + schemaTable)
+                        .map(((index, r, ctx) ->
+                            schemaRepresentationMap.put(
+                                r.getLong(1),
+                                new SegmentSchemaRepresentation(
+                                    r.getString(2),
+                                    JacksonUtils.readValue(
+                                        mapper,
+                                        r.getBytes(3),
+                                        SchemaPayload.class
+                                    ),
+                                    r.getString(4),
+                                    r.getBoolean(5),
+                                    r.getString(6)
+                                )
+                            )))
+                        .list());
+
+    for (Map.Entry<String, Pair<SchemaPayload, Integer>> entry : segmentIdSchemaMap.entrySet()) {
+      String id = entry.getKey();
+      SchemaPayload schemaPayload = entry.getValue().lhs;
+      Integer random = entry.getValue().rhs;
+
+      Assert.assertTrue(segmentStats.containsKey(id));
+
+      Assert.assertEquals(random.intValue(), segmentStats.get(id).rhs.intValue());
+      Assert.assertTrue(schemaRepresentationMap.containsKey(segmentStats.get(id).lhs));
+
+      SegmentSchemaRepresentation schemaRepresentation = schemaRepresentationMap.get(segmentStats.get(id).lhs);
+      Assert.assertEquals(schemaPayload, schemaRepresentation.getSchemaPayload());
+      Assert.assertTrue(schemaRepresentation.isUsed());
+      Assert.assertEquals(CentralizedDatasourceSchemaConfig.SCHEMA_VERSION, schemaRepresentation.getVersion());
+    }
+  }
+
+  public static class SegmentSchemaRepresentation

Review Comment:
   ```suggestion
     public static class SegmentSchemaRecord
   ```



##########
services/src/main/java/org/apache/druid/cli/CliCoordinator.java:
##########
@@ -188,14 +189,9 @@ protected Set<NodeRole> getNodeRoles(Properties properties)
            : ImmutableSet.of(NodeRole.COORDINATOR);
   }
 
-  @Override
-  protected List<? extends Module> getModules()
+  protected static void validateCentralizedDatasourceSchemaConfig(Properties properties)

Review Comment:
   Why not have this method in `ServerRunnable` instead?
   If nothing else, other `CliX` classes should not need to refer to `CliCoordinator`.



##########
server/src/main/java/org/apache/druid/segment/metadata/KillUnreferencedSegmentSchemas.java:
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.metadata;
+
+import com.google.inject.Inject;
+import org.apache.druid.guice.LazySingleton;
+import org.apache.druid.java.util.emitter.EmittingLogger;
+import org.apache.druid.metadata.SegmentsMetadataManager;
+
+import java.util.List;
+
+/**
+ * This class deals with cleaning up schemas that are not referenced by any used segment.
+ * <p>
+ * <ol>
+ * <li>If a schema is not referenced, UPDATE schemas SET used = false, used_status_last_updated = now</li>
+ * <li>DELETE FROM schemas WHERE used = false AND used_status_last_updated < 6 hours ago</li>
+ * <li>When creating a new segment, try to find schema for the fingerprint of the segment.</li>
+ *    <ol type="a">
+ *    <li> If no record found, create a new one.</li>
+ *    <li> If record found which has used = true, reuse this schema_id.</li>
+ *    <li> If record found which has used = false, UPDATE SET used = true, used_status_last_updated = now</li>
+ *    </ol>
+ * </ol>
+ * </p>
+ * <p>
+ * Possible race conditions:
+ *    <ol type="a">
+ *    <li> Between ops 1 and 3b: In other words, we might end up with a segment that points to a schema that has just been marked as unused. This can be repaired by the coordinator duty. </li>
+ *    <li> Between 2 and 3c: This can be handled. Either 2 will fail to update any rows (good case) or 3c will fail to update any rows and thus return 0 (bad case). In the bad case, we need to recreate the schema, same as step 3a. </li>
+ *    </ol>
+ * </p>
+ */
+@LazySingleton
+public class KillUnreferencedSegmentSchemas
+{
+  private static final EmittingLogger log = new EmittingLogger(KillUnreferencedSegmentSchemas.class);
+  private final SegmentSchemaManager segmentSchemaManager;
+  private final SegmentsMetadataManager metadataManager;
+
+  @Inject
+  public KillUnreferencedSegmentSchemas(
+      SegmentSchemaManager segmentSchemaManager,
+      SegmentsMetadataManager metadataManager
+  )
+  {
+    this.segmentSchemaManager = segmentSchemaManager;
+    this.metadataManager = metadataManager;
+  }
+
+  public int cleanup(long timestamp)
+  {
+    // 1: Identify unreferenced schema and mark them as unused. These will get deleted after a fixed period.
+    int unused = segmentSchemaManager.identifyAndMarkSchemaUnused();
+    log.info("Identified [%s] unreferenced schema. Marking them as unused.", unused);
+
+    // 2 (repair step): Identify unused schema which are still referenced by segments, make them used.
+    // This case would arise when segment is associated with a schema which turned unused by the previous statement
+    // or the previous run of this duty.
+    List<Long> schemaIdsToUpdate = segmentSchemaManager.identifyReferencedUnusedSchema();
+    if (schemaIdsToUpdate.size() > 0) {
+      segmentSchemaManager.markSchemaUsed(schemaIdsToUpdate);
+      log.info("Identified [%s] unused schemas still referenced by used segments. Marking them as used.", schemaIdsToUpdate.size());
+    }
+
+    // 3: Delete unused schema older than {@code timestamp}.

Review Comment:
   ```suggestion
       // 3: Delete unused schema older than timestamp
   ```



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()
+  {
+    return segmentIdToMetadataMap;
+  }
+
+  @JsonProperty
+  public Map<String, SchemaPayload> getSchemaFingerprintToPayloadMap()
+  {
+    return schemaFingerprintToPayloadMap;
+  }
+
+  @JsonProperty
+  public String getSchemaVersion()
+  {
+    return schemaVersion;
+  }
+
+  public boolean isNonEmpty()
+  {
+    return segmentIdToMetadataMap.size() > 0;
+  }
+
+  /**
+   * Add schema information for the segment.
+   */
+  public void addSchema(
+      String segmentId,

Review Comment:
   Accept a more concrete class like `SegmentId` or `DataSegment` here instead 
of a plain `String`.
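
   A hedged sketch of how such an overload might look (the tiny `SegmentId` value class below is a stand-in for `org.apache.druid.timeline.SegmentId`, and `addSchema` is simplified to return a string so the delegation is visible):

   ```java
   // Minimal stand-in for org.apache.druid.timeline.SegmentId, for illustration only.
   final class SegmentId
   {
     private final String dataSource;
     private final String version;

     SegmentId(String dataSource, String version)
     {
       this.dataSource = dataSource;
       this.version = version;
     }

     @Override
     public String toString()
     {
       return dataSource + "_" + version;
     }
   }

   public class AddSchemaSketch
   {
     // Typed overload: callers pass the concrete id, and only this boundary
     // converts to String, so malformed ids cannot creep into the maps.
     public static String addSchema(SegmentId segmentId, String fingerprint)
     {
       return addSchema(segmentId.toString(), fingerprint);
     }

     private static String addSchema(String segmentId, String fingerprint)
     {
       // ... would store into the segment and fingerprint maps ...
       return segmentId + " -> " + fingerprint;
     }
   }
   ```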



##########
processing/src/main/java/org/apache/druid/segment/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+  private final Map<String, SegmentStats> segmentIdToMetadataMap;
+
+  // Mapping of schema fingerprint to payload.
+  private final Map<String, SchemaPayload> schemaFingerprintToPayloadMap;
+
+  private final String schemaVersion;
+
+  @JsonCreator
+  public MinimalSegmentSchemas(
+      @JsonProperty("segmentIdToMetadataMap") Map<String, SegmentStats> segmentIdToMetadataMap,
+      @JsonProperty("schemaFingerprintToPayloadMap") Map<String, SchemaPayload> schemaFingerprintToPayloadMap,
+      @JsonProperty("schemaVersion") String schemaVersion
+  )
+  {
+    this.segmentIdToMetadataMap = segmentIdToMetadataMap;
+    this.schemaFingerprintToPayloadMap = schemaFingerprintToPayloadMap;
+    this.schemaVersion = schemaVersion;
+  }
+
+  public MinimalSegmentSchemas(String schemaVersion)
+  {
+    this.segmentIdToMetadataMap = new HashMap<>();
+    this.schemaFingerprintToPayloadMap = new HashMap<>();
+    this.schemaVersion = schemaVersion;
+  }
+
+  @JsonProperty
+  public Map<String, SegmentStats> getSegmentIdToMetadataMap()

Review Comment:
   Along with this, there should be utility (non-serialized) methods to get the metadata of a single segment.
   
   ```java
   SegmentStats getMetadata(SegmentId segmentId);
   ```
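
   A minimal sketch of such an accessor (a trimmed, illustrative version, with the segment id as a plain `String` and `SegmentStats` as a stand-in for the real Druid class):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative sketch of the suggested single-segment accessor.
   public class SchemaLookup
   {
     static final class SegmentStats
     {
       final long numRows;
       final String fingerprint;

       SegmentStats(long numRows, String fingerprint)
       {
         this.numRows = numRows;
         this.fingerprint = fingerprint;
       }
     }

     private final Map<String, SegmentStats> segmentIdToMetadataMap = new HashMap<>();

     public void add(String segmentId, long numRows, String fingerprint)
     {
       segmentIdToMetadataMap.put(segmentId, new SegmentStats(numRows, fingerprint));
     }

     // Utility accessor: callers look up one segment's metadata without the
     // whole serialized map being exposed.
     public SegmentStats getMetadata(String segmentId)
     {
       return segmentIdToMetadataMap.get(segmentId);
     }
   }
   ```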



##########
services/src/main/java/org/apache/druid/cli/CliMiddleManager.java:
##########
@@ -130,6 +130,8 @@ protected List<? extends Module> getModules()
           @Override
           public void configure(Binder binder)
           {
+            CliCoordinator.validateCentralizedDatasourceSchemaConfig(getProperties());

Review Comment:
   Why does MM need to do the validation if it is not going to use the schema 
config?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

