kfaraz commented on code in PR #15817:
URL: https://github.com/apache/druid/pull/15817#discussion_r1553312419
##########
services/src/main/java/org/apache/druid/cli/CliCoordinator.java:
##########
@@ -465,20 +469,20 @@ public CoordinatorCustomDutyGroups get()
  private static class HeartbeatSupplier implements Provider<Supplier<Map<String, Object>>>
{
- private final DruidCoordinator coordinator;
+ private final DruidLeaderSelector leaderSelector;
Review Comment:
I think this change was already merged in a different PR. Would need some
conflict resolution here.
##########
server/src/main/java/org/apache/druid/segment/metadata/CentralizedDatasourceSchemaConfig.java:
##########
@@ -29,36 +29,66 @@ public class CentralizedDatasourceSchemaConfig
{
@JsonProperty
private boolean enabled = false;
+ @JsonProperty
+ private boolean backFillEnabled = true;
+ @JsonProperty
+ private long backFillPeriod = 60000;
-  // If realtime segment schema should be published in segment announcement flow
-  // This config is temporary and will be removed.
+ // internal config meant for testing
@JsonProperty
- private boolean announceRealtimeSegmentSchema = false;
+ private boolean taskSchemaPublishDisabled = false;
+ @JsonProperty
public boolean isEnabled()
{
return enabled;
}
- public boolean announceRealtimeSegmentSchema()
+ @JsonProperty
+ public boolean isBackFillEnabled()
+ {
+ return backFillEnabled;
+ }
+
+ @JsonProperty
+ public long getBackFillPeriod()
{
- return announceRealtimeSegmentSchema;
+ return backFillPeriod;
+ }
+
+ @JsonProperty
+ public boolean isTaskSchemaPublishDisabled()
+ {
+ return taskSchemaPublishDisabled;
}
public static CentralizedDatasourceSchemaConfig create()
{
return new CentralizedDatasourceSchemaConfig();
}
+ public static CentralizedDatasourceSchemaConfig create(boolean enabled)
Review Comment:
Is this config maintained per datasource? Should it be called
`CentralizedSegmentSchemaConfig` instead?
##########
services/src/main/java/org/apache/druid/cli/CliIndexer.java:
##########
@@ -146,6 +150,30 @@ public void configure(Binder binder)
     JsonConfigProvider.bind(binder, "druid", DruidNode.class, Parent.class);
     JsonConfigProvider.bind(binder, "druid.worker", WorkerConfig.class);
+    String serverViewType = (String) properties.getOrDefault(
+        ServerViewModule.SERVERVIEW_TYPE_PROPERTY,
+        ServerViewModule.DEFAULT_SERVERVIEW_TYPE
+    );
+
+    if (Boolean.parseBoolean(properties.getProperty(CliCoordinator.CENTRALIZED_DATASOURCE_SCHEMA_ENABLED))
+        && !serverViewType.equals(ServerViewModule.SERVERVIEW_TYPE_HTTP)) {
+      throw DruidException
+          .forPersona(DruidException.Persona.ADMIN)
+          .ofCategory(DruidException.Category.UNSUPPORTED)
+          .build(
+              StringUtils.format(
+                  "CentralizedDatasourceSchema feature is incompatible with config %1$s=%2$s. "
+                  + "Please consider switching to http based segment discovery (set %1$s=%3$s) "
+                  + "or disable the feature (set %4$s=false).",
+                  ServerViewModule.SERVERVIEW_TYPE_PROPERTY,
+                  serverViewType,
+                  ServerViewModule.SERVERVIEW_TYPE_HTTP,
+                  CliCoordinator.CENTRALIZED_DATASOURCE_SCHEMA_ENABLED
+              ));
+    }
+
+    JsonConfigProvider.bind(binder, "druid.centralizedDatasourceSchema", CentralizedDatasourceSchemaConfig.class);
Review Comment:
Why is this whole logic (validation + binding) needed on Indexer? I would
think that this should happen on the coordinator. How does the indexer use the
`CentralizedDatasourceSchemaConfig`?
##########
services/src/main/java/org/apache/druid/cli/CliOverlord.java:
##########
@@ -187,6 +199,30 @@ public void configure(Binder binder)
.to(DEFAULT_SERVICE_NAME);
binder.bindConstant().annotatedWith(Names.named("servicePort")).to(8090);
binder.bindConstant().annotatedWith(Names.named("tlsServicePort")).to(8290);
+
+    String serverViewType = (String) properties.getOrDefault(
+        ServerViewModule.SERVERVIEW_TYPE_PROPERTY,
+        ServerViewModule.DEFAULT_SERVERVIEW_TYPE
+    );
+
+    if (Boolean.parseBoolean(properties.getProperty(CliCoordinator.CENTRALIZED_DATASOURCE_SCHEMA_ENABLED))
+        && !serverViewType.equals(ServerViewModule.SERVERVIEW_TYPE_HTTP)) {
+      throw DruidException
+          .forPersona(DruidException.Persona.ADMIN)
+          .ofCategory(DruidException.Category.UNSUPPORTED)
+          .build(
+              StringUtils.format(
+                  "CentralizedDatasourceSchema feature is incompatible with config %1$s=%2$s. "
+                  + "Please consider switching to http based segment discovery (set %1$s=%3$s) "
+                  + "or disable the feature (set %4$s=false).",
+                  ServerViewModule.SERVERVIEW_TYPE_PROPERTY,
+                  serverViewType,
+                  ServerViewModule.SERVERVIEW_TYPE_HTTP,
+                  CliCoordinator.CENTRALIZED_DATASOURCE_SCHEMA_ENABLED
+              ));
+    }
+
+    JsonConfigProvider.bind(binder, "druid.centralizedDatasourceSchema", CentralizedDatasourceSchemaConfig.class);
Review Comment:
Why is this logic needed here?
##########
processing/src/main/java/org/apache/druid/segment/column/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.column;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+ private final Map<String, SegmentStats> segmentStatsMap;
+
+ // Mapping of schema fingerprint to payload.
+ private final Map<String, SchemaPayload> schemaPayloadMap;
+
+ @JsonCreator
+ public MinimalSegmentSchemas(
+      @JsonProperty("segmentStatsMap") Map<String, SegmentStats> segmentStatsMap,
+      @JsonProperty("schemaPayloadMap") Map<String, SchemaPayload> schemaPayloadMap
+ )
+ {
+ this.segmentStatsMap = segmentStatsMap;
+ this.schemaPayloadMap = schemaPayloadMap;
+ }
+
+ public MinimalSegmentSchemas()
+ {
+ this.segmentStatsMap = new HashMap<>();
+ this.schemaPayloadMap = new HashMap<>();
+ }
+
+ @JsonProperty
+ public Map<String, SegmentStats> getSegmentStatsMap()
+ {
+ return segmentStatsMap;
+ }
+
+ @JsonProperty
+ public Map<String, SchemaPayload> getSchemaPayloadMap()
+ {
+ return schemaPayloadMap;
+ }
+
+ public boolean isNonEmpty()
+ {
+ return segmentStatsMap.size() > 0;
+ }
+
+ /**
+ * Add schema information for the segment.
+ */
+ public void addSchema(
+ String segmentId,
+ String fingerprint,
+ long numRows,
+ SchemaPayload schemaPayload
+ )
+ {
+ segmentStatsMap.put(segmentId, new SegmentStats(numRows, fingerprint));
+ schemaPayloadMap.put(fingerprint, schemaPayload);
+ }
+
+ /**
+ * Merge with another instance.
+ */
+ public void merge(MinimalSegmentSchemas other)
+ {
+ this.segmentStatsMap.putAll(other.getSegmentStatsMap());
+ this.schemaPayloadMap.putAll(other.getSchemaPayloadMap());
+ }
+
+ public int size()
+ {
+ return schemaPayloadMap.size();
+ }
+
+ @Override
+ public boolean equals(Object o)
+ {
+ if (this == o) {
+ return true;
+ }
+ if (o == null || getClass() != o.getClass()) {
+ return false;
+ }
+ MinimalSegmentSchemas that = (MinimalSegmentSchemas) o;
+ return Objects.equals(segmentStatsMap, that.segmentStatsMap)
+ && Objects.equals(schemaPayloadMap, that.schemaPayloadMap);
+ }
+
+ @Override
+ public int hashCode()
+ {
+ return Objects.hash(segmentStatsMap, schemaPayloadMap);
+ }
+
+ @Override
+ public String toString()
+ {
+ return "MinimalSegmentSchemas{" +
+ "segmentStatsMap=" + segmentStatsMap +
+ ", schemaPayloadMap=" + schemaPayloadMap +
+ '}';
+ }
+
+ /**
+ * Encapsulates segment level information like numRows, schema fingerprint.
+ */
+ public static class SegmentStats
+ {
+ final Long numRows;
+ final String fingerprint;
Review Comment:
Rename to `schemaFingerprint`.
##########
server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java:
##########
@@ -565,13 +585,18 @@ List<CoordinatorDuty> makeIndexingServiceDuties()
private List<CoordinatorDuty> makeMetadataStoreManagementDuties()
{
- return Arrays.asList(
- new KillSupervisors(config, metadataManager.supervisors()),
- new KillAuditLog(config, metadataManager.audit()),
- new KillRules(config, metadataManager.rules()),
-        new KillDatasourceMetadata(config, metadataManager.indexer(), metadataManager.supervisors()),
-        new KillCompactionConfig(config, metadataManager.segments(), metadataManager.configs())
- );
+ List<CoordinatorDuty> duties = new ArrayList<>();
+
+ duties.add(new KillSupervisors(config, metadataManager.supervisors()));
+ duties.add(new KillAuditLog(config, metadataManager.audit()));
+ duties.add(new KillRules(config, metadataManager.rules()));
+    duties.add(new KillDatasourceMetadata(config, metadataManager.indexer(), metadataManager.supervisors()));
+    duties.add(new KillCompactionConfig(config, metadataManager.segments(), metadataManager.configs()));
+
+ if (centralizedDatasourceSchemaConfig.isEnabled()) {
Review Comment:
I would advise using a different config to control cleanup of the schema,
the same way that is done for the other metadata cleanup types. So, even if
someone has centralized schema enabled, they may choose to turn off schema
cleanup.
```
druid.coordinator.kill.segmentSchema.on=false (true by default)
druid.coordinator.kill.segmentSchema.period=PT1H
druid.coordinator.kill.segmentSchema.durationToRetain=PT6H
```
See any of the other metadata cleanup duties for reference.
Also, the new duty should extend `MetadataCleanupDuty`.
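To illustrate the config-driven cleanup pattern being suggested, here is a minimal, self-contained sketch. All names are hypothetical: the real duty would extend Druid's `MetadataCleanupDuty` and bind the `druid.coordinator.kill.segmentSchema.*` properties via Jackson rather than plain constructor arguments.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of a config-driven metadata cleanup duty.
// Not Druid's actual MetadataCleanupDuty API.
class SegmentSchemaCleanupSketch
{
  private final boolean enabled;           // druid.coordinator.kill.segmentSchema.on
  private final Duration period;           // druid.coordinator.kill.segmentSchema.period
  private final Duration durationToRetain; // druid.coordinator.kill.segmentSchema.durationToRetain
  private Instant lastRunTime = Instant.EPOCH;

  SegmentSchemaCleanupSketch(boolean enabled, Duration period, Duration durationToRetain)
  {
    this.enabled = enabled;
    this.period = period;
    this.durationToRetain = durationToRetain;
  }

  // Cleanup runs only when enabled, and at most once per period.
  boolean shouldRun(Instant now)
  {
    return enabled && !now.isBefore(lastRunTime.plus(period));
  }

  // Schemas unreferenced since before this cutoff are eligible for deletion.
  Instant retainCutoff(Instant now)
  {
    return now.minus(durationToRetain);
  }

  void markRun(Instant now)
  {
    lastRunTime = now;
  }
}
```

With this shape, an operator who enables centralized schema can still disable cleanup independently, matching the behavior of the other kill duties.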
##########
sql/src/main/java/org/apache/druid/sql/calcite/schema/BrokerSegmentMetadataCache.java:
##########
@@ -148,6 +154,23 @@ public ServerView.CallbackAction segmentSchemasAnnounced(SegmentSchemas segmentS
);
}
+ @LifecycleStart
+ public void start() throws InterruptedException
+ {
+ log.info("%s starting cache initialization.", getClass().getSimpleName());
Review Comment:
Logs anyway have the class name in them.
```suggestion
log.info("Initializing cache.");
```
##########
processing/src/main/java/org/apache/druid/segment/column/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.column;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
+ private final Map<String, SegmentStats> segmentStatsMap;
+
+ // Mapping of schema fingerprint to payload.
+ private final Map<String, SchemaPayload> schemaPayloadMap;
Review Comment:
Rename to `schemaFingerprintToPayloadMap`.
##########
processing/src/main/java/org/apache/druid/metadata/MetadataStorageTablesConfig.java:
##########
@@ -81,6 +81,9 @@ public static MetadataStorageTablesConfig fromBase(String base)
@JsonProperty("supervisors")
private final String supervisorTable;
+ @JsonProperty("segmentSchema")
Review Comment:
All table names use the plural form. Use `segmentSchemas` instead.
##########
integration-tests/src/test/java/org/apache/druid/tests/TestNGGroup.java:
##########
@@ -163,4 +163,8 @@ public class TestNGGroup
public static final String HTTP_ENDPOINT = "http-endpoint";
  public static final String CENTRALIZED_DATASOURCE_SCHEMA = "centralized-datasource-schema";
+
+  public static final String CDS_TASK_SCHEMA_PUBLISH_DISABLED = "cds-task-schema-publish-disabled";
+
+  public static final String CDS_COORDINATOR_SMQ_DISABLED = "cds-coordinator-smq-disabled";
Review Comment:
What does "smq" stand for?
##########
processing/src/main/java/org/apache/druid/segment/column/MinimalSegmentSchemas.java:
##########
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.column;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Compact representation of segment schema for multiple segments.
+ */
+public class MinimalSegmentSchemas
+{
+  // Mapping of segmentId to segment level information like schema fingerprint and numRows.
Review Comment:
Rather than these comments, name the fields in a way that is obvious. Maybe
rename this field to `segmentIdToMetadataMap`.
##########
server/src/main/java/org/apache/druid/server/http/MetadataResource.java:
##########
@@ -156,37 +158,43 @@ public Response getAllUsedSegments(
      @QueryParam("includeRealtimeSegments") final @Nullable String includeRealtimeSegments
  )
  {
-    // realtime segments can be requested only when {@code includeOverShadowedStatus} is set
-    if (includeOvershadowedStatus == null && includeRealtimeSegments != null) {
-      return Response.status(Response.Status.BAD_REQUEST).build();
-    }
+    try {
+      // realtime segments can be requested only when {@code includeOverShadowedStatus} is set
+      if (includeOvershadowedStatus == null && includeRealtimeSegments != null) {
+        return Response.status(Response.Status.BAD_REQUEST).build();
+      }

-    if (includeOvershadowedStatus != null) {
-      // note that realtime segments are returned only when druid.centralizedDatasourceSchema.enabled is set on the Coordinator
-      // when the feature is disabled we do not want to increase the payload size polled by the Brokers, since they already have this information
-      return getAllUsedSegmentsWithAdditionalDetails(req, dataSources, includeRealtimeSegments);
-    }
+      if (includeOvershadowedStatus != null) {
+        // note that realtime segments are returned only when druid.centralizedDatasourceSchema.enabled is set on the Coordinator
+        // when the feature is disabled we do not want to increase the payload size polled by the Brokers, since they already have this information
+        return getAllUsedSegmentsWithAdditionalDetails(req, dataSources, includeRealtimeSegments);
+      }

-    Collection<ImmutableDruidDataSource> dataSourcesWithUsedSegments =
-        segmentsMetadataManager.getImmutableDataSourcesWithAllUsedSegments();
-    if (dataSources != null && !dataSources.isEmpty()) {
-      dataSourcesWithUsedSegments = dataSourcesWithUsedSegments
-          .stream()
-          .filter(dataSourceWithUsedSegments -> dataSources.contains(dataSourceWithUsedSegments.getName()))
-          .collect(Collectors.toList());
-    }
-    final Stream<DataSegment> usedSegments = dataSourcesWithUsedSegments
-        .stream()
-        .flatMap(t -> t.getSegments().stream());
+      Collection<ImmutableDruidDataSource> dataSourcesWithUsedSegments =
+          segmentsMetadataManager.getImmutableDataSourcesWithAllUsedSegments();
+      if (dataSources != null && !dataSources.isEmpty()) {
+        dataSourcesWithUsedSegments = dataSourcesWithUsedSegments
+            .stream()
+            .filter(dataSourceWithUsedSegments -> dataSources.contains(dataSourceWithUsedSegments.getName()))
+            .collect(Collectors.toList());
+      }
+      final Stream<DataSegment> usedSegments = dataSourcesWithUsedSegments
+          .stream()
+          .flatMap(t -> t.getSegments().stream());

-    final Function<DataSegment, Iterable<ResourceAction>> raGenerator = segment -> Collections.singletonList(
-        AuthorizationUtils.DATASOURCE_READ_RA_GENERATOR.apply(segment.getDataSource()));
+      final Function<DataSegment, Iterable<ResourceAction>> raGenerator = segment -> Collections.singletonList(
+          AuthorizationUtils.DATASOURCE_READ_RA_GENERATOR.apply(segment.getDataSource()));

-    final Iterable<DataSegment> authorizedSegments =
-        AuthorizationUtils.filterAuthorizedResources(req, usedSegments::iterator, raGenerator, authorizerMapper);
+      final Iterable<DataSegment> authorizedSegments =
+          AuthorizationUtils.filterAuthorizedResources(req, usedSegments::iterator, raGenerator, authorizerMapper);

-    Response.ResponseBuilder builder = Response.status(Response.Status.OK);
-    return builder.entity(authorizedSegments).build();
+      Response.ResponseBuilder builder = Response.status(Response.Status.OK);
+      return builder.entity(authorizedSegments).build();
+    }
+    catch (Exception e) {
Review Comment:
Build a cleaner error response in case of a `DruidException`.
```suggestion
catch (DruidException e) {
return ServletResourceUtils.buildErrorResponseFrom(e);
}
catch (Exception e) {
```
##########
server/src/main/java/org/apache/druid/server/coordinator/duty/KillUnreferencedSegmentSchemas.java:
##########
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.server.coordinator.duty;
+
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.metadata.SegmentsMetadataManager;
+import org.apache.druid.segment.metadata.SegmentSchemaManager;
+import org.apache.druid.server.coordinator.DruidCoordinatorRuntimeParams;
+
+import javax.annotation.Nullable;
+
+/**
+ * Coordinator duty to clean up segment schema which are not referenced by any
segment.
+ */
+public class KillUnreferencedSegmentSchemas implements CoordinatorDuty
Review Comment:
This should extend `MetadataCleanupDuty`.
##########
extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/exec/ControllerImpl.java:
##########
@@ -330,6 +334,14 @@ public ControllerImpl(
    this.isFailOnEmptyInsertEnabled = MultiStageQueryContext.isFailOnEmptyInsertEnabled(
        task.getQuerySpec().getQuery().context()
    );
+
+    CentralizedDatasourceSchemaConfig centralizedDatasourceSchemaConfig =
+        context.injector().getInstance(CentralizedDatasourceSchemaConfig.class);
+ if (centralizedDatasourceSchemaConfig != null) {
+ publishSchema = centralizedDatasourceSchemaConfig.isEnabled();
+ } else {
+ publishSchema = false;
+ }
Review Comment:
```suggestion
    this.publishSchema = centralizedDatasourceSchemaConfig != null
                         && centralizedDatasourceSchemaConfig.isEnabled();
```
##########
server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java:
##########
@@ -242,45 +242,86 @@ public CloseableIterator<DataSegmentPlus> retrieveUnusedSegmentsPlus(
    );
  }

-  public List<DataSegmentPlus> retrieveSegmentsById(String datasource, Set<String> segmentIds)
+ public List<DataSegmentPlus> retrieveSegmentsById(
+ String datasource,
+ Set<String> segmentIds,
+ boolean includeSchemaInfo
Review Comment:
  Rather than passing this boolean, it would be better to have two separate methods:
```
retrieveSegmentsById
retrieveSegmentsWithSchemaById
```
Internally, they may call the same private method.
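A self-contained sketch of that refactor, with placeholder types (this is not Druid's actual `DataSegmentPlus` or metadata query code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the suggested refactor: two public entry points delegating to one
// shared private method, instead of exposing a boolean parameter to callers.
class SegmentQuerySketch
{
  // Placeholder segment type; schemaFingerprint is null when schema info is omitted.
  static class Segment
  {
    final String id;
    final String schemaFingerprint;

    Segment(String id, String schemaFingerprint)
    {
      this.id = id;
      this.schemaFingerprint = schemaFingerprint;
    }
  }

  private final List<Segment> store;

  SegmentQuerySketch(List<Segment> store)
  {
    this.store = store;
  }

  List<Segment> retrieveSegmentsById(Set<String> segmentIds)
  {
    return retrieveSegmentsById(segmentIds, false);
  }

  List<Segment> retrieveSegmentsWithSchemaById(Set<String> segmentIds)
  {
    return retrieveSegmentsById(segmentIds, true);
  }

  // The boolean becomes an internal detail of the shared implementation.
  private List<Segment> retrieveSegmentsById(Set<String> segmentIds, boolean includeSchemaInfo)
  {
    List<Segment> result = new ArrayList<>();
    for (Segment s : store) {
      if (segmentIds.contains(s.id)) {
        result.add(includeSchemaInfo ? s : new Segment(s.id, null));
      }
    }
    return result;
  }
}
```

Call sites then state their intent in the method name, and the flag never leaks into the public API.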
##########
server/src/test/java/org/apache/druid/metadata/IndexerSqlMetadataStorageCoordinatorCommon.java:
##########
@@ -0,0 +1,612 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.metadata;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import com.google.common.hash.Hashing;
+import com.google.common.io.BaseEncoding;
+import org.apache.druid.java.util.common.DateTimes;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.Intervals;
+import org.apache.druid.java.util.common.Pair;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.jackson.JacksonUtils;
+import org.apache.druid.java.util.common.parsers.CloseableIterator;
+import org.apache.druid.segment.TestHelper;
+import org.apache.druid.segment.column.MinimalSegmentSchemas;
+import org.apache.druid.segment.metadata.FingerprintGenerator;
+import org.apache.druid.segment.metadata.SegmentSchemaManager;
+import org.apache.druid.segment.metadata.SegmentSchemaTestUtils;
+import org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec;
+import org.apache.druid.server.http.DataSegmentPlus;
+import org.apache.druid.timeline.DataSegment;
+import org.apache.druid.timeline.SegmentId;
+import org.apache.druid.timeline.partition.LinearShardSpec;
+import org.apache.druid.timeline.partition.NoneShardSpec;
+import org.apache.druid.timeline.partition.NumberedShardSpec;
+import org.apache.druid.timeline.partition.ShardSpec;
+import org.apache.druid.timeline.partition.TombstoneShardSpec;
+import org.joda.time.DateTime;
+import org.joda.time.Interval;
+import org.junit.Assert;
+import org.skife.jdbi.v2.PreparedBatch;
+import org.skife.jdbi.v2.ResultIterator;
+import org.skife.jdbi.v2.util.StringMapper;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class IndexerSqlMetadataStorageCoordinatorCommon
Review Comment:
```suggestion
public class IndexerSqlMetadataStorageCoordinatorTestBase
```
##########
server/src/test/java/org/apache/druid/segment/metadata/CoordinatorSegmentMetadataCacheCommon.java:
##########
@@ -36,9 +41,18 @@
public class CoordinatorSegmentMetadataCacheCommon extends SegmentMetadataCacheCommon
Review Comment:
Druid style of naming base test classes:
```suggestion
public class CoordinatorSegmentMetadataCacheTestBase extends SegmentMetadataCacheTestBase
```
##########
server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerCommon.java:
##########
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.metadata;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.druid.java.util.common.DateTimes;
+import org.apache.druid.java.util.common.Intervals;
+import org.apache.druid.segment.TestHelper;
+import org.apache.druid.segment.metadata.SegmentSchemaCache;
+import org.apache.druid.segment.metadata.SegmentSchemaManager;
+import org.apache.druid.timeline.DataSegment;
+import org.apache.druid.timeline.partition.NoneShardSpec;
+import org.joda.time.DateTime;
+
+import java.io.IOException;
+
+public class SqlSegmentsMetadataManagerCommon
Review Comment:
```suggestion
public class SqlSegmentsMetadataManagerTestBase
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]