[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221371033

File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java

Diff context:
```diff
@@ -320,25 +322,32 @@ public void awaitInitialization() throws InterruptedException
 private void addSegment(final DruidServerMetadata server, final DataSegment segment)
 {
   synchronized (lock) {
-    final Map knownSegments = segmentSignatures.get(segment.getDataSource());
+    final Map knownSegments = segmentMetadataInfo.get(segment.getDataSource());
     if (knownSegments == null || !knownSegments.containsKey(segment)) {
+      final long isRealtime = server.segmentReplicatable() ? 0 : 1;
```

Review comment (on the `isRealtime` line):
It's a pretty standard way in Druid of differentiating realtime and non-realtime servers. See CoordinatorBasedSegmentHandoffNotifier, DruidSchema, and CachingClusteredClient, all of which use this method to determine whether segments are served by realtime servers. Maybe we could make this clearer by adding a new "isRealtimeServer" method.

----
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org
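To see how this 0/1 convention surfaces to SQL users, a query along these lines (an illustrative sketch against the `sys.segments` table introduced in this PR, not an example from the PR itself) breaks segments down by the resulting flag:

```sql
-- is_realtime is exposed as a long: 1 when the segment is served by a
-- realtime (non-replicatable) server, 0 otherwise.
SELECT is_realtime, COUNT(*) AS num_segments
FROM sys.segments
GROUP BY is_realtime;
```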
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221370122

File path: docs/content/querying/sql.md

Diff context:
````diff
@@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are published yet or not.
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in current segment, this value could be null if unkown to broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|JSON-serialized datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers(any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|host|Hostname of the server|
+|plaintext_port|Unsecured port of the server, or -1 if plaintext traffic is disabled|
+|tls_port|TLS port of the server, or -1 if TLS is disabled|
+|server_type|Type of Druid service. Possible values include: historical, realtime and indexer_executor.|
````

Review comment (on the `server_type` row):
It's not the greatest name, but it's what the servers announce themselves as (legacy reasons).
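The announced type names can be inspected directly; for example, a sketch of a query against the documented `sys.servers` columns (illustrative only, not part of the PR):

```sql
-- Count servers by the legacy type name they announce themselves as
-- (historical, realtime, or indexer_executor).
SELECT server_type, COUNT(*) AS num_servers
FROM sys.servers
GROUP BY server_type;
```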
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221369584

File path: docs/content/querying/sql.md

Diff context:
````diff
@@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are published yet or not.
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers(any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
````

Review comment (on the `server` column):
It does support concatenating, but I think we should keep this here. It's basically the primary key of the server table, and is used for joins with the segment_servers table. The other fields (host, plaintext_port, etc.) are provided too as conveniences. With system tables, since they're all generated dynamically anyway, it's OK to have some redundancy when it makes the user experience more convenient.
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221369004

File path: docs/content/querying/sql.md

Diff context:
```diff
@@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
```

Review comment:
I think it's good to put it in quotes, so like: The "sys" schema.
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r218953728

File path: docs/content/querying/sql.md

Diff context:
```diff
@@ -468,6 +468,11 @@ plan SQL queries. This metadata is cached on broker startup and also updated per
 [SegmentMetadata queries](segmentmetadataquery.html). Background metadata refreshing is triggered by segments entering and exiting the cluster, and can also be throttled through configuration.
+Druid exposes system information through special system tables. There are two such schemas available: Information Schema and System Schema
```

Review comment:
Please use better grammar here: some punctuation is missing.
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212905285

File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java

Diff context (new file, @@ -0,0 +1,537 @@; the quoted file is truncated in the archive):
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package io.druid.sql.calcite.schema;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Preconditions;
import com.google.common.collect.FluentIterable;
import com.google.common.collect.ImmutableMap;
import com.google.inject.Inject;
import io.druid.client.DruidServer;
import io.druid.client.ImmutableDruidDataSource;
import io.druid.client.TimelineServerView;
import io.druid.client.coordinator.Coordinator;
import io.druid.client.indexing.IndexingService;
import io.druid.client.selector.QueryableDruidServer;
import io.druid.discovery.DruidLeaderClient;
import io.druid.indexer.TaskStatusPlus;
import io.druid.java.util.common.ISE;
import io.druid.java.util.common.StringUtils;
import io.druid.java.util.common.logger.Logger;
import io.druid.java.util.http.client.response.FullResponseHolder;
import io.druid.segment.column.ValueType;
import io.druid.server.coordination.ServerType;
import io.druid.server.security.AuthorizerMapper;
import io.druid.sql.calcite.table.RowSignature;
import io.druid.timeline.DataSegment;
import org.apache.calcite.DataContext;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.Table;
import org.apache.calcite.schema.impl.AbstractSchema;
import org.apache.calcite.schema.impl.AbstractTable;
import org.jboss.netty.handler.codec.http.HttpMethod;
import org.jboss.netty.handler.codec.http.HttpResponseStatus;
import org.joda.time.DateTime;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class SystemSchema extends AbstractSchema
{
  private static final Logger log = new Logger(SystemSchema.class);

  public static final String NAME = "sys";
  private static final String SEGMENTS_TABLE = "segments";
  private static final String SERVERS_TABLE = "servers";
  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
  private static final String TASKS_TABLE = "tasks";
  private static final int SEGMENTS_TABLE_SIZE;
  private static final int SEGMENT_SERVERS_TABLE_SIZE;

  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
      .builder()
      .add("segment_id", ValueType.STRING)
      .add("datasource", ValueType.STRING)
      .add("start", ValueType.STRING)
      .add("end", ValueType.STRING)
      .add("size", ValueType.LONG)
      .add("version", ValueType.STRING)
      .add("partition_num", ValueType.STRING)
      .add("num_replicas", ValueType.LONG)
      .add("is_published", ValueType.LONG)
      .add("is_available", ValueType.LONG)
      .add("is_realtime", ValueType.LONG)
      .add("payload", ValueType.STRING)
      .build();

  private static final RowSignature SERVERS_SIGNATURE = RowSignature
      .builder()
      .add("server", ValueType.STRING)
      .add("scheme", ValueType.STRING)
      .add("server_type", ValueType.STRING)
      .add("tier", ValueType.STRING)
      .add("curr_size", ValueType.LONG)
      .add("max_size", ValueType.LONG)
      .build();

  private static final RowSignature SERVERSEGMENTS_SIGNATURE = RowSignature
      .builder()
      .add("server", ValueType.STRING)
      .add("segment_id", ValueType.STRING)
      .build();

  private static final RowSignature TASKS_SIGNATURE = RowSignature
      .builder()
      .add("task_id", ValueType.STRING)
      .add("type", ValueType.STRING)
      .add("datasource", ValueType.STRING)
```
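Given the task columns being added to TASKS_SIGNATURE above, a consumer-side query might look like the following (an illustrative sketch; the datasource name and the idea of ordering by creation time are assumptions, not taken from this diff):

```sql
-- Recently created indexing tasks for one datasource, newest first.
SELECT task_id, "type", status, created_time
FROM sys.tasks
WHERE datasource = 'wikipedia'
ORDER BY created_time DESC;
```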
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219002133

File path: server/src/main/java/org/apache/druid/server/http/MetadataResource.java

Diff context:
```diff
@@ -136,6 +137,23 @@ public Response getDatabaseSegmentDataSource(
     return Response.status(Response.Status.OK).entity(dataSource).build();
 }
+  @GET
+  @Path("/segments")
+  @Produces(MediaType.APPLICATION_JSON)
+  @ResourceFilters(DatasourceResourceFilter.class)
+  public Response getDatabaseSegmentSegments()
```

Review comment:
Maybe just getDatabaseSegments?
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212034814

File path: docs/content/querying/sql.md

Diff context:
````diff
@@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are published yet or not.
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers(any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|scheme|Server scheme http or https|
+|server_type|Type of druid service for example historical, realtime, bridge, indexer_executor|
+|tier|Distribution tier see [druid.server.tier](#../configuration/index.html#Historical-General-Configuration)|
+|current_size|Current size of segments in bytes on this server|
+|max_size|Max size in bytes this server recommends to assign to segments see [druid.server.maxSize](#../configuration/index.html#Historical-General-Configuration)|
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM sys.servers;
+```
+
+### SEGMENT_SERVERS table
+
+SEGMENT_SERVERS is used to join SEGMENTS with SERVERS table
+
+|Column|Notes|
+|--|-|
+|server|Server name in format host:port (Primary key of [servers table](#SERVERS-table))|
+|segment_id|Segment identifier (Primary key of [segments table](#SEGMENTS-table))|
+
+To retrieve information from segment_servers table, use the query:
+```sql
+SELECT * FROM sys.segment_servers;
+```
+
+### TASKS table
+
+The tasks table provides information about active and recently-completed indexing tasks. For more information
````

Review comment:
"check out" not "checkout out"
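The `tier`, `current_size`, and `max_size` columns documented above lend themselves to a capacity query like this one (a sketch against the documented columns, not an example from the PR):

```sql
-- Per-tier disk utilization across all data servers.
SELECT tier,
       SUM(current_size) AS used_bytes,
       SUM(max_size) AS capacity_bytes
FROM sys.servers
GROUP BY tier;
```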
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212033491

File path: docs/content/querying/sql.md

Diff context:
````diff
@@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are published yet or not.
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers(any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
````

Review comment (on the `server` column):
It'd be useful to have another field for just the host. Imagine doing stuff like `GROUP BY server_host` to collect together all servers run on the same machine.
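If the suggested host-only field were added, the grouping imagined above might look like the following sketch. The `server_host` column name is hypothetical here; it does not exist in this revision of the PR:

```sql
-- Hypothetical: server_host is the proposed host-only column, used to
-- collect together all Druid processes running on the same machine.
SELECT server_host, COUNT(*) AS num_server_processes
FROM sys.servers
GROUP BY server_host;
```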
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212033111

File path: docs/content/querying/sql.md

Diff context:
````diff
@@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are published yet or not.
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
````

Review comment (on the `num_replicas` row):
Number _of_ replicas
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212034699

File path: docs/content/querying/sql.md

Diff context:
````diff
@@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are published yet or not.
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers(any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|scheme|Server scheme http or https|
+|server_type|Type of druid service for example historical, realtime, bridge, indexer_executor|
+|tier|Distribution tier see [druid.server.tier](#../configuration/index.html#Historical-General-Configuration)|
+|current_size|Current size of segments in bytes on this server|
+|max_size|Max size in bytes this server recommends to assign to segments see [druid.server.maxSize](#../configuration/index.html#Historical-General-Configuration)|
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM sys.servers;
+```
+
+### SEGMENT_SERVERS table
+
+SEGMENT_SERVERS is used to join SEGMENTS with SERVERS table
+
+|Column|Notes|
+|--|-|
+|server|Server name in format host:port (Primary key of [servers table](#SERVERS-table))|
+|segment_id|Segment identifier (Primary key of [segments table](#SEGMENTS-table))|
+
+To retrieve information from segment_servers table, use the query:
+```sql
+SELECT * FROM sys.segment_servers;
+```
````

Review comment:
Would be awesome to include an example here of a JOIN between "segments" and "servers". Maybe a query that shows the breakdown of number of segments for a specific datasource, by server. Something like grouping by server, filter by datasource, count segments.
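A sketch of the kind of JOIN being requested here, written against the tables as documented in this revision (column and table names come from the docs above; this is an illustrative query, not one from the PR):

```sql
-- Breakdown of segment counts for one datasource, by server:
-- group by server, filter by datasource, count segments.
SELECT servers.server, COUNT(*) AS num_segments
FROM sys.segments AS segments
INNER JOIN sys.segment_servers AS segment_servers
  ON segments.segment_id = segment_servers.segment_id
INNER JOIN sys.servers AS servers
  ON segment_servers.server = servers.server
WHERE segments.datasource = 'wikipedia'
GROUP BY servers.server;
```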
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212033839

File path: docs/content/querying/sql.md

Diff context:
````diff
@@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are published yet or not.
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers(any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|scheme|Server scheme http or https|
+|server_type|Type of druid service for example historical, realtime, bridge, indexer_executor|
````

Review comment (on the `server_type` row):
Capitalize Druid, and this sentence needs a bit more punctuation. How about:

> Type of Druid service. Possible values include: historical, realtime, bridge, and indexer_executor.
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r218954294

File path: docs/content/querying/sql.md

Diff context:
```diff
@@ -519,6 +524,89 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
```

Review comment:
It's "sys" now (not SYS).
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r218994134

File path: server/src/main/java/org/apache/druid/discovery/DruidLeaderClient.java

Diff context:
```diff
@@ -133,6 +134,17 @@ public FullResponseHolder go(Request request) throws IOException, InterruptedExc
     return go(request, new FullResponseHandler(StandardCharsets.UTF_8));
 }
+  public ListenableFuture goStream(
+      final Request request,
+      final HttpResponseHandler handler
+  )
+  {
+    return httpClient.go(
```

Review comment:
No need to change this, but IMO it would look better to write this on one line.
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212030735

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java

@@ -0,0 +1,435 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.inject.Inject;
+import io.druid.client.BrokerServerView;
+import io.druid.client.DruidServer;
+import io.druid.client.ImmutableDruidDataSource;
+import io.druid.client.coordinator.Coordinator;
+import io.druid.client.indexing.IndexingService;
+import io.druid.client.selector.QueryableDruidServer;
+import io.druid.discovery.DruidLeaderClient;
+import io.druid.indexer.TaskStatusPlus;
+import io.druid.java.util.common.ISE;
+import io.druid.java.util.common.StringUtils;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.java.util.http.client.response.FullResponseHolder;
+import io.druid.segment.column.ValueType;
+import io.druid.server.coordination.ServerType;
+import io.druid.server.security.AuthorizerMapper;
+import io.druid.sql.calcite.table.RowSignature;
+import io.druid.timeline.DataSegment;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponseStatus;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "SYS";
+  private static final String SEGMENTS_TABLE = "SEGMENTS";
+  private static final String SERVERS_TABLE = "SERVERS";
+  private static final String SERVERSEGMENTS_TABLE = "SEGMENTSERVERS";
+  private static final String TASKS_TABLE = "TASKS";
+  private static final int SEGMENTS_TABLE_SIZE;
+  private static final int SERVERSEGMENTS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+      .builder()
+      .add("SEGMENT_ID", ValueType.STRING)
+      .add("DATASOURCE", ValueType.STRING)
+      .add("START", ValueType.STRING)
+      .add("END", ValueType.STRING)
+      .add("IS_PUBLISHED", ValueType.STRING)
+      .add("IS_AVAILABLE", ValueType.STRING)
+      .add("IS_REALTIME", ValueType.STRING)
+      .add("PAYLOAD", ValueType.STRING)
+      .build();
+
+  private static final RowSignature SERVERS_SIGNATURE = RowSignature
+      .builder()
+      .add("SERVER", ValueType.STRING)
+      .add("SERVER_TYPE", ValueType.STRING)
+      .add("TIER", ValueType.STRING)
+      .add("CURR_SIZE", ValueType.STRING)
+      .add("MAX_SIZE", ValueType.STRING)
+      .build();
+
+  private static final RowSignature SERVERSEGMENTS_SIGNATURE = RowSignature
+      .builder()
+      .add("SERVER", ValueType.STRING)
+      .add("SEGMENT_ID", ValueType.STRING)
+      .build();
+
+  private static final RowSignature TASKS_SIGNATURE = RowSignature
+      .builder()
+      .add("TASK_ID", ValueType.STRING)
+      .add("TYPE", ValueType.STRING)
+      .add("DATASOURCE", ValueType.STRING)
+      .add("CREATED_TIME", ValueType.STRING)
+      .add("QUEUE_INSERTION_TIME", ValueType.STRING)
+      .add("STATUS", ValueType.STRING)
+      .add("RUNNER_STATUS", ValueType.STRING)
+      .add("DURATION",
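The row signatures above are built with a chained immutable builder. As a minimal, self-contained sketch of that pattern (not Druid's actual `RowSignature`, whose API is richer), the hypothetical `RowSignatureSketch` and `ValueTypeSketch` below show how `add(...)` calls accumulate ordered column name/type pairs that `build()` freezes:

```java
import java.util.*;

// Hypothetical stand-in for Druid's ValueType enum.
enum ValueTypeSketch { STRING, LONG }

// Hypothetical stand-in for RowSignature: an ordered, immutable
// mapping of column names to value types, assembled via a builder.
final class RowSignatureSketch {
    private final LinkedHashMap<String, ValueTypeSketch> columns;

    private RowSignatureSketch(LinkedHashMap<String, ValueTypeSketch> columns)
    {
        this.columns = columns;
    }

    static Builder builder()
    {
        return new Builder();
    }

    List<String> columnNames()
    {
        return new ArrayList<>(columns.keySet());
    }

    static final class Builder {
        private final LinkedHashMap<String, ValueTypeSketch> columns = new LinkedHashMap<>();

        Builder add(String name, ValueTypeSketch type)
        {
            columns.put(name, type);
            return this; // returning `this` enables the chained .add(...).add(...) style
        }

        RowSignatureSketch build()
        {
            // Defensive copy so later Builder mutation cannot leak into the built signature.
            return new RowSignatureSketch(new LinkedHashMap<>(columns));
        }
    }
}
```

Under these assumptions, `RowSignatureSketch.builder().add("SEGMENT_ID", ValueTypeSketch.STRING).add("DATASOURCE", ValueTypeSketch.STRING).build()` yields a signature whose column order matches the insertion order, which is why the chained style in the diff reads as a column-by-column schema declaration.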
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212030245

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212030105

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212029930

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r212027406

## File path: docs/content/querying/sql.md

@@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|

+## SYSTEM SCHEMA
+
+SYSTEM_TABLES provide visibility into the druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+select * from SYS.SEGMENTS where DATASOURCE='wikipedia';

Review comment: Hmm, good question. I think in SQL underscores are more normal, although `data_source` is very weird so let's not do that. Probably `datasource` is ok. If anyone else has an opinion please go for it.
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210661609

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210663473

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210677643

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210662196

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210662719

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210659443

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r207957383

## File path: docs/content/querying/sql.md

@@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+SYSTEM_TABLES provide visibility into the druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+select * from SYS.SEGMENTS where DATASOURCE='wikipedia';
+```
+
+### SEGMENTS table
+Segments tables provides details on all the segments, both published and served(but not published).

Review comment:
To me this reads a bit unclear, I'd suggest trying something like:

> Segments tables provides details on all Druid segments, whether they are published yet or not.

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org
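The docs snippet under review demonstrates the basic pattern of filtering sys.segments by datasource. The shape of that query can be sketched with Python's sqlite3 as a stand-in; the table, column names, and rows below are invented for illustration (a real query goes to the Druid broker's SQL endpoint, not sqlite):

```python
import sqlite3

# In-memory stand-in for sys.segments; columns mirror the ones the PR
# documents (segment_id, datasource, is_published, ...). Rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE segments (
        segment_id   TEXT,
        datasource   TEXT,
        is_published INTEGER,
        is_available INTEGER,
        is_realtime  INTEGER
    )
""")
conn.executemany(
    "INSERT INTO segments VALUES (?, ?, ?, ?, ?)",
    [
        ("wikipedia_2018-08-01/2018-08-02_v1_0", "wikipedia", 1, 1, 0),
        ("wikipedia_2018-08-02/2018-08-03_v1_0", "wikipedia", 0, 1, 1),
        ("foo_2018-08-01/2018-08-02_v1_0", "foo", 1, 1, 0),
    ],
)

# The filter pattern from the docs: all segments for one datasource.
rows = conn.execute(
    "SELECT segment_id FROM segments WHERE datasource = 'wikipedia'"
).fetchall()
print(len(rows))  # 2
```

The same WHERE-clause filter works unchanged against the real sys.segments table once the schema lands.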
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210677565

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210660722

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r209743197

## File path: docs/content/querying/sql.md

@@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+SYSTEM_TABLES provide visibility into the druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+select * from SYS.SEGMENTS where DATASOURCE='wikipedia';
+```
+
+### SEGMENTS table
+Segments tables provides details on all the segments, both published and served(but not published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|segment in metadata store|
+|IS_AVAILABLE|segment is being served|
+|IS_REALTIME|segment served on a realtime server|
+|PAYLOAD|jsonified datasegment payload|
+
+### SERVERS table
+
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SERVER_TYPE||
+|TIER||
+|CURRENT_SIZE||
+|MAX_SIZE||
+
+To retrieve all servers information, use the query
+```sql
+select * from SYS.SERVERS;
+```
+
+### SEGMENTSERVERS table
+
+SEGMENTSERVERS is used to join SEGMENTS with SERVERS table

Review comment:
`SEGMENT_SERVERS` would be a nicer name, I think. I think we should provide an example too.
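The review above asks for an example of the bridge-table join. One way such an example could look is sketched below with sqlite3 as a stand-in; the server names, segment ids, and rows are invented, and the table name SEGMENT_SERVERS follows the reviewer's naming suggestion rather than anything merged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE servers (server TEXT, server_type TEXT, tier TEXT);
    CREATE TABLE segment_servers (server TEXT, segment_id TEXT);
    CREATE TABLE segments (segment_id TEXT, datasource TEXT);

    INSERT INTO servers VALUES
        ('historical1:8083', 'historical', '_default_tier'),
        ('historical2:8083', 'historical', '_default_tier');
    INSERT INTO segments VALUES
        ('wikipedia_2018-08-01/2018-08-02_v1_0', 'wikipedia');
    INSERT INTO segment_servers VALUES
        ('historical1:8083', 'wikipedia_2018-08-01/2018-08-02_v1_0'),
        ('historical2:8083', 'wikipedia_2018-08-01/2018-08-02_v1_0');
""")

# segment_servers is the bridge table: join it to both sides to see
# which servers host each segment (COUNT gives the replica count).
rows = conn.execute("""
    SELECT seg.segment_id, COUNT(srv.server) AS num_replicas
    FROM segments seg
    JOIN segment_servers ss ON seg.segment_id = ss.segment_id
    JOIN servers srv ON ss.server = srv.server
    GROUP BY seg.segment_id
""").fetchall()
print(rows)  # [('wikipedia_2018-08-01/2018-08-02_v1_0', 2)]
```

The double join is the whole point of the bridge table: neither SEGMENTS nor SERVERS can express the many-to-many hosting relationship on its own.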
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r209744738

## File path: docs/content/querying/sql.md

@@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
+## SYSTEM SCHEMA
+
+SYSTEM_TABLES provide visibility into the druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+select * from SYS.SEGMENTS where DATASOURCE='wikipedia';
+```
+
+### SEGMENTS table
+Segments tables provides details on all the segments, both published and served(but not published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|segment in metadata store|
+|IS_AVAILABLE|segment is being served|
+|IS_REALTIME|segment served on a realtime server|
+|PAYLOAD|jsonified datasegment payload|
+
+### SERVERS table
+
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SERVER_TYPE||
+|TIER||
+|CURRENT_SIZE||
+|MAX_SIZE||
+
+To retrieve all servers information, use the query
+```sql
+select * from SYS.SERVERS;
+```
+
+### SEGMENTSERVERS table
+
+SEGMENTSERVERS is used to join SEGMENTS with SERVERS table
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SEGMENT_ID||
+
+### TASKS table
+
+TASKS table provides tasks info from overlord.
+
+|Column|Notes|
+|--|-|
+|TASK_ID||

Review comment:
These should all have comments too.
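Beyond documenting the TASKS columns, a typical use of the table is summarizing tasks by status. A sketch of that query shape, again with sqlite3 as a stand-in and invented task ids and statuses (the real table is populated from the overlord):

```python
import sqlite3

# Stand-in for sys.tasks with a few invented rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (task_id TEXT, datasource TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO tasks VALUES (?, ?, ?)",
    [
        ("index_wikipedia_1", "wikipedia", "RUNNING"),
        ("index_wikipedia_2", "wikipedia", "SUCCESS"),
        ("index_foo_1", "foo", "FAILED"),
    ],
)

# Roll up the task list into counts per status.
rows = conn.execute(
    "SELECT status, COUNT(*) FROM tasks GROUP BY status ORDER BY status"
).fetchall()
print(rows)  # [('FAILED', 1), ('RUNNING', 1), ('SUCCESS', 1)]
```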
[GitHub] gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210663939

## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r209748999 ## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210627821 ## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r208006429 ## File path: docs/content/querying/sql.md +To retrieve all servers information, use the query Review comment: Better grammar: "To retrieve information about all servers, use the query:"
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r208004645 ## File path: docs/content/querying/sql.md +|IS_PUBLISHED|segment in metadata store| Review comment: It'd be clearer to expand this a bit: "True if this segment has been published to the metadata store." Similar comment for the other ones.
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r210669152 ## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r209758183 ## File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r209743270 ## File path: docs/content/querying/sql.md +### SEGMENTSERVERS table +SEGMENTSERVERS is used to join SEGMENTS with SERVERS table +|Column|Notes| +|--|-| +|SERVER|| Review comment: Please include in the notes which column these correspond to in the other tables.
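A worked example may make the join relationship clearer than the column notes alone. This is a sketch under the naming in the quoted diff (SEGMENTSERVERS keyed by SERVER and SEGMENT_ID), written lowercase per the style suggested elsewhere in this review; treat all identifiers as illustrative:

```sql
-- Sketch: which server hosts each segment of a datasource, joining
-- segments -> segmentservers -> servers on segment_id and server
SELECT srv.server, srv.tier, seg.segment_id
FROM sys.segments seg
JOIN sys.segmentservers ss ON seg.segment_id = ss.segment_id
JOIN sys.servers srv ON ss.server = srv.server
WHERE seg.datasource = 'wikipedia';
```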
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r207922200 ## File path: docs/content/querying/sql.md +select * from SYS.SEGMENTS where DATASOURCE='wikipedia'; Review comment: Lowercase seems more Druid-y, so I think I'd prefer `SELECT * FROM sys.segments WHERE dataSource = 'wikipedia'`. The only reason INFORMATION_SCHEMA isn't like this is because it's a standard thing and uppercase seems more normal for it from looking at other databases.
gianm commented on a change in pull request #6094: Introduce SystemSchema tables (#5989) URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r208005582 ## File path: docs/content/querying/sql.md +### SERVERS table +|Column|Notes| +|--|-| +|SERVER|| Review comment: Please include a description for all of these columns, including:
- Server should detail the expected format (host:port? does it include scheme?)
- Scheme should be somewhere in here. Possibly a separate field "scheme".
- Server type should list the possible server types.
- Max size should reference the historical docs and call out that it's referring to the `druid.server.maxSize` property.
- Anything else that seems useful!
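As an illustration of why those column descriptions matter, here is a capacity query over the quoted SERVERS columns. It is a sketch only: it assumes CURR_SIZE and MAX_SIZE hold byte counts that can be cast to numbers (the quoted signature declares them STRING, which is exactly the kind of detail the review asks to document):

```sql
-- Sketch: per-server fill ratio; assumes curr_size/max_size are castable
-- byte counts (the quoted signature types them as STRING)
SELECT server, tier,
       CAST(curr_size AS BIGINT) * 100.0 / CAST(max_size AS BIGINT) AS pct_used
FROM sys.servers;
```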