aljoscha commented on a change in pull request #8695: 
[FLINK-12805][FLINK-12808][FLINK-12809][table-api] Introduce 
PartitionableTableSource and PartitionableTableSink and OverwritableTableSink
URL: https://github.com/apache/flink/pull/8695#discussion_r298953952
 
 

 ##########
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sinks/PartitionableTableSink.java
 ##########
 @@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sinks;
+
+import org.apache.flink.annotation.Experimental;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * An interface for partitionable {@link TableSink}. A partitionable sink can write
+ * query results into partitions.
+ *
+ * <p>Partition columns are defined via {@link #getPartitionFieldNames()} and the field names
+ * should be in a strict order. All the partition fields should exist in the
+ * {@link TableSink#getTableSchema()}.
+ *
+ * <p>For example, a partitioned table named {@code my_table} with a table schema
+ * {@code [a INT, b VARCHAR, c DOUBLE, dt VARCHAR, country VARCHAR]} is partitioned on the
+ * columns {@code dt, country}. Then {@code dt} is the first partition column and
+ * {@code country} is the second partition column.
+ *
+ * <p>We can insert data into table partitions using the INSERT INTO PARTITION syntax, for example:
+ * <pre>
+ * <code>
+ *     INSERT INTO my_table PARTITION (dt='2019-06-20', country='bar') select a, b, c from my_view
+ * </code>
+ * </pre>
+ * When every partition column is assigned a value in the PARTITION clause, the statement
+ * inserts into a static partition, i.e. the query result is written into the static partition
+ * {@code dt='2019-06-20', country='bar'}. The user-specified static partitions are passed
+ * to the sink via {@link #setStaticPartition(Map)}.
+ *
+ * <p>The INSERT INTO PARTITION syntax also supports dynamic partition inserts.
+ * <pre>
+ * <code>
+ *     INSERT INTO my_table PARTITION (dt='2019-06-20') select a, b, c, country from another_view
+ * </code>
+ * </pre>
+ * When only some of the partition columns (a prefix of all partition columns) are assigned
+ * values in the PARTITION clause, the statement writes the query result into a dynamic
+ * partition. In the above example, the static partition part is {@code dt='2019-06-20'},
+ * which is passed to the sink via {@link #setStaticPartition(Map)}, while {@code country}
+ * is the dynamic partition column, whose value is taken from each record.
+ */
+@Experimental
+public interface PartitionableTableSink {
+
+       /**
+        * Gets the partition field names of the table. The partition field names should be in
+        * a strict order, i.e. the order specified in the PARTITION statement of the DDL.
+        * This should be an empty list if the table is not partitioned.
+        *
+        * <p>All the partition fields should exist in the {@link TableSink#getTableSchema()}.
+        *
+        * @return the partition field names of the table, empty if the table is not partitioned.
+        */
+       List<String> getPartitionFieldNames();
+
+       /**
+        * Sets the static partition into the {@link TableSink}. The static partition may cover
+        * only some of the partition columns. See the class Javadoc for more details.
+        *
+        * <p>The static partition is represented as a {@code Map<String, String>} which maps from
+        * partition field name to partition value. The partition values are all encoded as strings,
+        * i.e. encoded via {@code String.valueOf(...)}. For example, for a static partition
+        * {@code f0=1024, f1="foo", f2="bar"}, where f0 is of integer type and f1 and f2 are of
+        * string type, the values are encoded as the strings "1024", "foo" and "bar", and can be
+        * decoded back to the original literals based on the field types.
+        *
+        * @param partitions the user-specified static partition
+        */
+       void setStaticPartition(Map<String, String> partitions);
+
+       /**
+        * If this returns true, the sink can assume that all records will be sorted by the
+        * partition fields before being consumed by the {@link TableSink}, i.e. the sink will
+        * receive data one partition at a time. For some sinks, this can be used to reduce the
+        * number of partition writers and improve writing performance.
+        *
+        * @param supportSort whether the execution mode supports sorting, e.g. sorting is only
+        *                    supported in batch mode, not in streaming mode.
+        *
+        * @return whether the data needs to be sorted by partition before being consumed by the
+        * sink. Default is false. If {@code supportSort} is false, this must never return true
+        * (i.e. require sorting), otherwise the translation will fail.
+        */
+       default boolean requiresSortByPartition(boolean supportSort) {
 
 Review comment:
   I think this method is somewhat tricky because it both asks the sink what it wants and, at
the same time, lets the sink configure itself. I.e., a sink that returns `true` will set an
internal field that changes its behaviour. I didn't consider that second aspect before, which
is why I'm only noticing it now. So maybe we should call it `configurePartitionGrouping(boolean)`
and explicitly mention this also in the Javadoc?
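   A minimal sketch of what that rename could look like, to make the dual ask/configure
behaviour concrete. All names besides the proposed method are made up for illustration and
are not actual Flink API:

```java
// Illustrative sketch of the proposed configurePartitionGrouping(boolean) method.
// The interface and class names here are hypothetical, not Flink's real API.
interface PartitionGroupingSink {

    /**
     * Asks the sink whether it wants grouped (partition-sorted) input and, as a
     * side effect, lets the sink configure its internal behaviour accordingly.
     */
    default boolean configurePartitionGrouping(boolean supportsGrouping) {
        return false;
    }
}

class ExampleFileSystemSink implements PartitionGroupingSink {

    // Internal field set as a side effect of configurePartitionGrouping(...),
    // which is the aspect the comment above wants the method name to surface.
    private boolean groupedInput = false;

    @Override
    public boolean configurePartitionGrouping(boolean supportsGrouping) {
        // Only request grouped input when the execution mode can provide it
        // (e.g. batch); in streaming mode supportsGrouping would be false.
        this.groupedInput = supportsGrouping;
        return this.groupedInput;
    }

    boolean isGroupedInput() {
        return groupedInput;
    }
}
```

   With this shape, the planner would call the method once during translation, and the
sink's later writing behaviour depends on the value it stored.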

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
