Aitozi commented on code in PR #21522:
URL: https://github.com/apache/flink/pull/21522#discussion_r1139701322


##########
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTableUtil.java:
##########
@@ -96,11 +106,104 @@ public class HiveTableUtil {
 
     private HiveTableUtil() {}
 
-    public static TableSchema createTableSchema(
+    /** Create a Flink's Schema by hive client. */
+    public static org.apache.flink.table.api.Schema createSchema(
+            HiveConf hiveConf,
+            Table hiveTable,
+            HiveMetastoreClientWrapper client,
+            HiveShim hiveShim) {
+
+        Tuple4<List<FieldSchema>, List<FieldSchema>, Set<String>, Optional<UniqueConstraint>>
+                hiveTableInfo = extractHiveTableInfo(hiveConf, hiveTable, client, hiveShim);
+
+        return createSchema(
+                hiveTableInfo.f0,
+                hiveTableInfo.f1,
+                hiveTableInfo.f2,
+                hiveTableInfo.f3.orElse(null));
+    }
+
+    /** Create a Flink's Schema from Hive table's columns and partition keys. */
+    public static org.apache.flink.table.api.Schema createSchema(
+            List<FieldSchema> nonPartCols,
+            List<FieldSchema> partitionKeys,
+            Set<String> notNullColumns,
+            @Nullable UniqueConstraint primaryKey) {
+        return Schema.newBuilder()
+                .fromResolvedSchema(
+                        createResolvedSchema(

Review Comment:
   > Here, we will first convert to ResolvedSchema and then convert to Schema. So why not convert to Schema directly?
   
   It's intended to avoid duplicating code, because creating a `ResolvedSchema` and creating a `Schema` involve essentially the same work. IMO, we do not have to avoid `Schema.newBuilder().fromResolvedSchema().build()` as much as possible; I think it's harmless, since it is a bridge to convert a `ResolvedSchema` back to a `Schema` when needed.
   
   In this case, we can create the `ResolvedSchema` directly from the Hive table information. My original thought was that we do not even need to add a dedicated method like the one below; callers can do it themselves depending on their requirements.
   
   ```java
   Schema createSchema(
           List<FieldSchema> nonPartCols,
           List<FieldSchema> partitionKeys,
           Set<String> notNullColumns,
           @Nullable UniqueConstraint primaryKey) {
       return Schema.newBuilder()
               .fromResolvedSchema(
                       createResolvedSchema(
                               nonPartCols, partitionKeys, notNullColumns, primaryKey))
               .build();
   }
   ```
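   
   For comparison, here is a rough sketch of building the `Schema` directly from the Hive field lists, without going through `ResolvedSchema`. This is not the PR's code, just an illustration of the alternative; it assumes the existing connector helpers `HiveTypeUtil#toFlinkType` and Hive's `TypeInfoUtils#getTypeInfoFromTypeString` for the type mapping:
   
   ```java
   // Hedged sketch (not the PR's implementation) of the direct-construction
   // alternative: map each Hive FieldSchema to a Flink column and declare the
   // primary key on the builder. The helper methods used are assumptions.
   Schema createSchemaDirectly(
           List<FieldSchema> nonPartCols,
           List<FieldSchema> partitionKeys,
           Set<String> notNullColumns,
           @Nullable UniqueConstraint primaryKey) {
       Schema.Builder builder = Schema.newBuilder();
       // Non-partition columns first, then partition keys, mirroring Hive's column order.
       List<FieldSchema> allCols = new ArrayList<>(nonPartCols);
       allCols.addAll(partitionKeys);
       for (FieldSchema field : allCols) {
           // Convert the Hive type string to a Flink DataType; apply NOT NULL if required.
           DataType type =
                   HiveTypeUtil.toFlinkType(
                           TypeInfoUtils.getTypeInfoFromTypeString(field.getType()));
           builder.column(
                   field.getName(),
                   notNullColumns.contains(field.getName()) ? type.notNull() : type);
       }
       if (primaryKey != null) {
           builder.primaryKeyNamed(primaryKey.getName(), primaryKey.getColumns());
       }
       return builder.build();
   }
   ```
   
   Either way the resulting `Schema` should be equivalent; the trade-off is mainly about where the Hive-to-Flink conversion logic lives and how much of it is duplicated.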


