lirui-apache commented on a change in pull request #12403:
URL: https://github.com/apache/flink/pull/12403#discussion_r435917242



##########
File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTypeUtil.java
##########
@@ -182,11 +182,17 @@ private static DataType toFlinkPrimitiveType(PrimitiveTypeInfo hiveType) {
 
 		@Override
 		public TypeInfo visit(CharType charType) {
-			if (charType.getLength() > HiveChar.MAX_CHAR_LENGTH) {
-				throw new CatalogException(
-						String.format("HiveCatalog doesn't support char type with length of '%d'. " +
-								"The maximum length is %d",
+			// Flink treats string literal UDF parameters as CHAR. Such types may have precisions not supported by
+			// Hive, e.g. CHAR(0). Promote it to STRING in such case if we're told not to check precision.
+			if (charType.getLength() > HiveChar.MAX_CHAR_LENGTH || charType.getLength() < 1) {
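
For illustration, a hedged sketch of how the whole `visit` method could look with this guard in place. The method body beyond the quoted condition, and the `checkPrecision` flag name, are assumptions for the sake of the example, not the actual patch:

```java
// Hypothetical sketch of the visitor method in HiveTypeUtil -- not the
// actual patch. Assumes a boolean `checkPrecision` in scope and uses
// Hive's TypeInfoFactory to construct type infos.
@Override
public TypeInfo visit(CharType charType) {
	if (charType.getLength() > HiveChar.MAX_CHAR_LENGTH || charType.getLength() < 1) {
		if (!checkPrecision) {
			// Promote unsupported precisions (e.g. CHAR(0) from string
			// literal UDF parameters) to Hive's STRING type.
			return TypeInfoFactory.stringTypeInfo;
		}
		throw new CatalogException(String.format(
				"HiveCatalog doesn't support char type with length of '%d'. The maximum length is %d",
				charType.getLength(), HiveChar.MAX_CHAR_LENGTH));
	}
	return TypeInfoFactory.getCharTypeInfo(charType.getLength());
}
```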

Review comment:
       I'm not sure that makes sense, because we currently don't have a use case involving `VARCHAR(0)`.
   
   Even for CHAR, Flink has logic to [verify the length](https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/CharType.java#L67) is > 0, so having CHAR(0) as a UDF parameter type seems a little inconsistent in the first place.
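   
   For what it's worth, constructing a `CharType` of length 0 directly does trip that check; a minimal sketch (exception type and message come from `flink-table-common` and are approximate here):
   
```java
import org.apache.flink.table.types.logical.CharType;

public class CharZeroDemo {
	public static void main(String[] args) {
		try {
			// The public constructor validates length >= 1, so CHAR(0) is
			// rejected here -- which is what makes a CHAR(0) UDF parameter
			// type look inconsistent.
			new CharType(0);
		} catch (Exception e) {
			System.out.println("Rejected: " + e.getMessage());
		}
	}
}
```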



