yihua commented on code in PR #18480:
URL: https://github.com/apache/hudi/pull/18480#discussion_r3069552450


##########
hudi-sync/hudi-sync-common/src/main/java/org/apache/hudi/sync/common/util/SparkSchemaUtils.java:
##########
@@ -56,6 +56,7 @@ private static String convertFieldType(HoodieSchema originalFieldSchema) {
         return "\"string\"";
       case BYTES:
       case FIXED:
+      case VECTOR:

Review Comment:
   🤖 While this covers the Spark sync path, `HoodieSchemaConverter.convertToDataType()` in `hudi-flink-client`
   (used by `HoodieHiveCatalog.getTable()` to reconstruct the Flink schema) hits
   `default: throw new IllegalArgumentException("Unsupported HoodieSchemaType: VECTOR")` —
   so a Flink user opening a table with a VECTOR column via the Hive catalog would still get an exception.
   Should `case VECTOR: return DataTypes.BYTES().notNull()` (or similar) be added to
   `HoodieSchemaConverter.java` as part of this PR, or is that a deliberate follow-up?
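   To make the suggestion concrete, here is a minimal self-contained sketch of the switch-case shape being proposed. Note this uses stand-in types (`SchemaType`, string return values) rather than the real `HoodieSchemaType` enum or Flink's `DataType`, so it only illustrates the fall-through/mapping pattern, not the actual Hudi code:

   ```java
   // Hypothetical sketch only — SchemaType and the String return values are
   // stand-ins for org.apache.hudi's HoodieSchemaType and Flink's
   // org.apache.flink.table.types.DataType (e.g. DataTypes.BYTES().notNull()).
   public class VectorMappingSketch {
       enum SchemaType { STRING, BYTES, FIXED, VECTOR }

       static String convertToDataType(SchemaType type) {
           switch (type) {
               case STRING:
                   return "STRING";
               case BYTES:
               case FIXED:
                   return "BYTES";
               case VECTOR:
                   // proposed addition: map VECTOR to a non-null bytes type
                   // instead of falling into the default and throwing
                   return "BYTES NOT NULL";
               default:
                   throw new IllegalArgumentException(
                       "Unsupported HoodieSchemaType: " + type);
           }
       }

       public static void main(String[] args) {
           System.out.println(convertToDataType(SchemaType.VECTOR));
       }
   }
   ```

   In the real converter the `case VECTOR` branch would return a Flink `DataType` rather than a string; whether it should share the `BYTES`/`FIXED` mapping or get its own non-null variant is exactly the open question above.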
   
   <sub><i>- Generated by an AI agent and may contain mistakes. Please verify 
any suggestions before applying.</i></sub>



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
