bvaradar commented on code in PR #17450:
URL: https://github.com/apache/hudi/pull/17450#discussion_r2583514012


##########
hudi-sync/hudi-hive-sync/src/test/java/org/apache/hudi/hive/TestParquetToSparkSchemaUtils.java:
##########
@@ -52,35 +61,65 @@ private static SparkSqlParser createSqlParser() {
   }
 
   @Test
-  public void testConvertPrimitiveType() {
+  public void testConvertBasicTypes() {
     StructType sparkSchema = parser.parseTableSchema(
-            "f0 int, f1 string, f3 bigint,"
-                    + " f4 decimal(5,2), f5 timestamp, f6 date,"
-                    + " f7 short, f8 float, f9 double, f10 byte,"
-                    + " f11 tinyint, f12 smallint, f13 binary, f14 boolean");
+            "f0 int NOT NULL, f1 string NOT NULL, f2 bigint NOT NULL, f3 float, f4 double, f5 boolean, f6 binary, f7 binary NOT NULL,"

Review Comment:
   Can you please restore the original test case as-is, so it demonstrates that we are not introducing any regression here? With this change, I am unable to see how we tested, and can be confident, that there is no regression from migrating the Hive sync path from parquet-schema-based schema sync to Hoodie Schema.
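One way to address this review is to keep the removed test verbatim next to the new one, since the original schema string is still visible in the deleted lines of the diff. A minimal sketch, reconstructed from that diff context (the class skeleton, `parser`, and `createSqlParser` are assumptions inferred from the hunk, and the assertions are placeholders, not the PR's actual code):

```java
import org.apache.spark.sql.types.StructType;
import org.junit.jupiter.api.Test;

public class TestParquetToSparkSchemaUtils {

  // `parser` and `createSqlParser()` are assumed from the diff context above.
  private static final SparkSqlParser parser = createSqlParser();

  // Restored verbatim from the removed lines of this diff, so the
  // pre-change (parquet-schema-based) conversion path stays exercised.
  @Test
  public void testConvertPrimitiveType() {
    StructType sparkSchema = parser.parseTableSchema(
        "f0 int, f1 string, f3 bigint,"
            + " f4 decimal(5,2), f5 timestamp, f6 date,"
            + " f7 short, f8 float, f9 double, f10 byte,"
            + " f11 tinyint, f12 smallint, f13 binary, f14 boolean");
    // ... keep the original assertions here unchanged, so any behavioural
    // difference in the Hoodie-Schema-based sync surfaces as a test failure.
  }
}
```

Keeping the old test untouched, rather than rewriting it, is what gives the regression signal the reviewer is asking for: the new `testConvertBasicTypes` covers the new nullability cases, while the unchanged assertions guard the old conversion behaviour.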


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
