voonhous commented on code in PR #8418:
URL: https://github.com/apache/hudi/pull/8418#discussion_r1162321558


##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/cluster/ITTestHoodieFlinkClustering.java:
##########
@@ -419,4 +425,179 @@ public void testHoodieFlinkClusteringScheduleAfterArchive() throws Exception {
         .stream().anyMatch(fg -> fg.getSlices()
             .stream().anyMatch(s -> s.getDataFilePath().contains(firstClusteringInstant))));
   }
+
+  /**
+   * Test to ensure that creating a table with a column of TIMESTAMP(9) will throw errors
+   * @throws Exception
+   */
+  @Test
+  public void testHoodieFlinkClusteringWithTimestampNanos() {
+    // create hoodie table and insert into data

Review Comment:
   If we want to support `TIMESTAMP(9)`, opening a separate JIRA issue + PR for it would be better. Let's use this PR to fix the inconsistency where Stream and Append modes write different physical types to Parquet.
   
   The reason for doing it in another PR is as follows:
   
   Stream mode does not allow `TIMESTAMP(9)` types. If Stream mode (upserts) does not allow `TIMESTAMP(9)`, there is no reason to allow `TIMESTAMP(9)` for APPEND mode.
   
   IIUC, type support should be consistent for Hudi tables across all write modes.
   
   On top of that, it's better to separate features from bug fixes, so that people who want to cherry-pick fixes into their internal branches will find it easier to do so.
   
   In short, let's fix the bug in this PR first and align the behaviour for both STREAM and APPEND modes. I'll undo the removal of `Timestamp96Writer` and tidy up the tests.
   
   Proper `TIMESTAMP(9)` support should be done in another PR. Whether to implement it as an INT96/BINARY type or an INT64 type, we can discuss in detail in a JIRA/PR. WDYT?
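   For context on why the choice of physical type matters here, below is a minimal, self-contained sketch (plain Java, no Hudi/Flink dependencies; the class name is hypothetical) showing how narrowing a nanosecond-precision timestamp to a millisecond-based representation silently drops the sub-millisecond digits:

```java
import java.time.Instant;

// Hypothetical demo class: illustrates precision loss when a TIMESTAMP(9)
// value is stored in a physical type that only keeps milliseconds.
public class TimestampPrecisionDemo {
  public static void main(String[] args) {
    // A TIMESTAMP(9) value carries full nanosecond precision.
    Instant nanos = Instant.parse("2023-04-10T12:00:00.123456789Z");

    // Narrowing to epoch millis, as a millisecond-precision physical type
    // would, truncates everything below the millisecond.
    long asMillis = nanos.toEpochMilli();
    Instant roundTripped = Instant.ofEpochMilli(asMillis);

    System.out.println(nanos.getNano());        // 123456789
    System.out.println(roundTripped.getNano()); // 123000000
  }
}
```

   An INT64-based Parquet timestamp can be annotated with NANOS precision, whereas the legacy INT96 layout stores nanos-of-day directly; either avoids the truncation above, which is why the choice deserves its own JIRA discussion.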



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
