voonhous commented on code in PR #8418:
URL: https://github.com/apache/hudi/pull/8418#discussion_r1162333501
##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/cluster/ITTestHoodieFlinkClustering.java:
##########
@@ -419,4 +425,179 @@ public void testHoodieFlinkClusteringScheduleAfterArchive() throws Exception {
.stream().anyMatch(fg -> fg.getSlices()
.stream().anyMatch(s ->
s.getDataFilePath().contains(firstClusteringInstant))));
}
+
+ /**
+ * Test to ensure that creating a table with a column of TIMESTAMP(9) will throw errors.
+ * @throws Exception
+ */
+ @Test
+ public void testHoodieFlinkClusteringWithTimestampNanos() {
+ // create hoodie table and insert data into it
Review Comment:
Alright then, if this is your worry, then maybe the correct fix is to allow
clustering to read TIMESTAMP(9) types?
Spark is able to read TIMESTAMP(9) types, but from my tests it only displays
them at millisecond granularity.
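
For illustration, here's a minimal, self-contained sketch (hypothetical class
and method names, not the actual Hudi fix) of what reading TIMESTAMP(9) at
millis granularity could look like: the nanosecond component is simply
truncated before the value is handed downstream.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TimestampNanosSketch {

  // Truncate a nanosecond-precision timestamp (TIMESTAMP(9)) down to
  // millisecond precision (TIMESTAMP(3)), mirroring the millis
  // granularity observed in Spark's output.
  static Instant truncateNanosToMillis(Instant tsWithNanos) {
    return tsWithNanos.truncatedTo(ChronoUnit.MILLIS);
  }

  public static void main(String[] args) {
    Instant nanos = Instant.parse("2023-04-10T12:34:56.123456789Z");
    // Prints 2023-04-10T12:34:56.123Z -- the sub-millisecond digits are dropped.
    System.out.println(truncateNanosToMillis(nanos));
  }
}
```

If clustering went this route, the precision loss would at least be explicit,
rather than a hard failure at table creation time.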
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]