the-other-tim-brown commented on code in PR #13290:
URL: https://github.com/apache/hudi/pull/13290#discussion_r2107723711
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java:
##########
@@ -2909,6 +2918,26 @@ public int getSecondaryIndexParallelism() {
return metadataConfig.getSecondaryIndexParallelism();
}
+  /**
+   * Whether to enable streaming writes to metadata table or not.
+   * We have support for streaming writes only in the SPARK engine (due to Spark task retry intricacies) and for table version > 8 due to the
+   * pre-requisite of NBCC.
+   * To support streaming writes, we might need NBCC support for the metadata table, since there could be an ingestion and a table service from the data table
Review Comment:
`we might need` -> I think this is a requirement right?
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java:
##########
@@ -2909,6 +2918,26 @@ public int getSecondaryIndexParallelism() {
return metadataConfig.getSecondaryIndexParallelism();
}
+  /**
+   * Whether to enable streaming writes to metadata table or not.
+   * We have support for streaming writes only in the SPARK engine (due to Spark task retry intricacies) and for table version > 8 due to the
+   * pre-requisite of NBCC.
+   * To support streaming writes, we might need NBCC support for the metadata table, since there could be an ingestion and a table service from the data table
+   * concurrently trying to write to metadata table.
+   * In Spark, when streaming writes are enabled, incremental operations like insert, upsert, delete and table services (compaction and clustering)
Review Comment:
   Be sure to include that these are operations on the data table
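The javadoc under review describes two pre-conditions for streaming writes to the metadata table: the Spark engine (because of its task-retry semantics) and table version > 8 (the NBCC pre-requisite). The guard below is a hypothetical sketch of that gating logic only; the names `EngineType` and `isStreamingWritesToMetadataSupported` are illustrative and not Hudi's actual API.

```java
// Hypothetical sketch of the pre-conditions described in the javadoc above.
// Not Hudi's actual implementation; names are illustrative.
public class StreamingWritesGuard {

  // Illustrative engine enum; Hudi's real engine abstraction differs.
  enum EngineType { SPARK, FLINK, JAVA }

  /**
   * Streaming writes to the metadata table are supported only when:
   *  - the engine is Spark (task-retry intricacies are handled there), and
   *  - the table version is > 8 (the NBCC pre-requisite, per the javadoc).
   */
  static boolean isStreamingWritesToMetadataSupported(EngineType engine, int tableVersion) {
    return engine == EngineType.SPARK && tableVersion > 8;
  }

  public static void main(String[] args) {
    System.out.println(isStreamingWritesToMetadataSupported(EngineType.SPARK, 9));  // true
    System.out.println(isStreamingWritesToMetadataSupported(EngineType.FLINK, 9));  // false
    System.out.println(isStreamingWritesToMetadataSupported(EngineType.SPARK, 8));  // false
  }
}
```

This mirrors the review's point: the operations being gated (insert, upsert, delete, compaction, clustering) are operations on the data table, and both conditions must hold before their updates can stream into the metadata table.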
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]