nsivabalan commented on code in PR #18295:
URL: https://github.com/apache/hudi/pull/18295#discussion_r3018895760
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/config/HoodieLockConfig.java:
##########
@@ -241,6 +248,16 @@ public class HoodieLockConfig extends HoodieConfig {
@Deprecated
public static final String LOCK_PROVIDER_CLASS_PROP =
LOCK_PROVIDER_CLASS_NAME.key();
+  // Lock provider class names from modules not directly accessible in hudi-client-common.
Review Comment:
In theory this might look neat, but in practice it may not land as cleanly as we think. I did bring this up with Surya last week when he proposed this. Anyway, let me clarify. As of now, all touch points to the MDT from the data table write client follow a fairly standard sequence:
1. txnManager.beginTxn
2. mdtWriter = getMetadataWriter
3. mdtWriter.applyUpdates
4. mdtWriter.close
5. txnManager.endTxn
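The standard sequence above can be sketched as follows. This is an illustrative skeleton, not Hudi's actual classes: `TransactionManager`, `MetadataWriter`, and `getMetadataWriter` are stand-ins for the real `TransactionManager` / `HoodieTableMetadataWriter` touch points, with steps recorded so the ordering is explicit.

```java
import java.util.ArrayList;
import java.util.List;

public class MdtFlowSketch {
  // Records each touch point so the sequence can be inspected.
  static final List<String> steps = new ArrayList<>();

  // Stand-in for Hudi's TransactionManager (illustrative only).
  static class TransactionManager {
    void beginTxn() { steps.add("beginTxn"); } // acquire data table lock
    void endTxn() { steps.add("endTxn"); }     // release data table lock
  }

  // Stand-in for the MDT writer (illustrative only).
  static class MetadataWriter implements AutoCloseable {
    void applyUpdates() { steps.add("applyUpdates"); } // apply updates to MDT
    @Override public void close() { steps.add("close"); }
  }

  static MetadataWriter getMetadataWriter() {
    steps.add("getMetadataWriter");
    return new MetadataWriter();
  }

  public static void main(String[] args) {
    TransactionManager txnManager = new TransactionManager();
    txnManager.beginTxn();                                   // 1. begin txn
    try (MetadataWriter mdtWriter = getMetadataWriter()) {   // 2. get MDT writer
      mdtWriter.applyUpdates();                              // 3. apply updates
    }                                                        // 4. close writer
    txnManager.endTxn();                                     // 5. end txn
    System.out.println(String.join(",", steps));
  }
}
```

The key property of this shape is that every MDT interaction happens strictly inside one data table transaction, which is what the proposal below breaks apart.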
But with the above proposal, we might end up with something like:
```
SparkRDDWriteClient.performAsyncTableServicesInMDT() {
    // without acquiring any data table lock:
    1. mdtWriter = getMetadataWriter
    2. Execute compaction for pending compaction instants. // this means we
       need to expose APIs in the MDT writer just for compaction execution,
       without completing it.
    3. txnManager.beginTxn // data table lock
    4. Complete the compaction for the MDT. // again, we need to expose APIs
       in the MDT writer just to complete the compaction.
    5. txnManager.endTxn // data table lock
}
```
This does not seem elegant and unnecessarily complicates the layering. But if you folks have a better alternative, let us know. Happy to explore.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]