bvaradar commented on issue #2151:
URL: https://github.com/apache/hudi/issues/2151#issuecomment-705780717


   @tandonraghav : Regarding your question on compaction: since you are using WriteClient-level APIs, the following will give you the pending compaction timestamps:

   `HoodieTable.getHoodieView().getPendingCompactionOperations().map(Pair::getKey).distinct()`
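   The shape of that pipeline can be sketched with plain Java streams. This is only an illustration: the real `getPendingCompactionOperations()` returns a stream of (compaction instant timestamp, operation) pairs from Hudi's file-system view, which is mimicked here with `Map.Entry` and made-up instant timestamps since the Hudi classes are not on the classpath.

   ```java
   import java.util.AbstractMap.SimpleEntry;
   import java.util.List;
   import java.util.Map;
   import java.util.stream.Collectors;
   import java.util.stream.Stream;

   public class PendingCompactions {
       // Stand-in for getPendingCompactionOperations(): a stream of
       // (compactionInstantTime, operation) pairs. Several operations can
       // share one compaction instant, hence the need for distinct().
       static Stream<Map.Entry<String, String>> pendingCompactionOperations() {
           return Stream.of(
               new SimpleEntry<>("20201008093000", "op-for-file-group-1"),
               new SimpleEntry<>("20201008093000", "op-for-file-group-2"),
               new SimpleEntry<>("20201008101500", "op-for-file-group-3"));
       }

       public static void main(String[] args) {
           // Same pipeline as the real call: project each pair to its key
           // (the compaction instant timestamp) and de-duplicate.
           List<String> timestamps = pendingCompactionOperations()
               .map(Map.Entry::getKey)
               .distinct()
               .collect(Collectors.toList());
           System.out.println(timestamps);
       }
   }
   ```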
   
   Regarding the setup, hundreds of topics in Kafka should be fine and is not an 
anti-pattern. Curious how you manage the various schemas corresponding to each Mongo 
collection when writing to a single Kafka topic. Are you using Avro encoding and a 
schema registry?
   
   
   

