minihippo commented on PR #5629:
URL: https://github.com/apache/hudi/pull/5629#issuecomment-1231814490

   > > How much gain do we get after the patch? I feel nervous about this change; let's pay more attention to stability first.
   > 
   > @danny0405 The current benchmark result of RFC-46:
   > 
   > * no overhead is introduced for Avro log read/write with the new merger
   > * Compared with Avro records when reading/writing the Parquet log:
   >   
   >   * for reads, Spark records improve ~33%, measured as (before e2e duration - after e2e duration) / before
   >   * for writes, Spark records improve 7%~9%. In the stage that transforms input rows into Hoodie records, it improves ~25%, but the shuffled data becomes larger than before, which costs more in the following shuffle-read stage.
   > 
   > All results are based on Spark batch write/read.
   
   After adding a map type to the test dataset, we find that the e2e write 
duration improves by 27%.
   - With further analysis, internalRow2parquet improves by 40%, which is 
consistent with the ~50% CPU cost of avro2parquet observed in the flame graph.
   - The relationship between the performance improvement and the number of 
complex columns is not linear: adding more complex columns does not further 
increase the gain.
   
   To make the benchmark result reproducible, I have added a simple version of 
the benchmark to the test suite.
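
   For reference, the improvement percentages quoted above follow the usual relative-duration formula. A minimal sketch, using hypothetical durations rather than the actual benchmark measurements:

   ```java
   public class ImprovementCalc {
       // Improvement of the "after" run relative to the "before" run,
       // expressed as a fraction of the "before" e2e duration.
       static double improvement(double beforeSeconds, double afterSeconds) {
           return (beforeSeconds - afterSeconds) / beforeSeconds;
       }

       public static void main(String[] args) {
           // Hypothetical e2e durations: 120s before the patch, 80s after.
           double gain = improvement(120.0, 80.0);
           System.out.printf("e2e read improves ~%.0f%%%n", gain * 100); // prints "e2e read improves ~33%"
       }
   }
   ```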


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]