I'm not a parquet expert, but I can confirm Hudi does not maintain a
specific memory strategy for its parquet writers.

Best,
Danny

nicolas paris <nicolas.pa...@riseup.net> wrote on Mon, Nov 20, 2023 at 17:54:
>
> hi everyone,
>
> from the tuning guide:
>
> > Off-heap memory : Hudi writes parquet files and that needs good
> amount of off-heap memory proportional to schema width. Consider
> setting something like spark.executor.memoryOverhead or
> spark.driver.memoryOverhead, if you are running into such failures.
>
>
> Can you elaborate on whether the off-heap usage is specific to Hudi
> when writing parquet files, or whether it is general parquet behavior?
> Any details on this would help.
>
> Thanks a lot
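
For context, the tuning-guide advice quoted above is usually applied as
plain Spark configuration. A minimal sketch of the flags involved follows;
the memory sizes and the jar name are illustrative assumptions, not
recommendations from the guide:

```shell
# Illustrative spark-submit invocation for a Hudi write job.
# The sizes below are placeholder assumptions; tune them for your workload
# and schema width.
spark-submit \
  --conf spark.executor.memory=8g \
  --conf spark.executor.memoryOverhead=2g \
  --conf spark.driver.memory=4g \
  --conf spark.driver.memoryOverhead=1g \
  your-hudi-job.jar
```

Note that when spark.executor.memoryOverhead is left unset, Spark defaults
it to 10% of executor memory with a 384 MiB floor, which may be too little
headroom for wide-schema parquet writes.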
