I'm not a parquet expert, but I can confirm Hudi does not maintain a
specific memory strategy for parquet writers.
Best,
Danny
nicolas paris wrote on Mon, Nov 20, 2023 at 17:54:
>
> hi everyone,
>
> from the tuning guide:
>
> > Off-heap memory : Hudi writes parquet files and that needs good
> > amount of off-heap memory proportional to schema width. Consider
> > setting something like spark.executor.memoryOverhead or
> > spark.driver.memoryOverhead, if you are running into such failures.
>
> can
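For reference, the overhead settings the tuning guide mentions can be passed at submit time. A minimal sketch (the values and the job script name are illustrative, not recommendations from the guide):

```shell
# Illustrative spark-submit flags raising off-heap overhead for a Hudi write job.
# Tune the sizes to your schema width and executor memory; the script name is hypothetical.
spark-submit \
  --conf spark.executor.memoryOverhead=3g \
  --conf spark.driver.memoryOverhead=1g \
  hudi_ingest_job.py
```

The same keys can also be set in spark-defaults.conf or on the SparkSession builder.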