Hi all,

It looks like scanning all columns of an Iceberg table in Spark can cause a memory issue in the driver, because the per-column stats are kept for every data file. For example:
*select * from iceberg_table limit 10;*

I also created https://github.com/apache/iceberg/issues/5706 with more details. Is there any reason not to drop the stats <https://github.com/apache/iceberg/blob/apache-iceberg-0.13.1/core/src/main/java/org/apache/iceberg/ManifestReader.java#L292> when the projected columns contain ALL_COLUMNS (*)?

Thanks,
Manu
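P.S. To illustrate what I mean, here is a rough, self-contained sketch. The class and field names below are made up for the example (the real code would go through the existing copy-without-stats path on the file objects in ManifestReader); the point is just that once the projection covers all columns, the per-column stats maps are no longer needed for pruning and could be dropped before entries are retained on the driver:

```java
import java.util.HashMap;
import java.util.Map;

public class DropStatsSketch {
    // Simplified stand-in for a data file entry; the real Iceberg type
    // carries several per-column maps (value counts, null counts, bounds).
    static class FileEntry {
        final String path;
        final Map<Integer, Long> valueCounts;   // per-column value counts
        final Map<Integer, byte[]> lowerBounds; // per-column lower bounds

        FileEntry(String path, Map<Integer, Long> valueCounts,
                  Map<Integer, byte[]> lowerBounds) {
            this.path = path;
            this.valueCounts = valueCounts;
            this.lowerBounds = lowerBounds;
        }

        // Copy that keeps only what the scan needs, discarding the stats maps.
        FileEntry copyWithoutStats() {
            return new FileEntry(path, null, null);
        }
    }

    // When the projection is "all columns" (select *), the stats are not
    // used for further pruning, so drop them instead of holding them in
    // driver memory for every file.
    static FileEntry maybeDropStats(FileEntry entry, boolean projectsAllColumns) {
        return projectsAllColumns ? entry.copyWithoutStats() : entry;
    }

    public static void main(String[] args) {
        Map<Integer, Long> counts = new HashMap<>();
        counts.put(1, 1000L);
        FileEntry full = new FileEntry("s3://bucket/file.parquet", counts, new HashMap<>());
        FileEntry slim = maybeDropStats(full, true);
        System.out.println(slim.valueCounts == null);
        System.out.println(slim.path);
    }
}
```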