manuzhang opened a new issue, #5706:
URL: https://github.com/apache/iceberg/issues/5706

   ### Apache Iceberg version
   
   0.13.1
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
We recently migrated a Spark Parquet table to an Iceberg table. With the same 
query scanning all columns and returning 10 rows, the Spark job after migration 
failed because the driver exceeded the GC overhead limit.
   
   ```sql
   select * from iceberg_table limit 10;
   ```
   
The driver memory is set to 4g. After dumping the driver heap, I found that most 
of it was taken up by the `lowerBounds` maps of `GenericDataFile` instances. 
   
   That led me to 
https://github.com/apache/iceberg/blob/apache-iceberg-0.13.1/core/src/main/java/org/apache/iceberg/ManifestReader.java#L292,
 where stats are kept when scanning all columns.
   
   ```java
     static boolean dropStats(Expression rowFilter, Collection<String> columns) {
       // Make sure we only drop all stats if we had projected all stats
       // We do not drop stats even if we had partially added some stats columns, except for record_count column.
       // Since we don't want to keep stats map which could be huge in size just because we select record_count, which
       // is a primitive type.
       if (rowFilter != Expressions.alwaysTrue() && columns != null &&
           !columns.containsAll(ManifestReader.ALL_COLUMNS)) {
         Set<String> intersection = Sets.intersection(Sets.newHashSet(columns), STATS_COLUMNS);
         return intersection.isEmpty() || intersection.equals(Sets.newHashSet("record_count"));
       }
       return false;
     }
   ```
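To make the short-circuit concrete, here is a minimal, self-contained sketch that mirrors the boolean structure of the quoted `dropStats`. It is an illustration only, not the Iceberg implementation: `java.util` sets stand in for Guava's `Sets`, the `rowFilterIsAlwaysTrue` flag stands in for the `Expressions.alwaysTrue()` comparison, and the `ALL_COLUMNS`/`STATS_COLUMNS` values are assumed stand-ins for the real constants in `ManifestReader`.

   ```java
   import java.util.Arrays;
   import java.util.Collection;
   import java.util.HashSet;
   import java.util.Set;

   public class DropStatsDemo {
     // Assumed stand-ins for the ManifestReader constants.
     static final Set<String> ALL_COLUMNS = new HashSet<>(Arrays.asList("*"));
     static final Set<String> STATS_COLUMNS = new HashSet<>(Arrays.asList(
         "value_counts", "null_value_counts", "nan_value_counts",
         "lower_bounds", "upper_bounds", "record_count"));

     // Mirrors the boolean structure of the quoted dropStats.
     static boolean dropStats(boolean rowFilterIsAlwaysTrue, Collection<String> columns) {
       if (!rowFilterIsAlwaysTrue && columns != null && !columns.containsAll(ALL_COLUMNS)) {
         Set<String> intersection = new HashSet<>(columns);
         intersection.retainAll(STATS_COLUMNS);
         return intersection.isEmpty()
             || intersection.equals(new HashSet<>(Arrays.asList("record_count")));
       }
       return false;
     }

     public static void main(String[] args) {
       // Projecting all columns ("*") fails the !containsAll check, so stats
       // are kept even though a `select * ... limit 10` never uses them.
       System.out.println(dropStats(false, ALL_COLUMNS));                              // false: stats kept
       // Projecting a subset with no stats columns lets the stats maps be dropped.
       System.out.println(dropStats(false, new HashSet<>(Arrays.asList("file_path")))); // true: stats dropped
     }
   }
   ```

This shows why the heap fills with `lowerBounds` maps: for a full-column projection the `!columns.containsAll(ALL_COLUMNS)` conjunct is false, so the method returns `false` and every `GenericDataFile` retains its stats maps on the driver.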
   
The issue goes away either by removing 
`!columns.containsAll(ManifestReader.ALL_COLUMNS)` from the condition or by not 
scanning all columns.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
