Github user bersprockets commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21950#discussion_r212719073
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala ---
    @@ -76,4 +78,16 @@ private[sql] object PruneFileSourcePartitions extends Rule[LogicalPlan] {
             op
           }
       }
    +
    +  private def calcPartSize(catalogTable: Option[CatalogTable], sizeInBytes: Long): Long = {
    +    val conf: SQLConf = SQLConf.get
    +    val factor = conf.sizeDeserializationFactor
    +    if (catalogTable.isDefined && factor != 1.0 &&
    +      // TODO: The serde check should be in a utility function, since it is also checked elsewhere
    +      catalogTable.get.storage.serde.exists(s => s.contains("Parquet") || s.contains("Orc"))) {
    --- End diff ---
    
    @mgaido91 Good point. Also, I notice that even when the table's files are 
not compressed (say, a table backed by CSV files), the LongToUnsafeRowMap or 
BytesToBytesMap that backs the relation is roughly 3 times larger than the 
total file size. So even in that best case (uncompressed files), Spark 
underestimates the relation's in-memory size by several multiples.

