GitHub user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17591#discussion_r110642316
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileStatusCache.scala ---
    @@ -94,13 +94,25 @@ private class SharedInMemoryCache(maxSizeInBytes: Long) extends Logging {
       // Opaque object that uniquely identifies a shared cache user
       private type ClientId = Object
     
     +  /* [[Weigher]].weigh returns an Int, so the cache could otherwise only
     +   * weigh objects < 2GB. Instead, the weight is divided by this factor
     +   * (which is smaller than the size of one [[FileStatus]]), so objects
     +   * up to 64GB in size are supported.
     +   */
    +  private val weightScale = 32
    +
       private val warnedAboutEviction = new AtomicBoolean(false)
     
      // we use a composite cache key in order to distinguish entries inserted by different clients
      private val cache: Cache[(ClientId, Path), Array[FileStatus]] = CacheBuilder.newBuilder()
        .weigher(new Weigher[(ClientId, Path), Array[FileStatus]] {
          override def weigh(key: (ClientId, Path), value: Array[FileStatus]): Int = {
     -        (SizeEstimator.estimate(key) + SizeEstimator.estimate(value)).toInt
     +        val estimate = (SizeEstimator.estimate(key) + SizeEstimator.estimate(value)) / weightScale
    +        if (estimate > Int.MaxValue) {
    +          throw new IllegalStateException(
    --- End diff --
    
    Agree, though this can only happen if filesourcePartitionFileCacheSize is at least 64GB and some object is at least 64GB. The effect of capping the weight instead would be to possibly cache entries longer than they should be, which seems better than failing.
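    For illustration, here is a minimal sketch of the capping alternative using Guava's CacheBuilder and Weigher. The estimateSize helper, the String key and Array[String] value types, and the 10GB budget are hypothetical stand-ins for this sketch; Spark itself uses SizeEstimator.estimate with a (ClientId, Path) key.

        import com.google.common.cache.{Cache, CacheBuilder, Weigher}

        object ScaledWeigherSketch {
          // Dividing every weight by this factor lets the Int-valued weigher
          // cover up to Int.MaxValue * 32, i.e. roughly 64GB.
          private val weightScale = 32

          // Hypothetical stand-in for SizeEstimator.estimate.
          private def estimateSize(o: AnyRef): Long = 64L

          // Hypothetical cache budget of 10GB, scaled down to fit the weigher.
          private val maxSizeInBytes: Long = 10L * 1024 * 1024 * 1024

          private val weigher = new Weigher[String, Array[String]] {
            override def weigh(key: String, value: Array[String]): Int = {
              val estimate = (estimateSize(key) + estimateSize(value)) / weightScale
              // Cap at Int.MaxValue instead of throwing: an oversized entry
              // may be evicted later than it should be, but the cache keeps
              // working, which seems preferable to failing.
              math.min(estimate, Int.MaxValue.toLong).toInt
            }
          }

          val cache: Cache[String, Array[String]] = CacheBuilder.newBuilder()
            .weigher(weigher)
            .maximumWeight(maxSizeInBytes / weightScale)
            .build[String, Array[String]]()
        }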

