Github user dhruve commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21601#discussion_r199602993
  
    --- Diff: core/src/main/scala/org/apache/spark/input/WholeTextFileInputFormat.scala ---
    @@ -53,6 +53,19 @@ private[spark] class WholeTextFileInputFormat
         val totalLen = files.map(file => if (file.isDirectory) 0L else file.getLen).sum
         val maxSplitSize = Math.ceil(totalLen * 1.0 /
           (if (minPartitions == 0) 1 else minPartitions)).toLong
    +
    +    // For small files we need to ensure the min split size per node & rack <= maxSplitSize
    +    val config = context.getConfiguration
    +    val minSplitSizePerNode = config.getLong(CombineFileInputFormat.SPLIT_MINSIZE_PERNODE, 0L)
    +    val minSplitSizePerRack = config.getLong(CombineFileInputFormat.SPLIT_MINSIZE_PERRACK, 0L)
    +
    +    if (maxSplitSize < minSplitSizePerNode) {
    +      super.setMinSplitSizeNode(maxSplitSize)
    --- End diff --
    
    Also, if a user specifies these via the configs, we are ensuring that they don't break the code. Setting them to `0L` when a user has specified them would break the code anyway, because of the way `CombineFileInputFormat` works: it checks whether the programmatic setting is `0L`, and if it is, it picks up the value from the config instead.
    https://github.com/apache/hadoop/blob/release-2.8.2-RC0/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java#L182
    So we would have to at least set the config to avoid hitting the error.
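
    For reference, a minimal Scala sketch of that fallback (a paraphrase of the
    linked Hadoop Java code, not the actual source; the object and method names
    here are hypothetical, used only for illustration):

    ```scala
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat

    object SplitSizeFallback {
      // A value set programmatically via setMinSplitSizeNode() takes
      // precedence; only when it is 0 does Hadoop fall back to the config
      // key. Zeroing the setter therefore does not suppress a user-supplied
      // config value.
      def resolveMinSizeNode(conf: Configuration, minSplitSizeNode: Long): Long =
        if (minSplitSizeNode != 0) minSplitSizeNode
        else conf.getLong(CombineFileInputFormat.SPLIT_MINSIZE_PERNODE, 0L)
    }
    ```

    Since `getSplits` still validates the resolved minimum against the max
    split size, clamping the programmatic value to `maxSplitSize` (as the diff
    does) is what actually avoids the error.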


---
