[ https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16967947#comment-16967947 ]

Gautam Pulla commented on SPARK-26570:
--------------------------------------

I'm hitting a similar issue, but at a more specific line of code; perhaps the
following information will help.

When enumerating a large number of files (2-3 million) in S3, we see an OOM at
the line of code below, which concatenates all file paths into a single string
in order to log them. With 2-3 million paths of roughly 150 characters each,
the string would be around 400 million characters, and since the JVM stores
strings with two bytes per character, that is close to 1 GB of memory just to
build the log message. With fewer files - say 400k - the concatenation and
logging succeed, but it is still an annoyance, as the output log contains a
huge string listing every enumerated path.
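A back-of-the-envelope calculation illustrates why this single log line
approaches a gigabyte. This sketch assumes the pre-Java-9 two-bytes-per-char
UTF-16 string representation; the path count and length are the rough figures
from our workload, not measured values:

```java
public class LogStringEstimate {
    public static void main(String[] args) {
        long numPaths = 3_000_000L;   // approximate number of S3 paths listed
        long charsPerPath = 150L;     // approximate path length
        long separatorChars = 2L;     // ", " inserted between paths by mkString

        long totalChars = numPaths * (charsPerPath + separatorChars);
        // Pre-compact-strings JVMs store strings as UTF-16: 2 bytes per char.
        long bytes = totalChars * 2L;

        System.out.println("characters: " + totalChars);
        System.out.println("approx MB:  " + bytes / (1024 * 1024));
    }
}
```

That works out to about 456 million characters, on the order of 870 MB for the
final string alone, before counting the intermediate buffers that
concatenation allocates along the way.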
{code:java}
class InMemoryFileIndex(

...

private[sql] def bulkListLeafFiles(
    paths: Seq[Path],
    hadoopConf: Configuration,
    filter: PathFilter,
    sparkSession: SparkSession): Seq[(Path, Seq[FileStatus])] = {

  // Short-circuits parallel listing when serial listing is likely to be faster.
  if (paths.size <= sparkSession.sessionState.conf.parallelPartitionDiscoveryThreshold) {
    return paths.map { path =>
      (path, listLeafFiles(path, hadoopConf, filter, Some(sparkSession)))
    }
  }

  logInfo(s"Listing leaf files and directories in parallel under: ${paths.mkString(", ")}")  // <<< log line printing all enumerated paths
{code}
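One possible mitigation, sketched here in Java for illustration (this is not
Spark's actual fix, and the `summarize` helper is hypothetical), is to log
only the path count and a small sample instead of the full `mkString` result:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class TruncatedLogging {
    // Hypothetical helper: summarize a path list for logging without
    // building a string proportional to the number of paths.
    static String summarize(List<String> paths, int sampleSize) {
        String sample = paths.stream()
            .limit(sampleSize)
            .collect(Collectors.joining(", "));
        int remaining = Math.max(0, paths.size() - sampleSize);
        return remaining == 0
            ? sample
            : sample + ", ... (" + remaining + " more paths)";
    }

    public static void main(String[] args) {
        List<String> paths = LongStream.range(0, 100)
            .mapToObj(i -> "s3://bucket/part-" + i)
            .collect(Collectors.toList());
        // The log message now stays bounded regardless of how many paths
        // were enumerated.
        System.out.println("Listing leaf files and directories in parallel under: "
            + summarize(paths, 3));
    }
}
```

The memory cost of the message then depends only on the sample size, not on
the number of enumerated files.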

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> ------------------------------------------------------
>
>                 Key: SPARK-26570
>                 URL: https://issues.apache.org/jira/browse/SPARK-26570
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.2
>            Reporter: deshanxiao
>            Priority: Major
>         Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> image-2019-10-14-10-50-47-567.png, image-2019-10-14-10-51-28-374.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every
> query, which may cause an OOM on the driver. I hit this problem with Spark
> 2.3.2; the latest version may have the same problem.



