Genmao Yu created SPARK-31913:
---------------------------------

             Summary: StackOverflowError in FileScanRDD
                 Key: SPARK-31913
                 URL: https://issues.apache.org/jira/browse/SPARK-31913
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.4.5, 3.0.0
            Reporter: Genmao Yu


Reading from FileScanRDD may fail with a StackOverflowError in my environment:
- There are a large number of empty files in the table partition.
- `spark.sql.files.maxPartitionBytes` is set to a large value: 1024MB

With a large `spark.sql.files.maxPartitionBytes`, many empty files get packed into a single file partition, and FileScanRDD skips past each empty file with a recursive call (`hasNext` -> `nextIterator` -> `hasNext`, as the trace below shows), so the stack depth grows with the number of consecutive empty files.

A quick workaround is to set `spark.sql.files.maxPartitionBytes` to a smaller value, like the default 128MB.
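
For reference, a minimal sketch of applying the workaround, assuming a SparkSession named `spark` (the value is the 128MB default expressed in bytes):

{code:scala}
// Workaround: shrink the max partition size so fewer (empty) files are
// packed into a single FilePartition, keeping the recursion depth small.
spark.conf.set("spark.sql.files.maxPartitionBytes", 128L * 1024 * 1024) // 134217728

// Equivalently, at submit time:
//   spark-submit --conf spark.sql.files.maxPartitionBytes=134217728 ...
{code}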

A better fix is to resolve the recursive calls in FileScanRDD so it advances over empty files iteratively (see the sketch after the stack trace below).

{code}
java.lang.StackOverflowError
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.getSubject(Subject.java:297)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:648)
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2828)
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2818)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2684)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
        at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
        at org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:640)
        at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:148)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:143)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:326)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
{code}
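
For illustration, here is a minimal sketch of the iterator pattern behind this trace, and how the mutual recursion could be unrolled into a loop. This is NOT the actual FileScanRDD code; all names (`FilesIterator`, `files`, `current`) are illustrative:

{code:scala}
// Minimal sketch, NOT Spark's actual FileScanRDD code: `files` yields one
// lazily-opened row iterator per file, standing in for readCurrentFile().
class FilesIterator[T](files: Iterator[() => Iterator[T]]) extends Iterator[T] {
  private var current: Iterator[T] = Iterator.empty

  // Recursive shape (what overflows): one round of stack frames per empty file.
  //   def hasNext: Boolean = current.hasNext || nextIterator()
  //   def nextIterator(): Boolean =
  //     files.hasNext && { current = files.next()(); hasNext }

  // Iterative shape: a run of N empty files costs N loop iterations, O(1) stack.
  override def hasNext: Boolean = {
    while (!current.hasNext && files.hasNext) {
      current = files.next()() // open the next file; it may itself be empty
    }
    current.hasNext
  }

  override def next(): T = {
    if (!hasNext) throw new NoSuchElementException("no more files")
    current.next()
  }
}
{code}

With this shape, a partition that packs thousands of empty files costs a loop iteration per file instead of a stack frame per file.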


