[jira] [Commented] (SPARK-31913) StackOverflowError in FileScanRDD

2020-10-07 Thread Takeshi Yamamuro (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209946#comment-17209946
 ] 

Takeshi Yamamuro commented on SPARK-31913:
--

Since this issue looks environment-dependent and the PR was automatically 
closed, I will close this.

> StackOverflowError in FileScanRDD
> -
>
> Key: SPARK-31913
> URL: https://issues.apache.org/jira/browse/SPARK-31913
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.5, 3.0.0
>Reporter: Genmao Yu
>Priority: Minor
>
> Reading from FileScanRDD may fail with a StackOverflowError in my 
> environment when:
> - there is a large number of empty files in a table partition, and
> - `spark.sql.files.maxPartitionBytes` is set to a large value, e.g. 1024MB.
> A quick workaround is to set `spark.sql.files.maxPartitionBytes` to a small 
> value, such as the default 128MB.
> A better fix is to resolve the recursive calls in FileScanRDD.
> {code}
> java.lang.StackOverflowError
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.getSubject(Subject.java:297)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:648)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2828)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2818)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2684)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>   at 
> org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
>   at 
> org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:640)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:148)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:143)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:326)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
> {code}
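The deepest frames in the trace alternate between FileScanRDD's hasNext and nextIterator, one round trip per file, so a long run of empty files grows the stack linearly. A minimal, Spark-free sketch of that shape and of an iterative rewrite (the class and method names here are hypothetical illustrations, not Spark's actual API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for FileScanRDD's per-file iteration.
// Each inner List models the rows of one file; an empty list is an empty file.
public class EmptyFileScan {
    private final Iterator<List<Integer>> files;
    private Iterator<Integer> current = Collections.emptyIterator();

    public EmptyFileScan(Iterator<List<Integer>> files) {
        this.files = files;
    }

    // Recursive shape, mirroring the nextIterator -> hasNext -> nextIterator
    // cycle in the trace: every consecutive empty file adds one stack frame,
    // so enough of them raises StackOverflowError.
    public boolean hasNextRecursive() {
        if (current.hasNext()) return true;
        if (!files.hasNext()) return false;
        current = files.next().iterator(); // advance to the next file
        return hasNextRecursive();         // one frame per empty file
    }

    // Iterative rewrite: skip empty files in a loop at constant stack depth.
    public boolean hasNextIterative() {
        while (!current.hasNext()) {
            if (!files.hasNext()) return false;
            current = files.next().iterator();
        }
        return true;
    }

    public int next() {
        return current.next();
    }

    public static void main(String[] args) {
        // 500,000 empty files followed by one file with a single row.
        List<List<Integer>> fs = new ArrayList<>();
        for (int i = 0; i < 500_000; i++) fs.add(Collections.emptyList());
        fs.add(Arrays.asList(42));
        EmptyFileScan scan = new EmptyFileScan(fs.iterator());
        // The iterative form walks past all empty files without deep recursion.
        System.out.println(scan.hasNextIterative() && scan.next() == 42);
    }
}
```

This also shows why shrinking `spark.sql.files.maxPartitionBytes` works around the issue: a smaller partition byte limit packs fewer files into each task's partition, so each task recurses over a shorter run of empty files.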



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31913) StackOverflowError in FileScanRDD

2020-06-05 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17126664#comment-17126664
 ] 

Apache Spark commented on SPARK-31913:
--

User 'uncleGen' has created a pull request for this issue:
https://github.com/apache/spark/pull/28737
