[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557271#comment-17557271 ]

Apache Spark commented on SPARK-39519:
--------------------------------------

User 'LuciferYang' has created a pull request for this issue:
https://github.com/apache/spark/pull/36954

> Test failure in SPARK-39387 with JDK 11
> ---------------------------------------
>
>                 Key: SPARK-39519
>                 URL: https://issues.apache.org/jira/browse/SPARK-39519
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.4.0
>            Reporter: Hyukjin Kwon
>            Assignee: Yang Jie
>            Priority: Major
>         Attachments: image-2022-06-21-21-25-35-951.png, image-2022-06-21-21-26-06-586.png, image-2022-06-21-21-26-26-563.png, image-2022-06-21-21-26-38-146.png
>
> {code}
> [info] - SPARK-39387: BytesColumnVector should not throw RuntimeException due to overflow *** FAILED *** (3 seconds, 393 milliseconds)
> [info]   org.apache.spark.SparkException: Job aborted.
> [info]   at org.apache.spark.sql.errors.QueryExecutionErrors$.jobAbortedError(QueryExecutionErrors.scala:593)
> [info]   at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:279)
> [info]   at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
> [info]   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
> [info]   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
> [info]   at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
> [info]   at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:171)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
> [info]   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
> [info]   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
> [info]   at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
> [info]   at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
> [info]   at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
> [info]   at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
> [info]   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
> {code}
> https://github.com/apache/spark/runs/6919076419?check_suite_focus=true

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557229#comment-17557229 ]

Yang Jie commented on SPARK-39519:
----------------------------------

The default -XX:NewRatio is 2; changing it to 3 for the sql/core module, to enlarge the old generation, may be enough. I'm testing it.
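For context, -XX:NewRatio sets the ratio of old generation to young generation, so with a fixed heap the old generation's share is ratio / (ratio + 1). A back-of-envelope sketch of the effect of the proposed change (the 4 GiB heap size here is illustrative, not the actual CI setting):

```java
// Back-of-envelope arithmetic for -XX:NewRatio: the flag sets the
// old:young generation size ratio, so the old generation's share of a
// fixed heap is ratio / (ratio + 1). The 4 GiB heap is illustrative only.
public class NewRatioMath {
    public static long oldGenBytes(long heapBytes, int newRatio) {
        return heapBytes * newRatio / (newRatio + 1);
    }

    public static void main(String[] args) {
        long heap = 4L * 1024 * 1024 * 1024; // 4 GiB
        System.out.println(oldGenBytes(heap, 2)); // default 2  -> ~2/3 of heap
        System.out.println(oldGenBytes(heap, 3)); // proposed 3 -> ~3/4 of heap
    }
}
```

So, all else being equal, moving from NewRatio=2 to NewRatio=3 shifts roughly heap/12 (about 341 MiB on a 4 GiB heap) from the young generation to the old generation.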
[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557155#comment-17557155 ]

Hyukjin Kwon commented on SPARK-39519:
--------------------------------------

Thanks for your investigation.
[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556911#comment-17556911 ]

Yang Jie commented on SPARK-39519:
----------------------------------

I will continue to investigate this issue.
[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556910#comment-17556910 ]

Yang Jie commented on SPARK-39519:
----------------------------------

[~hyukjin.kwon] Sorry, I think we should reopen this issue. In the memory dump below, I found that `byte[]` occupies the most memory, and its content is 'X'. Given that, the most suspicious test is still `SPARK-39387: BytesColumnVector should not throw RuntimeException due to overflow`.

!image-2022-06-21-21-26-06-586.png!
!image-2022-06-21-21-26-26-563.png!
!image-2022-06-21-21-26-38-146.png!
[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556896#comment-17556896 ]

Yang Jie commented on SPARK-39519:
----------------------------------

I got an OOM heap dump and will analyze it later.
[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556295#comment-17556295 ]

Hyukjin Kwon commented on SPARK-39519:
--------------------------------------

Thanks for investigating this issue. Let me leave it closed for now.
[jira] [Commented] (SPARK-39519) Test failure in SPARK-39387 with JDK 11
[ https://issues.apache.org/jira/browse/SPARK-39519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556271#comment-17556271 ]

Yang Jie commented on SPARK-39519:
----------------------------------

The failure of `SPARK-39387: BytesColumnVector should not throw RuntimeException due to overflow` seems to be due to an OOM in the previous suites:

{code:java}
2022-06-16T14:30:19.8285352Z Caused by: java.lang.OutOfMemoryError: Java heap space
2022-06-16T14:30:19.8285963Z    at org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.allocateBuffer(BytesColumnVector.java:300)
2022-06-16T14:30:19.8286885Z    at org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.ensureValPreallocated(BytesColumnVector.java:218)
2022-06-16T14:30:19.8287675Z    at org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setVal(BytesColumnVector.java:182)
2022-06-16T14:30:19.8288377Z    at org.apache.orc.mapred.OrcMapredRecordWriter.setBinaryValue(OrcMapredRecordWriter.java:87)
2022-06-16T14:30:19.8289257Z    at org.apache.orc.mapred.OrcMapredRecordWriter.setColumn(OrcMapredRecordWriter.java:235)
2022-06-16T14:30:19.8289956Z    at org.apache.orc.mapred.OrcMapredRecordWriter.setStructValue(OrcMapredRecordWriter.java:133)
2022-06-16T14:30:19.8290654Z    at org.apache.orc.mapred.OrcMapredRecordWriter.setColumn(OrcMapredRecordWriter.java:248)
2022-06-16T14:30:19.8291438Z    at org.apache.orc.mapred.OrcMapredRecordWriter.setListValue(OrcMapredRecordWriter.java:162)
2022-06-16T14:30:19.8292127Z    at org.apache.orc.mapred.OrcMapredRecordWriter.setColumn(OrcMapredRecordWriter.java:256)
2022-06-16T14:30:19.8292824Z    at org.apache.orc.mapreduce.OrcMapreduceRecordWriter.write(OrcMapreduceRecordWriter.java:73)
2022-06-16T14:30:19.8293554Z    at org.apache.spark.sql.execution.datasources.orc.OrcOutputWriter.write(OrcOutputWriter.scala:56)
2022-06-16T14:30:19.8294523Z    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:175)
2022-06-16T14:30:19.8295436Z    at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithMetrics(FileFormatDataWriter.scala:85)
2022-06-16T14:30:19.8296370Z    at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:92)
2022-06-16T14:30:19.8297324Z    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:332)
2022-06-16T14:30:19.8298066Z    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$$Lambda$3352/0x000801926840.apply(Unknown Source)
2022-06-16T14:30:19.8298712Z    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1524)
2022-06-16T14:30:19.8299367Z    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:339)
2022-06-16T14:30:19.8300191Z    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$21(FileFormatWriter.scala:257)
2022-06-16T14:30:19.8300915Z    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$$Lambda$3335/0x00080190e840.apply(Unknown Source)
2022-06-16T14:30:19.8301540Z    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
2022-06-16T14:30:19.8306947Z    at org.apache.spark.scheduler.Task.run(Task.scala:139)
2022-06-16T14:30:19.8307501Z    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
2022-06-16T14:30:19.8308058Z    at org.apache.spark.executor.Executor$TaskRunner$$Lambda$2815/0x000801734440.apply(Unknown Source)
2022-06-16T14:30:19.8309236Z    ... 5 more
2022-06-16T14:30:19.8519092Z [info] - SPARK-39387: BytesColumnVector should not throw RuntimeException due to overflow *** FAILED *** (3 seconds, 393
{code}
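The allocateBuffer frame at the top of the trace is consistent with geometric buffer growth: when a write needs more room than the current backing array provides, the vector typically doubles its capacity until the value fits. A simplified sketch of why one large value can force a much larger allocation — this is not the real BytesColumnVector code, whose growth policy may differ:

```java
// Simplified sketch of doubling buffer growth, in the spirit of
// BytesColumnVector.allocateBuffer; NOT the real Hive implementation.
public class GrowthSketch {
    public static int grownCapacity(int current, int needed) {
        int cap = Math.max(current, 1);
        while (cap < needed) {
            cap *= 2; // doubling: the final array can far exceed `needed`
        }
        return cap;
    }

    public static void main(String[] args) {
        // A value of ~100 MB grown from a 1 KiB buffer ends up demanding
        // a 128 MiB backing array (1024 doubled 17 times).
        System.out.println(grownCapacity(1024, 100_000_001)); // prints 134217728
    }
}
```

A handful of column vectors grown this way can exhaust a small test heap even though every individual write succeeds, which matches the OutOfMemoryError above rather than the RuntimeException the test itself guards against.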