noslowerdna commented on pull request #31814:
URL: https://github.com/apache/spark/pull/31814#issuecomment-867692191


   In case it might be helpful to anyone, the compilation failure's stack trace looks something like this:
   
   ```
   [main] ERROR org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator - failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 32, Column 84: IDENTIFIER expected instead of '['
   org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 32, Column 84: IDENTIFIER expected instead of '['
        at org.codehaus.janino.TokenStreamImpl.read(TokenStreamImpl.java:196)
        at org.codehaus.janino.Parser.read(Parser.java:3705)
        at org.codehaus.janino.Parser.parseQualifiedIdentifier(Parser.java:446)
        at org.codehaus.janino.Parser.parseReferenceType(Parser.java:2569)
        at org.codehaus.janino.Parser.parseType(Parser.java:2549)
        at org.codehaus.janino.Parser.parseFormalParameter(Parser.java:1688)
        at org.codehaus.janino.Parser.parseFormalParameterList(Parser.java:1639)
        at org.codehaus.janino.Parser.parseFormalParameters(Parser.java:1620)
        at org.codehaus.janino.Parser.parseMethodDeclarationRest(Parser.java:1518)
        at org.codehaus.janino.Parser.parseClassBodyDeclaration(Parser.java:1028)
        at org.codehaus.janino.Parser.parseClassBody(Parser.java:841)
        at org.codehaus.janino.Parser.parseClassDeclarationRest(Parser.java:736)
        at org.codehaus.janino.Parser.parseClassBodyDeclaration(Parser.java:941)
        at org.codehaus.janino.ClassBodyEvaluator.cook(ClassBodyEvaluator.java:234)
        at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:205)
        at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
        at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:1403)
        at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1500)
        at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1497)
        at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
        at org.sparkproject.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
        at org.sparkproject.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
        at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
        at org.sparkproject.guava.cache.LocalCache.get(LocalCache.java:4000)
        at org.sparkproject.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
        at org.sparkproject.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
        at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:1351)
        at org.apache.spark.sql.execution.WholeStageCodegenExec.liftedTree1$1(WholeStageCodegenExec.scala:721)
        at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:720)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
        at org.apache.spark.sql.execution.ProjectExec.doExecute(basicPhysicalOperators.scala:92)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
        at org.apache.spark.sql.execution.SortExec.doExecute(SortExec.scala:112)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:184)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131)
        at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
        at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
        at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:874)

   [main] WARN  org.apache.spark.sql.execution.WholeStageCodegenExec - Whole-stage codegen disabled for plan ...
   ```
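   
   Note the trailing WARN: when compilation of the generated code fails, Spark catches the exception and falls back to interpreted execution for that plan, so the job can still complete, just more slowly. While reproducing, whole-stage codegen can also be turned off explicitly via the public `spark.sql.codegen.wholeStage` conf to confirm a failure is codegen-specific; a minimal sketch follows (the app name, master, and the elided write are placeholders, not taken from this PR):
   
   ```
   import org.apache.spark.sql.SparkSession;
   
   public class CodegenRepro {
       public static void main(String[] args) {
           SparkSession spark = SparkSession.builder()
                   .master("local[*]")        // placeholder master for a local repro
                   .appName("codegen-repro")  // placeholder app name
                   .getOrCreate();
   
           // Disabling whole-stage codegen skips the Janino compilation path
           // entirely; if the job then succeeds, the failure is codegen-specific.
           spark.conf().set("spark.sql.codegen.wholeStage", "false");
   
           // ... run the failing query/write here, e.g. df.write().parquet(path) ...
   
           spark.stop();
       }
   }
   ```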
   
   The referenced line (Line 32) of generated code that doesn't compile, with the offending `[B` parameter type:
   
   ```
   private UTF8String project_subExpr_1(boolean scan_isNull_2, boolean scan_isNull_1, [B scan_value_1, org.apache.spark.unsafe.types.UTF8String scan_value_2) {
   ```
   
   In Spark 3.1.2, where this is fixed, the same line becomes:
   
   ```
   private UTF8String project_subExpr_1(org.apache.spark.unsafe.types.UTF8String scan_value_2, boolean scan_isNull_2, boolean scan_isNull_1, byte[] scan_value_1) {
   ```
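   
   As for how a `[B` can end up in generated source at all: for array classes, Java's `Class.getName()` returns the JVM binary name (`[B` for `byte[]`), while `Class.getCanonicalName()` returns Java source syntax, which is what a source-level compiler like Janino can actually parse. A minimal plain-JDK sketch (illustrative only, not Spark code) showing the difference:
   
   ```
   public class ArrayTypeNames {
       public static void main(String[] args) {
           // Binary names: not valid as types in Java source.
           System.out.println(byte[].class.getName());             // [B
           System.out.println(String[].class.getName());           // [Ljava.lang.String;
   
           // Canonical names: valid Java source syntax.
           System.out.println(byte[].class.getCanonicalName());    // byte[]
           System.out.println(String[].class.getCanonicalName());  // java.lang.String[]
       }
   }
   ```
   
   A code generator that builds parameter declarations from the former kind of name produces exactly the unparseable `[B scan_value_1` seen above; one that emits source-level type names produces the valid `byte[] scan_value_1` of the fixed signature.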

