[ https://issues.apache.org/jira/browse/HADOOP-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17578631#comment-17578631 ]

ASF GitHub Bot commented on HADOOP-18402:
-----------------------------------------

steveloughran opened a new pull request, #4735:
URL: https://github.com/apache/hadoop/pull/4735

   
   jobId.toString() is now only called when the ID isn't null (see the sketch below).
   
   This doesn't surface in MR, but Spark seems to manage to trigger it.
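   
   For illustration, a minimal sketch of that null guard, assuming a `jobIdString` field and a placeholder fallback; the class and member names here are illustrative, not the actual patch:
   
   ```java
   // Sketch only: guard the toString() call so a null job ID (as Spark's
   // abortJob path can pass) no longer raises a NullPointerException.
   import org.apache.hadoop.mapreduce.JobID;
   
   class NullSafeJobId {
   
     private final String jobIdString;
   
     NullSafeJobId(JobID jobId) {
       // Only call toString() when the ID is non-null; otherwise fall back
       // to a placeholder so logging and auditing still have a value.
       this.jobIdString = (jobId != null) ? jobId.toString() : "(unknown job)";
     }
   
     String jobIdString() {
       return jobIdString;
     }
   }
   ```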
   
   
   ### How was this patch tested?
   
   Through my downstream runs of the Spark integration tests.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
   
   




> S3A committer NPE in spark job abort
> ------------------------------------
>
>                 Key: HADOOP-18402
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18402
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.9
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>
> NPE happening in spark {{HadoopMapReduceCommitProtocol.abortJob}} when jobID is null
> {code}
> - save()/findClass() - non-partitioned table - Overwrite *** FAILED ***
>   java.lang.NullPointerException:
>   at org.apache.hadoop.fs.s3a.commit.impl.CommitContext.<init>(CommitContext.java:159)
>   at org.apache.hadoop.fs.s3a.commit.impl.CommitOperations.createCommitContext(CommitOperations.java:652)
>   at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.initiateJobOperation(AbstractS3ACommitter.java:856)
>   at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortJob(AbstractS3ACommitter.java:909)
>   at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.abortJob(HadoopMapReduceCommitProtocol.scala:252)
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:268)
>   at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:191)
>   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
>   at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
>   at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
>   ...
> {code}


