[
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733526#comment-17733526
]
Chris Bevard commented on HADOOP-18706:
---------------------------------------
I see where I went wrong. It was a big oversight: bouncing back and forth
between Accumulo and Hadoop, I assumed both SNAPSHOT versions were on Java 11,
since that's the version I was building and testing with. Hadoop is still on
Java 8, and the automatic truncation in java.io.File.createTempFile() was only
added in Java 9.
I've added validation code similar to what newer versions of Java implement. I
had to assume the 255-character file name limit for all systems, and that the
random section of the name is always its maximum possible length of 19
characters. Can I reopen this pull request with the added validation?
[Java 8 generateFile() src (no validation)|https://github.com/bpupadhyaya/openjdk-8/blob/45af329463a45955ea2759b89cb0ebfe40570c3f/jdk/src/share/classes/java/io/File.java#L1902]
[Java 9 generateFile() src (validation)|https://github.com/AdoptOpenJDK/openjdk-jdk9/blob/f00b63d24697cce8067f468fe6cd8510374a46f5/jdk/src/java.base/share/classes/java/io/File.java#LL1925C8-L1925C8]
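To make that concrete, here's a minimal sketch of the kind of validation I
mean, assuming the 255-character name limit and the 19-character random
section noted above (the class and method names are illustrative, not the
patch itself):
{code:java}
import java.io.File;
import java.io.IOException;

// Illustrative sketch only: clamp the prefix so that
// prefix + random section + suffix fits in a 255-character file name,
// roughly what Java 9's File.createTempFile() does internally.
public final class TempFileNames {

  // Assumed limits: 255 chars is the common file-name ceiling, and
  // 19 chars is the longest decimal rendering of a long, i.e. the
  // random section createTempFile() inserts between prefix and suffix.
  private static final int MAX_NAME_LENGTH = 255;
  private static final int RANDOM_SECTION_LENGTH = 19;

  static String shortenPrefix(String prefix, String suffix) {
    int excess = prefix.length() + RANDOM_SECTION_LENGTH
        + suffix.length() - MAX_NAME_LENGTH;
    if (excess <= 0) {
      return prefix;                       // already fits
    }
    // Trim from the end of the prefix, keeping at least the 3 chars
    // that createTempFile() requires.
    return prefix.substring(0, Math.max(3, prefix.length() - excess));
  }

  static File createTempFile(String prefix, String suffix, File dir)
      throws IOException {
    return File.createTempFile(shortenPrefix(prefix, suffix), suffix, dir);
  }
}
{code}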
> Improve S3ABlockOutputStream recovery
> -------------------------------------
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Chris Bevard
> Assignee: Chris Bevard
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.4.0
>
>
> If an application crashes during an S3ABlockOutputStream upload, it's
> possible to complete the upload if fast.upload.buffer is set to disk by
> uploading the s3ablock file with putObject as the final part of the multipart
> upload. If the application has multiple uploads running in parallel, though,
> and they're on the same part number when the application fails, then there is
> no way to determine which file belongs to which object, and recovery of
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every
> partial upload would be recoverable.
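> For reference, disk buffering is selected with the standard S3A option
> below (the property name is real; the snippet is just for orientation):
> {code:xml}
> <property>
>   <name>fs.s3a.fast.upload.buffer</name>
>   <value>disk</value>
> </property>
> {code}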
> h3. Important disclaimer
> This change does not directly add the Syncable semantics that some
> applications require, namely that {{Syncable.hsync()}} only return after all
> pending data has been durably written to the destination path. S3 is not a
> filesystem, and this change does not make it so.
> What it does do is assist anyone trying to implement a post-crash recovery
> process (sketched in code after this list) which
> # interrogates s3 to identify pending uploads to a specific path and gets a
> list of uploaded blocks yet to be committed
> # scans the local fs.s3a.buffer dir directories to identify in-progress-write
> blocks for the same target destination, that is, those which were being
> uploaded, queued for upload, and the single "new data being written to" block
> for an output stream
> # uploads all those pending blocks
> # generates a new POST to complete a multipart upload with all the blocks in
> the correct order
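> A minimal sketch of such a recovery process, written against the AWS SDK for
> Java v1 (bucket, key and pendingBlocks are assumptions for illustration, not
> part of this patch):
> {code:java}
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
> import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
> import com.amazonaws.services.s3.model.ListPartsRequest;
> import com.amazonaws.services.s3.model.PartETag;
> import com.amazonaws.services.s3.model.PartSummary;
> import com.amazonaws.services.s3.model.UploadPartRequest;
> import java.io.File;
> import java.util.ArrayList;
> import java.util.List;
>
> public final class S3aUploadRecovery {
>   // Illustrative only: follows steps 1-4 above for one destination key.
>   static void recoverUpload(AmazonS3 s3, String bucket, String key,
>       List<File> pendingBlocks) {
>     // 1. find the interrupted multipart upload for the target key
>     String uploadId = s3.listMultipartUploads(
>         new ListMultipartUploadsRequest(bucket).withPrefix(key))
>         .getMultipartUploads().get(0).getUploadId();
>
>     // ...and the parts that were committed before the crash
>     List<PartETag> parts = new ArrayList<>();
>     for (PartSummary p : s3.listParts(
>         new ListPartsRequest(bucket, key, uploadId)).getParts()) {
>       parts.add(new PartETag(p.getPartNumber(), p.getETag()));
>     }
>
>     // 2 + 3. upload the buffered blocks that never reached S3; the
>     // block ID in each buffer file name gives the part number, so
>     // the parts stay in the correct order
>     int partNumber = parts.size() + 1;
>     for (File block : pendingBlocks) {
>       parts.add(s3.uploadPart(new UploadPartRequest()
>           .withBucketName(bucket).withKey(key)
>           .withUploadId(uploadId)
>           .withPartNumber(partNumber++)
>           .withFile(block)).getPartETag());
>     }
>
>     // 4. the completing POST, with every part in order
>     s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
>         bucket, key, uploadId, parts));
>   }
> }
> {code}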
> All this patch does is ensure the buffered block filenames include the final
> path and block ID, to aid in identifying which blocks need to be uploaded and
> in what order.
> h2. warning
> causes HADOOP-18744 - always include the relevant fix when backporting