Carlo Jelmini created OAK-10263:
-----------------------------------
Summary: Inconsistent state in TarWriter when close() fails to write to Azure
Key: OAK-10263
URL: https://issues.apache.org/jira/browse/OAK-10263
Project: Jackrabbit Oak
Issue Type: Bug
Components: segment-azure, segment-tar
Reporter: Carlo Jelmini
When using AzurePersistence as the backend, a TarWriter can end up in an
inconsistent state.
The root cause of this inconsistent state is the following sequence:
* TarFiles writes a new segment
* The current tar archive is now too large and a new tar archive needs to be
created
* However, the TarWriter associated with the current archive first needs to be
closed
* In TarWriter#close(), the {{closed}} flag is set first; only then does the
method finalize the archive by writing the binary references file and the
graph file
* The write to Azure storage fails because of a timeout: "The client could not
finish the operation within specified maximum execution timeout"
* The TarWriter is now considered closed (the {{closed}} flag was already
set), but because the exception is effectively swallowed by
FileStore#tryFlush(), the same TarWriter remains in use and fails every read
and write operation from that point forward. The failed reads cause calling
code to interpret the exception as a SegmentNotFoundException (SNFE).
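The sequence above boils down to a flag being flipped before the operation it
guards has succeeded. A minimal, hypothetical sketch of that pattern (the
class, method, and exception names are illustrative, not the actual Oak
implementation):

```java
import java.io.IOException;

// Sketch of the bug pattern described in this issue: close() marks the
// writer closed BEFORE the remote writes succeed, so a timeout leaves
// the object rejecting all further operations.
class SketchTarWriter {
    private boolean closed = false;

    // Simulates writing the binary references / graph files to remote
    // storage; may time out (e.g. against Azure).
    private void writeIndexFiles(boolean failRemoteWrite) throws IOException {
        if (failRemoteWrite) {
            throw new IOException("The client could not finish the operation "
                    + "within specified maximum execution timeout");
        }
    }

    // Buggy ordering: state changes first, fallible write second.
    void close(boolean failRemoteWrite) throws IOException {
        closed = true;                    // flag set unconditionally...
        writeIndexFiles(failRemoteWrite); // ...then the write may fail
    }

    byte[] readEntry() throws IOException {
        if (closed) {
            // Callers see this on every read; in Oak the analogous
            // failure surfaces as a SegmentNotFoundException.
            throw new IOException("already closed");
        }
        return new byte[0];
    }

    public static void main(String[] args) {
        SketchTarWriter writer = new SketchTarWriter();
        try {
            writer.close(true); // remote write times out
        } catch (IOException e) {
            // Mirrors FileStore#tryFlush() swallowing the exception:
            // the writer stays in use despite the failed close.
        }
        try {
            writer.readEntry();
            System.out.println("read ok");
        } catch (IOException e) {
            System.out.println("read fails: " + e.getMessage());
        }
    }
}
```

Running the sketch prints "read fails: already closed": the writer is
unusable even though the archive was never finalized, which is the
inconsistent state this issue describes.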