[
https://issues.apache.org/jira/browse/JAMES-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rene Cordier updated JAMES-2860:
--------------------------------
Description:
For some reason, the test
readShouldNotReadPartiallyWhenDeletingConcurrentlyBigBlob in DeleteBlobContract
sometimes fails randomly with the Scality S3 object storage implementation.
It seems that it sometimes manages to partially read a blob while that blob is
being deleted concurrently.
The issue doesn't seem to happen with Swift though...
---
So after a meeting with people from Scality, this is what we concluded:
* When a delete operation occurs, the blob is first marked as deleted and only
then actually removed, precisely to prevent another thread from reading it at
the same time. The partial read we observe is therefore a bug.
* This bug was known and seemed to happen with big blobs in old versions. S3
apparently partitions blobs once they grow bigger than 10 MB, which is when the
bug manifested.
* We use an old version of Scality, and the bug has since been fixed in newer
versions.
* However, Scality no longer seems to maintain Docker images on Docker Hub,
although the Dockerfile is available on their GitHub.
Solution: we need to build our own Scality Docker image from the latest
changes (v8) and use it instead of the one we currently use (v6).
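To illustrate the behaviour the first conclusion describes, here is a minimal in-memory Java sketch of the "mark as deleted first, then delete" (tombstone) pattern. This is not James's BlobStore API nor Scality's actual implementation; the class and method names are hypothetical and only show why a reader can never observe a partially deleted blob under this scheme.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a store where delete marks the blob as gone
// before removing it, so concurrent readers fail fast instead of
// observing a half-deleted (partial) blob.
class TombstoneBlobStore {
    private final Map<String, byte[]> blobs = new ConcurrentHashMap<>();
    private final Set<String> tombstones = ConcurrentHashMap.newKeySet();

    void save(String id, byte[] data) {
        blobs.put(id, data);
    }

    // Two-step delete: mark first, then physically remove.
    void delete(String id) {
        tombstones.add(id);     // step 1: mark as deleted
        blobs.remove(id);       // step 2: actual removal
        tombstones.remove(id);  // cleanup once removal is done
    }

    // A read either returns the whole blob or throws; because the
    // tombstone is set before removal starts, a concurrent reader is
    // rejected rather than handed partial content.
    byte[] read(String id) {
        byte[] data = blobs.get(id);
        if (data == null || tombstones.contains(id)) {
            throw new IllegalStateException("blob deleted: " + id);
        }
        return data;
    }
}
```

The bug described above corresponds to the marking step being skipped or ordered after the removal in old Scality versions, which let a concurrent reader see a blob mid-deletion.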
was:
For some reasons, the test
readShouldNotReadPartiallyWhenDeletingConcurrentlyBigBlob in DeleteBlobContract
fails sometimes randomly with the Scality S3 object storage implementation.
It seems sometimes it manages to read partially a blob while deleting it
concurrently.
The issue doesn't seem to happen with Swift though...
> Read partial blob issue with Scality
> ------------------------------------
>
> Key: JAMES-2860
> URL: https://issues.apache.org/jira/browse/JAMES-2860
> Project: James Server
> Issue Type: Bug
> Reporter: Rene Cordier
> Priority: Major
>
> For some reason, the test
> readShouldNotReadPartiallyWhenDeletingConcurrentlyBigBlob in
> DeleteBlobContract sometimes fails randomly with the Scality S3 object
> storage implementation.
> It seems that it sometimes manages to partially read a blob while that blob
> is being deleted concurrently.
> The issue doesn't seem to happen with Swift though...
> ---
> So after a meeting with people from Scality, this is what we concluded:
> * When a delete operation occurs, the blob is first marked as deleted and
> only then actually removed, precisely to prevent another thread from reading
> it at the same time. The partial read we observe is therefore a bug.
> * This bug was known and seemed to happen with big blobs in old versions. S3
> apparently partitions blobs once they grow bigger than 10 MB, which is when
> the bug manifested.
> * We use an old version of Scality, and the bug has since been fixed in
> newer versions.
> * However, Scality no longer seems to maintain Docker images on Docker Hub,
> although the Dockerfile is available on their GitHub.
> Solution: we need to build our own Scality Docker image from the latest
> changes (v8) and use it instead of the one we currently use (v6)
--
This message was sent by Atlassian Jira
(v8.3.2#803003)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]