[ 
https://issues.apache.org/jira/browse/OAK-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-6661:
---------------------------------
    Description: 
As already explained in OAK-6659, there are cases in which deleting the 
previous spool file fails (on Windows) and new (duplicate) content is silently 
appended to the old file. As a result, the persisted blob matches neither the 
content nor the id of the original sent by the server.

Ensuring that the spool file has the same size as the original blob solves 
this problem. The check is sufficient, since every received chunk is 
individually verified by hash before being appended to the spool file. 
Moreover, the single-threaded nature of the client rules out races in which 
another thread starts appending new content just after the length check has 
passed.
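A minimal sketch of the proposed check (the class and method names here are 
illustrative, not Oak's actual API): after spooling, compare the file size on 
disk with the blob length advertised by the server, and reject the file on a 
mismatch.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical sketch of the length check described above; not Oak code.
final class SpoolFileCheck {

    // Accept the spooled file only if its size equals the length the
    // server reported for the original blob. Per-chunk hash checks have
    // already validated the content of everything appended.
    static boolean lengthMatches(File spoolFile, long expectedLength) {
        return spoolFile.length() == expectedLength;
    }

    public static void main(String[] args) throws IOException {
        File spool = File.createTempFile("blob", ".spool");
        spool.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(spool)) {
            out.write(new byte[]{1, 2, 3, 4});
        }
        // A stale spool file that was appended to would be longer than
        // the blob the server advertised, so the check rejects it.
        System.out.println(lengthMatches(spool, 4)); // true
        System.out.println(lengthMatches(spool, 3)); // false
    }
}
```

Because the client is single-threaded, no other writer can grow the file 
between this comparison and the subsequent use of the spool file.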

  was:
As already explained in OAK-6659, there are cases in which deleting the 
previous spool file fails (on Windows) and new (duplicate) content is silently 
appended to the old file. As a result, the persisted blob matches neither the 
content nor the id of the original sent by the server.

Ensuring that the spool file has the same size as the original blob solves 
this problem. The check is sufficient, since every received chunk is 
individually verified by hash before being appended to the spool file.


> ResponseDecoder should check that the length of the received blob matches the 
> length of the sent blob
> -----------------------------------------------------------------------------------------------------
>
>                 Key: OAK-6661
>                 URL: https://issues.apache.org/jira/browse/OAK-6661
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segment-tar, tarmk-standby
>    Affects Versions: 1.7.6
>            Reporter: Andrei Dulceanu
>            Assignee: Andrei Dulceanu
>              Labels: cold-standby
>             Fix For: 1.7.8
>
>
> As already explained in OAK-6659, there are cases in which deleting the 
> previous spool file fails (on Windows) and new (duplicate) content is 
> silently appended to the old file. As a result, the persisted blob matches 
> neither the content nor the id of the original sent by the server.
> Ensuring that the spool file has the same size as the original blob solves 
> this problem. The check is sufficient, since every received chunk is 
> individually verified by hash before being appended to the spool file. 
> Moreover, the single-threaded nature of the client rules out races in 
> which another thread starts appending new content just after the length 
> check has passed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
