[
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
runzhiwang resolved HDDS-4308.
------------------------------
Resolution: Fixed
> Fix issue with quota update
> ---------------------------
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: mingchao zhao
> Priority: Blocker
> Labels: pull-request-available
>
> Currently, volumeArgs is fetched via getCacheValue and the same object is
> put into the doubleBuffer; this can cause an issue.
> Let's take the below scenario:
> Initial VolumeArgs quotaBytes -> 10000
> 1. T1 -> updates VolumeArgs, subtracting 1000, and puts the updated
> volumeArgs into the doubleBuffer.
> 2. T2 -> updates VolumeArgs, subtracting 2000, but this has not yet been
> flushed to the double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as
> bytes used.*
> Now T1 is picked up by the double buffer, and when it commits, because it
> uses the cached object that was put into the doubleBuffer, it flushes a value
> that already includes T2's update (as it is a cached object), so the DB
> records bytesUsed as 7000.
> Now the OM restarts, and the DB only has transactions up to T1. (We get this
> info from the TransactionInfo
> Table: https://issues.apache.org/jira/browse/HDDS-3685.)
> Now T2 is replayed, since it was not committed to the DB; 2000 is subtracted
> again, leaving the DB with 5000.
> But after T2 the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. Because we use a cached object and put that same cached object into the
> double buffer, this kind of inconsistency can occur.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]