[
https://issues.apache.org/jira/browse/HDDS-1985?focusedWorklogId=329111&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-329111
]
ASF GitHub Bot logged work on HDDS-1985:
----------------------------------------
Author: ASF GitHub Bot
Created on: 16/Oct/19 11:10
Start Date: 16/Oct/19 11:10
Worklog Time Spent: 10m
Work Description: elek commented on pull request #33: HDDS-1985. Fix
listVolumes API
URL: https://github.com/apache/hadoop-ozone/pull/33
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 329111)
Remaining Estimate: 0h
Time Spent: 10m
> Fix listVolumes API
> -------------------
>
> Key: HDDS-1985
> URL: https://issues.apache.org/jira/browse/HDDS-1985
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> This Jira is to fix the listVolumes API in the HA code path.
> In HA, writes first go to an in-memory cache: the result is put into the
> cache and the response is returned; a double-buffer thread later picks the
> entry up and flushes it to disk. Therefore listVolumes must consult both the
> in-memory cache and the RocksDB volume table when listing volumes for a
> user.
>
> No fix is required, however, because the information is retrieved from the
> MPU Key table via get() rather than through RocksDB table iteration, and
> get() checks the cache first before falling back to the table.
>
> This Jira was used to add an integration test verifying the behavior.
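The read path described above can be sketched as follows. This is an illustrative model, not the real Ozone OM API: the class, method names, and the use of plain Maps in place of RocksDB and the table cache are all assumptions made for the example.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch of an HA read path: an in-memory cache of unflushed
// writes sits in front of the persisted table (RocksDB, modeled here as a Map).
public class CacheBackedTable {
    private final Map<String, String> cache = new HashMap<>(); // unflushed writes
    private final Map<String, String> table = new HashMap<>(); // stands in for RocksDB

    // Writes land in the cache; the double-buffer thread would later flush them.
    void put(String key, String value) { cache.put(key, value); }

    // Simulates the double-buffer flush of cached entries to disk.
    void flush() { table.putAll(cache); cache.clear(); }

    // get() checks the cache before the table, so point lookups see
    // unflushed entries without any extra fix -- the case the Jira describes.
    String get(String key) {
        String v = cache.get(key);
        return (v != null) ? v : table.get(key);
    }

    // A listing that iterated only the persisted table would miss unflushed
    // entries; a correct listing merges both sources.
    Set<String> listKeys() {
        Set<String> keys = new TreeSet<>(table.keySet());
        keys.addAll(cache.keySet());
        return keys;
    }

    public static void main(String[] args) {
        CacheBackedTable volumes = new CacheBackedTable();
        volumes.put("/vol1", "user1");
        volumes.flush();                // /vol1 is now "on disk"
        volumes.put("/vol2", "user1");  // /vol2 exists only in the cache
        System.out.println(volumes.get("/vol2"));  // visible via the cache
        System.out.println(volumes.listKeys());    // merged view of both sources
    }
}
```

The sketch shows why get()-based reads needed no change while an iteration-based listing would: only the merged view in listKeys() surfaces entries that the double-buffer thread has not yet flushed.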
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]