[ https://issues.apache.org/jira/browse/HDDS-116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495415#comment-16495415 ]
Xiaoyu Yao commented on HDDS-116:
---------------------------------

{quote}I think we can instead use AtomicLong? Thoughts?{quote}

Yes, we should use AtomicLong.

{quote}Even if we grab the VolumeSet lock here, the Container to which this volume is returned might still write to a removed volume. We should have the calling function grab a lock on the VolumeSet when passing the volumeList to RRVolumeChoosingPolicy.{quote}

Agree. Maybe we can use the composite pattern and make VolumeChoosingPolicy part of VolumeSet? That way, we don't have to expose the lock.

HDDS currently uses ContainerStorageLocation/StorageLocation and starts a DU thread per location (volume) to get the usage info. From the design spec, it seems that we are going to use VolumeSet/VolumeInfo to replace ContainerStorageLocation/StorageLocation. However, the current VolumeInfo does not have the ability to get the usage information itself the way ContainerStorageLocation does. We don't have to address it now if you plan to add it later in a different JIRA.

> Implement VolumeSet to manage disk volumes
> ------------------------------------------
>
>                 Key: HDDS-116
>                 URL: https://issues.apache.org/jira/browse/HDDS-116
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Hanisha Koneru
>            Assignee: Hanisha Koneru
>            Priority: Major
>              Labels: ContainerIO
>             Fix For: 0.2.1
>
>     Attachments: HDDS-116-HDDS-48.001.patch, HDDS-116-HDDS-48.002.patch, HDDS-116-HDDS-48.003.patch
>
>
> VolumeSet would be responsible for managing volumes in the Datanode.
> Some of its functions are:
> # Initialize volumes on startup
> # Provide APIs to add/remove volumes
> # Choose and return a volume to the calling service based on the volume choosing policy (currently implemented: Round Robin choosing policy)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
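The two suggestions in the comment above (an AtomicLong for the round-robin index, and composing the choosing policy into VolumeSet so the volume list is only read under VolumeSet's own lock) could be sketched roughly as follows. This is an illustrative sketch only; the class and method names are assumptions and do not necessarily match the actual HDDS-48 patch, and capacity checks are omitted:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical round-robin policy: an AtomicLong replaces a plain
// index so concurrent callers can advance it without extra locking.
class RoundRobinVolumeChoosingPolicy {
  private final AtomicLong nextVolumeIndex = new AtomicLong(0);

  String chooseVolume(List<String> volumes, long containerSize) {
    if (volumes.isEmpty()) {
      throw new IllegalStateException("No volumes available");
    }
    int index = (int) (nextVolumeIndex.getAndIncrement() % volumes.size());
    return volumes.get(index); // capacity/space checks omitted in this sketch
  }
}

// Composite pattern: the policy is a member of VolumeSet, so every
// chooseVolume call runs under VolumeSet's read lock and the lock
// never has to be exposed to callers.
class VolumeSet {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final List<String> volumes;
  private final RoundRobinVolumeChoosingPolicy policy =
      new RoundRobinVolumeChoosingPolicy();

  VolumeSet(List<String> initialVolumes) {
    this.volumes = new ArrayList<>(initialVolumes);
  }

  String chooseVolume(long containerSize) {
    lock.readLock().lock();
    try {
      return policy.chooseVolume(volumes, containerSize);
    } finally {
      lock.readLock().unlock();
    }
  }

  void removeVolume(String volume) {
    lock.writeLock().lock();
    try {
      volumes.remove(volume);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```

Because removals take the write lock while chooseVolume holds the read lock, a volume cannot disappear mid-selection; as the quoted comment notes, a Container that already received a volume from an earlier call could still race with removal, which this sketch does not solve by itself.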