Thank you for the reply.

[root@ovirt1 prod ~]# lsblk
sda                 8:0    0  1.1T  0 disk
├─sda1              8:1    0  500M  0 part /boot
├─sda2              8:2    0    4G  0 part [SWAP]
└─sda3              8:3    0  1.1T  0 part
  ├─rootvg01-lv01 253:0    0   50G  0 lvm  /
  └─rootvg01-lv02 253:1    0    1T  0 lvm  /ovirt-store

ovirt2 looks the same.

[root@ovirt3 prod ~]# lsblk
sda                 8:0    0 279.4G  0 disk
├─sda1              8:1    0   500M  0 part /boot
├─sda2              8:2    0     4G  0 part [SWAP]
└─sda3              8:3    0 274.9G  0 part
  ├─rootvg01-lv01 253:0    0    50G  0 lvm  /
*  └─rootvg01-lv02 253:1    0 224.9G  0 lvm  /ovirt-store*

Ah ha! I missed that. Thank you!!
I can fix that.
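For what it's worth, this is exactly why the volume came out at 225G: a replica 3 volume can only be as large as its smallest brick, and the brick filesystem on ovirt3 is 224.9G. A minimal sketch of that arithmetic (sizes in GiB taken from the lsblk output above, treating 1T as roughly 1024G):

```shell
# Brick filesystem sizes from the lsblk output above (GiB):
# ovirt1 and ovirt2 are ~1024G, ovirt3 is 224.9G.
# A replica 3 volume is capped at the smallest brick.
printf '%s\n' 1024 1024 224.9 |
  awk 'NR == 1 || $1 < min { min = $1 }
       END { printf "usable replica 3 capacity: %sG\n", min }'
# prints: usable replica 3 capacity: 224.9G
```

Once the brick on ovirt3 is grown to match the others, the volume should report the full size.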

Once I detached the storage domain, it is no longer listed.
Is there some option to make it show detached domains?


On 12/1/16 3:58 AM, Sahina Bose wrote:

On Thu, Dec 1, 2016 at 10:54 AM, Bill James < <>> wrote:

    I have a 3-node cluster with a replica 3 gluster volume.
    But for some reason the volume is not using the full size available.
    I thought maybe it was because I had created a second gluster
    volume on the same partition, so I tried to remove it.

    I was able to put it in maintenance mode and detach it, but in no
    window was the "remove" option enabled.
    Now if I select "attach data" I see ovirt thinks the volume is
    still there, although it is not.

    Two questions:

    1. how do I clear out the old removed volume from ovirt?

To remove the storage domain, you need to detach it from the Data Center sub-tab of the Storage Domain. Once detached, the remove and format domain options should be available to you. Once you detach it - what is the status of the storage domain? Does it show as Detached?

    2. how do I get gluster to use the full disk space available?

    It's a 1T partition, but it only created a 225G gluster volume. Why?
    How do I get the space back?

What's the output of "lsblk"? Is it consistent across all 3 nodes?

    All three nodes look the same:
    /dev/mapper/rootvg01-lv02  1.1T  135G  929G  13% /ovirt-store
    (gluster volume mount)     225G  135G   91G  60%

    [root@ovirt1 prod]# gluster volume status
    Status of volume: gv1
    Gluster process                   TCP Port  RDMA Port  Online  Pid
    k1/gv1                            49152     0          Y       5218
    k1/gv1                            49152     0          Y       5678
    k1/gv1                            49152     0          Y       61386
    NFS Server on localhost           2049      0          Y       31312
    Self-heal Daemon on localhost     N/A       N/A        Y       31320
    NFS Server on <>                  2049      0          Y       38109
    Self-heal Daemon on <>            N/A       N/A        Y       38119
    NFS Server on <>                  2049      0          Y       5387
    Self-heal Daemon on <>            N/A       N/A        Y       5402

    Task Status of Volume gv1
    There are no active volume tasks
