On Thu, Dec 1, 2016 at 10:54 AM, Bill James <bill.ja...@j2.com> wrote:

> I have a 3 node cluster with a replica 3 gluster volume.
> But for some reason the volume is not using the full size available.
> I thought maybe it was because I had created a second gluster volume on
> same partition, so I tried to remove it.
>
> I was able to put it in maintenance mode and detach it, but in no window
> was the "remove" option ever enabled.
> Now if I select "attach data", I see that oVirt thinks the volume is still
> there, although it is not.
>
> 2 questions.
>
> 1. how do I clear out the old removed volume from ovirt?
>

To remove the storage domain, you need to detach it from the Data Center
sub-tab of the Storage Domain. Once detached, the remove and format domain
options should be available to you.
Once you detach it - what is the status of the storage domain? Does it show
as Detached?


>
> 2. how do I get gluster to use the full disk space available?
>

> It's a 1T partition but it only created a 225G gluster volume. Why? How do
> I get the space back?
>

What's the output of "lsblk"? Is it consistent across all 3 nodes?
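For background on why the two numbers can diverge: the usable capacity of a replica 3 volume is bounded by the smallest replicated brick's filesystem, so if any one node's brick sits on a smaller filesystem (or a filesystem that was grown after the brick was created), the whole volume reports the smaller size. Also, since a second volume was created on the same partition, it is worth ruling out the behavior in some GlusterFS versions where capacity reported via df is divided among bricks sharing one filesystem. A minimal sketch of the "smallest brick wins" bound, using hypothetical per-node sizes (the 225 here is illustrative, not taken from your nodes):

```shell
# Hypothetical brick filesystem sizes in GiB, one per node, as df might
# report them. A replica volume's usable size is the minimum, not the sum.
sizes="1100 225 1100"
min=""
for s in $sizes; do
  if [ -z "$min" ] || [ "$s" -lt "$min" ]; then min=$s; fi
done
echo "usable volume size: ${min}G"
```

So comparing "df -h" on the brick mount point across all three nodes, alongside lsblk, should show whether one node is the outlier.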


>
> All three nodes look the same:
> /dev/mapper/rootvg01-lv02  1.1T  135G  929G  13%  /ovirt-store
> ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60%  /rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1
>
>
> [root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
> Status of volume: gv1
> Gluster process                                    TCP Port  RDMA Port  Online  Pid
> -----------------------------------------------------------------------------------
> Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5218
> Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5678
> Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       61386
> NFS Server on localhost                            2049      0          Y       31312
> Self-heal Daemon on localhost                      N/A       N/A        Y       31320
> NFS Server on ovirt3-gl.j2noc.com                  2049      0          Y       38109
> Self-heal Daemon on ovirt3-gl.j2noc.com            N/A       N/A        Y       38119
> NFS Server on ovirt2-gl.j2noc.com                  2049      0          Y       5387
> Self-heal Daemon on ovirt2-gl.j2noc.com            N/A       N/A        Y       5402
>
> Task Status of Volume gv1
> -----------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> Thanks.
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
