Re: [ovirt-users] remove gluster storage domain and resize gluster storage domain

2016-12-01 Thread Ramesh Nachimuthu




----- Original Message -----
> From: "Bill James" 
> To: "Sahina Bose" 
> Cc: users@ovirt.org
> Sent: Thursday, December 1, 2016 8:15:03 PM
> Subject: Re: [ovirt-users] remove gluster storage domain and resize gluster 
> storage domain
> 
> thank you for the reply.
> 
> [root@ovirt1 prod ~]# lsblk
> NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> sda                 8:0    0  1.1T  0 disk
> ├─sda1              8:1    0  500M  0 part /boot
> ├─sda2              8:2    0    4G  0 part [SWAP]
> └─sda3              8:3    0  1.1T  0 part
>   ├─rootvg01-lv01 253:0    0   50G  0 lvm  /
>   └─rootvg01-lv02 253:1    0    1T  0 lvm  /ovirt-store
> 
> ovirt2 same.
> ovirt3:
> 
> [root@ovirt3 prod ~]# lsblk
> NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda                 8:0    0 279.4G  0 disk
> ├─sda1              8:1    0   500M  0 part /boot
> ├─sda2              8:2    0     4G  0 part [SWAP]
> └─sda3              8:3    0 274.9G  0 part
>   ├─rootvg01-lv01 253:0    0    50G  0 lvm  /
>   └─rootvg01-lv02 253:1    0 224.9G  0 lvm  /ovirt-store
> 

Note the difference between ovirt3 and the other two nodes: LV 'rootvg01-lv02' on
ovirt3 has only 224.9 GB of capacity. In a replicated Gluster volume, the usable
capacity of the volume is limited by the smallest brick in the replica set. If you
want a 1 TB Gluster volume, make sure that every brick in the replicated volume
has at least 1 TB of capacity.

Regards,
Ramesh
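[Editor's note] The rule Ramesh states can be illustrated with a minimal sketch (plain Python, not oVirt or Gluster code; the function name and sizes are taken from the thread for illustration only):

```python
# Minimal sketch: in a replicated Gluster volume every file is written to all
# bricks, so the usable capacity is bounded by the smallest brick.

def replica_usable_gib(brick_sizes_gib):
    """Usable capacity (GiB) of a replica volume: the smallest brick wins."""
    sizes = list(brick_sizes_gib)
    if not sizes:
        raise ValueError("no bricks")
    return min(sizes)

# Brick sizes from the thread: ovirt1/ovirt2 have ~1 TiB, ovirt3 only ~224.9 GiB.
bricks = {"ovirt1": 1024.0, "ovirt2": 1024.0, "ovirt3": 224.9}
print(replica_usable_gib(bricks.values()))  # -> 224.9, matching the 225G mount
```

This is why the mounted volume reports 225G even though two of the three bricks sit on 1T LVs.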

> Ah ha! I missed that. Thank you!!
> I can fix that.
> 
> 
> Once I detached the storage domain it is no longer listed.
> Is there some option to make it show detached volumes?
> 
> ovirt-engine-3.6.4.1-1.el7.centos.noarch
> 
> 
> On 12/1/16 3:58 AM, Sahina Bose wrote:
> 
> 
> 
> 
> 
> On Thu, Dec 1, 2016 at 10:54 AM, Bill James < bill.ja...@j2.com > wrote:
> 
> 
> I have a 3 node cluster with replica 3 gluster volume.
> But for some reason the volume is not using the full size available.
> I thought maybe it was because I had created a second gluster volume on same
> partition, so I tried to remove it.
> 
> I was able to put it in maintenance mode and detach it, but in no window was
> I able to get the "remove" option to be enabled.
> Now if I select "attach data" I see ovirt thinks the volume is still there,
> although it is not.
> 
> 2 questions.
> 
> 1. how do I clear out the old removed volume from ovirt?
> 
> To remove the storage domain, you need to detach the domain from the Data
> Center sub tab of Storage Domain. Once detached, the remove and format
> domain option should be available to you.
> Once you detach - what is the status of the storage domain? Does it show as
> Detached?
> 
> 
> 
> 
> 2. how do I get gluster to use the full disk space available?
> 
> 
> 
> It's a 1T partition, but it only created a 225G gluster volume. Why? How do I
> get the space back?
> 
> What's the output of "lsblk"? Is it consistent across all 3 nodes?
> 
> 
> 
> 
> All three nodes look the same:
> /dev/mapper/rootvg01-lv02 1.1T 135G 929G 13% /ovirt-store
> ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60% /rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1
> 
> 
> [root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
> Status of volume: gv1
> Gluster process                                    TCP Port  RDMA Port  Online  Pid
> -----------------------------------------------------------------------------------
> Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5218
> Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5678
> Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       61386
> NFS Server on localhost                            2049      0          Y       31312
> Self-heal Daemon on localhost                      N/A       N/A        Y       31320
> NFS Server on ovirt3-gl.j2noc.com                  2049      0          Y       38109
> Self-heal Daemon on ovirt3-gl.j2noc.com            N/A       N/A        Y       38119
> NFS Server on ovirt2-gl.j2noc.com                  2049      0          Y       5387
> Self-heal Daemon on ovirt2-gl.j2noc.com            N/A       N/A        Y       5402
>
> Task Status of Volume gv1
> -----------------------------------------------------------------------------------
> There are no active volume tasks
> 
> 
> Thanks.
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
> 
> 
> 
> This email, its contents and attachments contain information from j2 Global,
> Inc. and/or its affiliates which may be privileged, confidential or
> otherwise protected from disclosure. The information is intended to be for
> the addressee(s) only. If you are not an addressee, any disclosure, copy,
> distribution, or use of the contents of this message is prohibited. If you
> have received this email in error please notify the sender by reply e-mail
> and delete the original message and any copies. © 2015 j2 Global, Inc. All
> rights reserved. eFax®, eVoice®, Campaigner®, FuseMail®, KeepItSafe®
> and Onebox® are registered trademarks of j2 Global, Inc. and its affiliates.
> 


Re: [ovirt-users] remove gluster storage domain and resize gluster storage domain

2016-12-01 Thread Bill James

thank you for the reply.

[root@ovirt1 prod ~]# lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                 8:0    0  1.1T  0 disk
├─sda1              8:1    0  500M  0 part /boot
├─sda2              8:2    0    4G  0 part [SWAP]
└─sda3              8:3    0  1.1T  0 part
  ├─rootvg01-lv01 253:0    0   50G  0 lvm  /
  └─rootvg01-lv02 253:1    0    1T  0 lvm  /ovirt-store

ovirt2 same.
ovirt3:

[root@ovirt3 prod ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0 279.4G  0 disk
├─sda1              8:1    0   500M  0 part /boot
├─sda2              8:2    0     4G  0 part [SWAP]
└─sda3              8:3    0 274.9G  0 part
  ├─rootvg01-lv01 253:0    0    50G  0 lvm  /
  └─rootvg01-lv02 253:1    0 224.9G  0 lvm  /ovirt-store

Ah ha! I missed that. Thank you!!
I can fix that.


Once I detached the storage domain it is no longer listed.
Is there some option to make it show detached volumes?

ovirt-engine-3.6.4.1-1.el7.centos.noarch


On 12/1/16 3:58 AM, Sahina Bose wrote:



On Thu, Dec 1, 2016 at 10:54 AM, Bill James wrote:


I have a  3 node cluster with replica 3 gluster volume.
But for some reason the volume is not using the full size available.
I thought maybe it was because I had created a second gluster
volume on same partition, so I tried to remove it.

I was able to put it in maintenance mode and detach it, but in no
window was I able to get the "remove" option to be enabled.
Now if I select "attach data" I see ovirt thinks the volume is
still there, although it is not.

2 questions.

1. how do I clear out the old removed volume from ovirt?


To remove the storage domain, you need to detach the domain from the 
Data Center sub tab of Storage Domain. Once detached, the remove and 
format domain option should be available to you.
Once you detach - what is the status of the storage domain? Does it 
show as Detached?



2. how do I get gluster to use the full disk space available?


It's a 1T partition, but it only created a 225G gluster volume. Why?
How do I get the space back?


What's the output of "lsblk"? Is it consistent across all 3 nodes?


All three nodes look the same:
/dev/mapper/rootvg01-lv02  1.1T  135G  929G  13% /ovirt-store
ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60% /rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1


[root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
Status of volume: gv1
Gluster process                                    TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5218
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5678
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       61386
NFS Server on localhost                            2049      0          Y       31312
Self-heal Daemon on localhost                      N/A       N/A        Y       31320
NFS Server on ovirt3-gl.j2noc.com                  2049      0          Y       38109
Self-heal Daemon on ovirt3-gl.j2noc.com            N/A       N/A        Y       38119
NFS Server on ovirt2-gl.j2noc.com                  2049      0          Y       5387
Self-heal Daemon on ovirt2-gl.j2noc.com            N/A       N/A        Y       5402

Task Status of Volume gv1
-----------------------------------------------------------------------------------
There are no active volume tasks


Thanks.








Re: [ovirt-users] remove gluster storage domain and resize gluster storage domain

2016-12-01 Thread Sahina Bose
On Thu, Dec 1, 2016 at 10:54 AM, Bill James  wrote:

> I have a  3 node cluster with replica 3 gluster volume.
> But for some reason the volume is not using the full size available.
> I thought maybe it was because I had created a second gluster volume on
> same partition, so I tried to remove it.
>
> I was able to put it in maintenance mode and detach it, but in no window
> was I able to get the "remove" option to be enabled.
> Now if I select "attach data" I see ovirt thinks the volume is still
> there, although it is not.
>
> 2 questions.
>
> 1. how do I clear out the old removed volume from ovirt?
>

To remove the storage domain, you need to detach the domain from the Data
Center sub tab of Storage Domain. Once detached, the remove and format
domain option should be available to you.
Once you detach - what is the status of the storage domain? Does it show as
Detached?
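[Editor's note] The lifecycle Sahina describes (a domain must be in maintenance, then detached, before "Remove"/"Format" become available) can be sketched as a toy state machine. This is illustrative Python only, not oVirt code; the class and state names are assumptions made for the example:

```python
# Toy model of the storage-domain lifecycle described above (not oVirt code):
# Active -> Maintenance -> Detached, and only a Detached domain is removable.

ALLOWED = {
    "active": {"maintenance"},
    "maintenance": {"detached", "active"},
    "detached": {"removed"},
}

class StorageDomain:
    def __init__(self, name):
        self.name, self.state = name, "active"

    def transition(self, new_state):
        if new_state not in ALLOWED.get(self.state, set()):
            raise RuntimeError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state

    @property
    def removable(self):
        # Matches the advice above: remove/format only once detached.
        return self.state == "detached"

sd = StorageDomain("gv1-domain")
sd.transition("maintenance")
sd.transition("detached")
print(sd.removable)  # -> True
```

Trying to remove a domain straight from Active fails in this model, which mirrors why Bill never saw the "remove" option enabled before detaching.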


>
> 2. how do I get gluster to use the full disk space available?
>

> It's a 1T partition, but it only created a 225G gluster volume. Why? How do
> I get the space back?
>

What's the output of "lsblk"? Is it consistent across all 3 nodes?
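[Editor's note] The per-node lsblk comparison Sahina asks for can be automated with a small helper. This is an illustrative sketch, not part of oVirt or Gluster; the sample strings below are abbreviated from the thread's actual lsblk output:

```python
# Illustrative helper: given `lsblk` text captured from each node, pull out
# the size of the brick LV and report which node limits the replica volume.
# Human-readable sizes like "1.1T" / "224.9G" are converted to GiB to compare.

SUFFIX_GIB = {"M": 1 / 1024, "G": 1.0, "T": 1024.0}

def size_to_gib(text):
    value, unit = float(text[:-1]), text[-1]
    return value * SUFFIX_GIB[unit]

def lv_size_gib(lsblk_output, lv_name):
    for line in lsblk_output.splitlines():
        fields = line.split()
        # NAME may carry tree glyphs like "└─", so match by substring.
        if fields and lv_name in fields[0]:
            return size_to_gib(fields[3])  # SIZE is lsblk's 4th column
    raise LookupError(lv_name)

outputs = {  # abbreviated lsblk lines from the thread
    "ovirt1": "└─rootvg01-lv02 253:1 0 1T 0 lvm /ovirt-store",
    "ovirt3": "└─rootvg01-lv02 253:1 0 224.9G 0 lvm /ovirt-store",
}
sizes = {node: lv_size_gib(out, "rootvg01-lv02") for node, out in outputs.items()}
smallest = min(sizes, key=sizes.get)
print(smallest, sizes[smallest])  # -> ovirt3 224.9
```

Running something like this across all bricks makes the undersized node obvious without eyeballing three terminals.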


>
> All three nodes look the same:
> /dev/mapper/rootvg01-lv02  1.1T  135G  929G  13% /ovirt-store
> ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60%
> /rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1
>
>
> [root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
> Status of volume: gv1
> Gluster process                                    TCP Port  RDMA Port  Online  Pid
> -----------------------------------------------------------------------------------
> Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5218
> Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5678
> Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       61386
> NFS Server on localhost                            2049      0          Y       31312
> Self-heal Daemon on localhost                      N/A       N/A        Y       31320
> NFS Server on ovirt3-gl.j2noc.com                  2049      0          Y       38109
> Self-heal Daemon on ovirt3-gl.j2noc.com            N/A       N/A        Y       38119
> NFS Server on ovirt2-gl.j2noc.com                  2049      0          Y       5387
> Self-heal Daemon on ovirt2-gl.j2noc.com            N/A       N/A        Y       5402
>
> Task Status of Volume gv1
> -----------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> Thanks.


[ovirt-users] remove gluster storage domain and resize gluster storage domain

2016-11-30 Thread Bill James

I have a  3 node cluster with replica 3 gluster volume.
But for some reason the volume is not using the full size available.
I thought maybe it was because I had created a second gluster volume on 
same partition, so I tried to remove it.


I was able to put it in maintenance mode and detach it, but in no window 
was I able to get the "remove" option to be enabled.
Now if I select "attach data" I see ovirt thinks the volume is still 
there, although it is not.


2 questions.

1. how do I clear out the old removed volume from ovirt?

2. how do I get gluster to use the full disk space available?

It's a 1T partition, but it only created a 225G gluster volume. Why? How
do I get the space back?


All three nodes look the same:
/dev/mapper/rootvg01-lv02  1.1T  135G  929G  13% /ovirt-store
ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60% 
/rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1



[root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
Status of volume: gv1
Gluster process                                    TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5218
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5678
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       61386
NFS Server on localhost                            2049      0          Y       31312
Self-heal Daemon on localhost                      N/A       N/A        Y       31320
NFS Server on ovirt3-gl.j2noc.com                  2049      0          Y       38109
Self-heal Daemon on ovirt3-gl.j2noc.com            N/A       N/A        Y       38119
NFS Server on ovirt2-gl.j2noc.com                  2049      0          Y       5387
Self-heal Daemon on ovirt2-gl.j2noc.com            N/A       N/A        Y       5402

Task Status of Volume gv1
-----------------------------------------------------------------------------------
There are no active volume tasks


Thanks.