[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-14 Thread Adrian Quintero
Thanks Gobinda,
I am in the process of finishing up the 9-node cluster; once done, I will
test this Ansible role...



On Fri, Jun 14, 2019 at 12:45 PM Gobinda Das  wrote:

> We have an Ansible role to replace a Gluster node. I think it works only
> with the same FQDN.
> https://github.com/sac/gluster-ansible-maintenance
> I am not sure if it covers all scenarios, but you can try with the same FQDN.
>
> On Fri, Jun 14, 2019 at 7:13 AM Adrian Quintero 
> wrote:
>
>> Strahil,
>> Thanks for all the follow-up. I will try to reproduce the same scenario
>> today: deploy a 9-node cluster, completely kill the initiating node (vmm10),
>> and see if I can recover using the extra-server approach (different
>> IP/FQDN). If I am able to recover, I will also test your suggested second
>> approach (same IP/FQDN).
>> My objective here is to document the possible recovery scenarios without
>> any downtime or impact.
>>
>> I have already documented a few setup and recovery scenarios for 6- and
>> 9-node hyperconverged setups, and I will make them available to the
>> community, hopefully this week, including the tests you have been helping
>> me with. Hopefully this will help others in the same situation, and it
>> will also get me feedback from more knowledgeable admins so that I can
>> take this into production in the near future.
>>
>>
>> Thanks again.
>>
>>
>>
>> On Wed, Jun 12, 2019 at 11:58 PM Strahil  wrote:
>>
>>> Hi Adrian,
>>>
>>> Please keep in mind that when a server dies, the easiest way to recover
>>> is to get another freshly installed server with a different IP/FQDN.
>>> Then you will need to use 'replace-brick', and once Gluster replaces that
>>> node you should be able to remove the old entry in oVirt.
>>> Once the old entry is gone, you can add the new installation in oVirt
>>> via the UI.
>>>
>>> Another approach is to have the same IP/FQDN for the fresh install. In
>>> this situation, you need to have the same Gluster ID (which should be a
>>> text file) and the peer IDs. Most probably you can create them on your
>>> own, based on data from the other Gluster peers.
>>> Once the fresh install shows up in 'gluster peer', you can initiate a
>>> 'reset-brick' (don't forget to set up SELinux, the firewall and the
>>> repos) and a full heal.
>>> From there you can reinstall the machine from the UI and it should be
>>> available for usage.
>>>
>>> P.S.: I know that the whole procedure is not so easy :)
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>> On Jun 12, 2019 19:02, Adrian Quintero  wrote:
>>>
>>> Strahil, I don't use the GUI that much; in this case I need to understand
>>> how it is all tied together if I want to move to production. As far as
>>> Gluster goes, I can do the administration through the CLI; however, my
>>> test environment was set up using gdeploy for a hyperconverged setup
>>> under oVirt.
>>> The initial setup was 3 servers with the same set of physical disks:
>>> sdb, sdc, sdd, sde (the last one used for caching, as it is an SSD)
>>>
>>> vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>>> vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>>> vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>>> vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>>
>>> vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>>> vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>>> vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>>> vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>>
>>> vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>>> vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>>> vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>>> vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>>
>>> *As you can see from the above, the engine volume is composed of hosts
>>> vmm10 (the initiating cluster server, now a dead server), vmm11 and
>>> vmm12, on block device /dev/sdb (a 100GB LV); the vmstore1 volume is
>>> also on /dev/sdb (a 2600GB LV).*
>>> /dev/mapper/gluster_vg_sdb-gluster_lv_engine   xfs
>>>   100G  2.0G   98G   2% /gluster_bricks/engine
>>> /dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1 xfs
>>>   2.6T   35M  2.6T   1% /gluster_bricks/vmstore1
>>> /dev/mapper/gluster_vg_sdc-gluster_lv_data1xfs
>>>   2.7T  4.6G  2.7T   1% /gluster_bricks/data1
>>> /dev/mapper/gluster_vg_sdd-gluster_lv_data2xfs
>>>   2.7T  9.5G  2.7T   1% /gluster_bricks/data2
>>> vmm10.mydomain.com:/engine
>>> fuse.glusterfs  300G  9.2G  291G   4%
>>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
>>> vmm10.mydomain.com:/vmstore1
>>> fuse.glusterfs  5.1T   53G  5.1T   2%
>>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
>>> vmm10.mydomain.com:/data1
>>>  fuse.glusterfs  8.0T   95G  7.9T   2%
>>> 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-14 Thread Gobinda Das
We have an Ansible role to replace a Gluster node. I think it works only with
the same FQDN.
https://github.com/sac/gluster-ansible-maintenance
I am not sure if it covers all scenarios, but you can try with the same FQDN.
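For reference, driving that role might look something like the sketch below. The playbook name and inventory layout are placeholders invented for illustration (the role's actual entry points and variables should be taken from the repository's README), so treat this as an outline rather than a working invocation:

```shell
# Fetch the maintenance role referenced above
git clone https://github.com/sac/gluster-ansible-maintenance
cd gluster-ansible-maintenance

# Run it against the cluster; 'replace_node.yml' and 'inventory' are
# placeholder names -- consult the repo for the real play and variables
ansible-playbook -i inventory replace_node.yml
```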

On Fri, Jun 14, 2019 at 7:13 AM Adrian Quintero 
wrote:

> Strahil,
> Thanks for all the follow-up. I will try to reproduce the same scenario
> today: deploy a 9-node cluster, completely kill the initiating node (vmm10),
> and see if I can recover using the extra-server approach (different
> IP/FQDN). If I am able to recover, I will also test your suggested second
> approach (same IP/FQDN).
> My objective here is to document the possible recovery scenarios without
> any downtime or impact.
>
> I have already documented a few setup and recovery scenarios for 6- and
> 9-node hyperconverged setups, and I will make them available to the
> community, hopefully this week, including the tests you have been helping
> me with. Hopefully this will help others in the same situation, and it
> will also get me feedback from more knowledgeable admins so that I can
> take this into production in the near future.
>
>
> Thanks again.
>
>
>
> On Wed, Jun 12, 2019 at 11:58 PM Strahil  wrote:
>
>> Hi Adrian,
>>
>> Please keep in mind that when a server dies, the easiest way to recover
>> is to get another freshly installed server with a different IP/FQDN.
>> Then you will need to use 'replace-brick', and once Gluster replaces that
>> node you should be able to remove the old entry in oVirt.
>> Once the old entry is gone, you can add the new installation in oVirt via
>> the UI.
>>
>> Another approach is to have the same IP/FQDN for the fresh install. In
>> this situation, you need to have the same Gluster ID (which should be a
>> text file) and the peer IDs. Most probably you can create them on your
>> own, based on data from the other Gluster peers.
>> Once the fresh install shows up in 'gluster peer', you can initiate a
>> 'reset-brick' (don't forget to set up SELinux, the firewall and the
>> repos) and a full heal.
>> From there you can reinstall the machine from the UI and it should be
>> available for usage.
>>
>> P.S.: I know that the whole procedure is not so easy :)
>>
>> Best Regards,
>> Strahil Nikolov
>> On Jun 12, 2019 19:02, Adrian Quintero  wrote:
>>
>> Strahil, I don't use the GUI that much; in this case I need to understand
>> how it is all tied together if I want to move to production. As far as
>> Gluster goes, I can do the administration through the CLI; however, my
>> test environment was set up using gdeploy for a hyperconverged setup
>> under oVirt.
>> The initial setup was 3 servers with the same set of physical disks:
>> sdb, sdc, sdd, sde (the last one used for caching, as it is an SSD)
>>
>> vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>> vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>> vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>> vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>
>> vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>> vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>> vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>> vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>
>> vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>> vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>> vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>> vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>
>> *As you can see from the above, the engine volume is composed of hosts
>> vmm10 (the initiating cluster server, now a dead server), vmm11 and
>> vmm12, on block device /dev/sdb (a 100GB LV); the vmstore1 volume is
>> also on /dev/sdb (a 2600GB LV).*
>> /dev/mapper/gluster_vg_sdb-gluster_lv_engine   xfs
>>   100G  2.0G   98G   2% /gluster_bricks/engine
>> /dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1 xfs
>>   2.6T   35M  2.6T   1% /gluster_bricks/vmstore1
>> /dev/mapper/gluster_vg_sdc-gluster_lv_data1xfs
>>   2.7T  4.6G  2.7T   1% /gluster_bricks/data1
>> /dev/mapper/gluster_vg_sdd-gluster_lv_data2xfs
>>   2.7T  9.5G  2.7T   1% /gluster_bricks/data2
>> vmm10.mydomain.com:/engine
>> fuse.glusterfs  300G  9.2G  291G   4%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
>> vmm10.mydomain.com:/vmstore1
>> fuse.glusterfs  5.1T   53G  5.1T   2%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
>> vmm10.mydomain.com:/data1
>>  fuse.glusterfs  8.0T   95G  7.9T   2%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data1
>> vmm10.mydomain.com:/data2
>>  fuse.glusterfs  8.0T  112G  7.8T   2%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data2
>>
>>
>>
>>
>> *before any issues I increased the size of the cluster and the gluster
>> cluster with the following, creating 4 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-13 Thread Adrian Quintero
Strahil,
Thanks for all the follow-up. I will try to reproduce the same scenario
today: deploy a 9-node cluster, completely kill the initiating node (vmm10),
and see if I can recover using the extra-server approach (different
IP/FQDN). If I am able to recover, I will also test your suggested second
approach (same IP/FQDN).
My objective here is to document the possible recovery scenarios without
any downtime or impact.

I have already documented a few setup and recovery scenarios for 6- and
9-node hyperconverged setups, and I will make them available to the
community, hopefully this week, including the tests you have been helping
me with. Hopefully this will help others in the same situation, and it
will also get me feedback from more knowledgeable admins so that I can
take this into production in the near future.


Thanks again.



On Wed, Jun 12, 2019 at 11:58 PM Strahil  wrote:

> Hi Adrian,
>
> Please keep in mind that when a server dies, the easiest way to recover
> is to get another freshly installed server with a different IP/FQDN.
> Then you will need to use 'replace-brick', and once Gluster replaces that
> node you should be able to remove the old entry in oVirt.
> Once the old entry is gone, you can add the new installation in oVirt via
> the UI.
>
> Another approach is to have the same IP/FQDN for the fresh install. In
> this situation, you need to have the same Gluster ID (which should be a
> text file) and the peer IDs. Most probably you can create them on your
> own, based on data from the other Gluster peers.
> Once the fresh install shows up in 'gluster peer', you can initiate a
> 'reset-brick' (don't forget to set up SELinux, the firewall and the
> repos) and a full heal.
> From there you can reinstall the machine from the UI and it should be
> available for usage.
>
> P.S.: I know that the whole procedure is not so easy :)
>
> Best Regards,
> Strahil Nikolov
> On Jun 12, 2019 19:02, Adrian Quintero  wrote:
>
> Strahil, I don't use the GUI that much; in this case I need to understand
> how it is all tied together if I want to move to production. As far as
> Gluster goes, I can do the administration through the CLI; however, my
> test environment was set up using gdeploy for a hyperconverged setup
> under oVirt.
> The initial setup was 3 servers with the same set of physical disks:
> sdb, sdc, sdd, sde (the last one used for caching, as it is an SSD)
>
> vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> *As you can see from the above, the engine volume is composed of hosts
> vmm10 (the initiating cluster server, now a dead server), vmm11 and vmm12,
> on block device /dev/sdb (a 100GB LV); the vmstore1 volume is also on
> /dev/sdb (a 2600GB LV).*
> /dev/mapper/gluster_vg_sdb-gluster_lv_engine   xfs
> 100G  2.0G   98G   2% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1 xfs
> 2.6T   35M  2.6T   1% /gluster_bricks/vmstore1
> /dev/mapper/gluster_vg_sdc-gluster_lv_data1xfs
> 2.7T  4.6G  2.7T   1% /gluster_bricks/data1
> /dev/mapper/gluster_vg_sdd-gluster_lv_data2xfs
> 2.7T  9.5G  2.7T   1% /gluster_bricks/data2
> vmm10.mydomain.com:/engine
> fuse.glusterfs  300G  9.2G  291G   4%
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
> vmm10.mydomain.com:/vmstore1
> fuse.glusterfs  5.1T   53G  5.1T   2%
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
> vmm10.mydomain.com:/data1
>  fuse.glusterfs  8.0T   95G  7.9T   2%
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data1
> vmm10.mydomain.com:/data2
>  fuse.glusterfs  8.0T  112G  7.8T   2%
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data2
>
>
>
>
> *before any issues I increased the size of the cluster and the gluster
> cluster with the following, creating 4 distributed replicated volumes
> (engine, vmstore1, data1, data2)*
>
> vmm13.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm13.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm13.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm13.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm14.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-12 Thread Strahil
Hi Adrian,

Please keep in mind that when a server dies, the easiest way to recover is to
get another freshly installed server with a different IP/FQDN.
Then you will need to use 'replace-brick', and once Gluster replaces that
node you should be able to remove the old entry in oVirt.
Once the old entry is gone, you can add the new installation in oVirt via the
UI.

Another approach is to have the same IP/FQDN for the fresh install. In this
situation, you need to have the same Gluster ID (which should be a text file)
and the peer IDs. Most probably you can create them on your own, based on
data from the other Gluster peers.
Once the fresh install shows up in 'gluster peer', you can initiate a
'reset-brick' (don't forget to set up SELinux, the firewall and the repos)
and a full heal.
From there you can reinstall the machine from the UI and it should be
available for usage.
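The same-IP/FQDN approach can be sketched roughly as below. The UUID shown is the dead node's ID as it appears in the peer list later in this thread; the file layout under /var/lib/glusterd is version-dependent, so verify every step against the Gluster documentation for replacing a failed node before running anything:

```shell
# On the freshly reinstalled node: stop glusterd and restore the old
# node UUID (visible in 'gluster peer status' on a surviving node);
# keep the operating-version line from the fresh install's glusterd.info
systemctl stop glusterd
echo "UUID=18385970-aba6-4fd1-85a6-1b13f663e60b" > /var/lib/glusterd/glusterd.info

# Copy the peer definitions (/var/lib/glusterd/peers/*) from a healthy
# node, excluding the file named after this node's own UUID, then:
systemctl start glusterd

# Once 'gluster peer status' looks healthy, reset each brick in place
gluster volume reset-brick engine \
    vmm10.mydomain.com:/gluster_bricks/engine/engine start
gluster volume reset-brick engine \
    vmm10.mydomain.com:/gluster_bricks/engine/engine \
    vmm10.mydomain.com:/gluster_bricks/engine/engine commit force

# Trigger a full heal so the empty brick is repopulated
gluster volume heal engine full
```

Repeat the reset-brick/heal pair for the vmstore1, data1 and data2 volumes.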

P.S.: I know that the whole procedure is not so easy :)

Best Regards,
Strahil Nikolov
On Jun 12, 2019 19:02, Adrian Quintero  wrote:
>
> Strahil, I don't use the GUI that much; in this case I need to understand
> how it is all tied together if I want to move to production. As far as
> Gluster goes, I can do the administration through the CLI; however, my test
> environment was set up using gdeploy for a hyperconverged setup under oVirt.
> The initial setup was 3 servers with the same set of physical disks: sdb,
> sdc, sdd, sde (the last one used for caching, as it is an SSD)
>
> vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> As you can see from the above, the engine volume is composed of hosts
> vmm10 (the initiating cluster server, now a dead server), vmm11 and vmm12,
> on block device /dev/sdb (a 100GB LV); the vmstore1 volume is also on
> /dev/sdb (a 2600GB LV).
> /dev/mapper/gluster_vg_sdb-gluster_lv_engine                   xfs            
>  100G  2.0G   98G   2% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1                 xfs            
>  2.6T   35M  2.6T   1% /gluster_bricks/vmstore1
> /dev/mapper/gluster_vg_sdc-gluster_lv_data1                    xfs            
>  2.7T  4.6G  2.7T   1% /gluster_bricks/data1
> /dev/mapper/gluster_vg_sdd-gluster_lv_data2                    xfs            
>  2.7T  9.5G  2.7T   1% /gluster_bricks/data2
> vmm10.mydomain.com:/engine                                       
> fuse.glusterfs  300G  9.2G  291G   4% 
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
> vmm10.mydomain.com:/vmstore1                                     
> fuse.glusterfs  5.1T   53G  5.1T   2% 
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
> vmm10.mydomain.com:/data1                                        
> fuse.glusterfs  8.0T   95G  7.9T   2% 
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data1
> vmm10.mydomain.com:/data2                                        
> fuse.glusterfs  8.0T  112G  7.8T   2% 
> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data2
>
>
>
> before any issues I increased the size of the cluster and the gluster cluster 
> with the following, creating 4 distributed replicated volumes (engine, 
> vmstore1, data1, data2)
>
> vmm13.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm13.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm13.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm13.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm14.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm14.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm14.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm14.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm15.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm15.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm15.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm15.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm16.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm16.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm16.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm16.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm17.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm17.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-12 Thread Adrian Quintero
Strahil, I don't use the GUI that much; in this case I need to understand
how it is all tied together if I want to move to production. As far as
Gluster goes, I can do the administration through the CLI; however, my test
environment was set up using gdeploy for a hyperconverged setup under oVirt.
The initial setup was 3 servers with the same set of physical disks:
sdb, sdc, sdd, sde (the last one used for caching, as it is an SSD)

vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

*As you can see from the above, the engine volume is composed of hosts
vmm10 (the initiating cluster server, now a dead server), vmm11 and vmm12,
on block device /dev/sdb (a 100GB LV); the vmstore1 volume is also on
/dev/sdb (a 2600GB LV).*
/dev/mapper/gluster_vg_sdb-gluster_lv_engine   xfs
100G  2.0G   98G   2% /gluster_bricks/engine
/dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1 xfs
2.6T   35M  2.6T   1% /gluster_bricks/vmstore1
/dev/mapper/gluster_vg_sdc-gluster_lv_data1xfs
2.7T  4.6G  2.7T   1% /gluster_bricks/data1
/dev/mapper/gluster_vg_sdd-gluster_lv_data2xfs
2.7T  9.5G  2.7T   1% /gluster_bricks/data2
vmm10.mydomain.com:/engine
fuse.glusterfs  300G  9.2G  291G   4%
/rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
vmm10.mydomain.com:/vmstore1
fuse.glusterfs  5.1T   53G  5.1T   2%
/rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
vmm10.mydomain.com:/data1
 fuse.glusterfs  8.0T   95G  7.9T   2%
/rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data1
vmm10.mydomain.com:/data2
 fuse.glusterfs  8.0T  112G  7.8T   2%
/rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data2




*before any issues I increased the size of the cluster and the gluster
cluster with the following, creating 4 distributed replicated volumes
(engine, vmstore1, data1, data2)*

vmm13.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm13.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm13.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm13.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

vmm14.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm14.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm14.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm14.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

vmm15.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm15.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm15.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm15.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

vmm16.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm16.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm16.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm16.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

vmm17.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm17.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm17.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm17.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2

vmm18.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
vmm18.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
vmm18.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
vmm18.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2


*With your first suggestion I don't think it is possible to recover, as I
will lose the engine if I stop the "engine" volume. It might be doable for
vmstore1, data1 and data2, but not for the engine.*
A) If you have space on another Gluster volume (or volumes) or on NFS-based
storage, you can migrate all VMs live. Once you do that, the simple way
will be to stop and remove the storage domain (from the UI) and the Gluster
volume that correspond to the problematic brick. Once gone, you can remove
the entry in oVirt for the old host and add the newly built one. Then you
can recreate your volume and migrate the data back.

*I tried removing the brick using CLI but get the following error:*
volume remove-brick start: failed: Host node of the brick
vmm10.mydomain.com:/gluster_bricks/engine/engine is down

*So I used the force command:*
gluster vol remove-brick engine
vmm10.mydomain.com:/gluster_bricks/engine/engine
 vmm11.mydomain.com:/gluster_bricks/engine/engine
vmm12.mydomain.com:/gluster_bricks/engine/engine
force
Remove-brick force 
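Rather than forcing the remove-brick, the replace-brick path Strahil described earlier maps to roughly the following, assuming the rebuilt host vmm102.mydomain.com already has an empty, mounted brick directory prepared (a sketch to adapt, not a verified recovery procedure):

```shell
# Swap the dead node's brick for a new one on the replacement host
gluster volume replace-brick engine \
    vmm10.mydomain.com:/gluster_bricks/engine/engine \
    vmm102.mydomain.com:/gluster_bricks/engine/engine \
    commit force

# Repopulate the new brick from the surviving replicas and watch progress
gluster volume heal engine full
gluster volume heal engine info
```

The same pattern would apply to the vmstore1, data1 and data2 volumes.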

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-11 Thread Strahil Nikolov
Do you have empty space to store the VMs? If yes, you can always script the
migration of the disks via the API. Even a bash script and curl can do the
trick.
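A bash-and-curl sketch of that idea against the oVirt v4 REST API; the engine URL, credentials and IDs below are placeholders, and the disk 'move' action should be checked against the API documentation for your oVirt version:

```shell
#!/bin/bash
# Move one disk to another storage domain via the oVirt REST API.
ENGINE="https://engine.mydomain.com/ovirt-engine/api"   # placeholder
CREDS="admin@internal:secret"                           # placeholder
DISK_ID="00000000-0000-0000-0000-000000000000"          # placeholder
TARGET_SD="11111111-1111-1111-1111-111111111111"        # placeholder

# POST the move action for a single disk
curl -k -u "$CREDS" \
    -H "Content-Type: application/xml" \
    -X POST "$ENGINE/disks/$DISK_ID/move" \
    -d "<action><storage_domain id=\"$TARGET_SD\"/></action>"
```

Looping this over the disk IDs returned by a GET on $ENGINE/disks would migrate a whole volume's worth of disks.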
About /dev/sdb, I still don't get it. A pure "df -hT" from a node will make
it much clearer. I guess /dev/sdb is a PV and you got 2 LVs on top of it.
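That layout question can be answered on any node with the standard tooling; a quick sketch using the device and VG names that appear elsewhere in this thread:

```shell
# Filesystem view: each brick mount with its type and usage
df -hT | grep -E '^Filesystem|gluster_bricks'

# LVM view: confirm /dev/sdb is one PV carrying two LVs (engine, vmstore1)
pvs -o pv_name,vg_name /dev/sdb
lvs -o lv_name,lv_size gluster_vg_sdb
```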
Note: I should admit that as an admin I don't use the UI for Gluster
management.
For now, do not try to remove the brick. The approach is either to migrate
the qemu disks to another storage, or to reset-brick/replace-brick in order
to restore the replica count. I will check the file and try to figure it out.
Redeployment never fixes the issue, it just speeds up the recovery. If you
can afford the time to spend on fixing the issue, then do not redeploy.
I would be able to take a look next week, but keep in mind that I'm not that
deep into oVirt; I only started playing with it when I deployed my lab.
Best Regards,
Strahil Nikolov
Strahil,

Looking at your suggestions I think I need to provide a bit more info on my
current setup.

- I have 9 hosts in total.
- I have 5 storage domains:
  - hosted_storage (Data Master)
  - vmstore1 (Data)
  - data1 (Data)
  - data2 (Data)
  - ISO (NFS) // had to create this one because oVirt 4.3.3.1 would not let
    me upload disk images to a data domain without an ISO (I think this is
    due to a bug)
- Each volume is of the type "Distributed Replicate" and each one is
  composed of 9 bricks. I started with 3 bricks per volume due to the
  initial hyperconverged setup, then I expanded the cluster and the Gluster
  cluster by 3 hosts at a time until I got to a total of 9 hosts.
- Disks, bricks and sizes used per volume:
  /dev/sdb  engine    100GB
  /dev/sdb  vmstore1  2600GB
  /dev/sdc  data1     2600GB
  /dev/sdd  data2     2600GB
  /dev/sde  400GB SSD, used for caching purposes

From the above layout a few questions came up:

- Using the web UI, how can I create a 100GB brick and a 2600GB brick to
  replace the bad bricks for "engine" and "vmstore1" within the same block
  device (sdb)? What about /dev/sde (the caching disk)? When I tried
  creating a new brick through the UI I saw that I could use /dev/sde for
  caching, but only for one brick (i.e. vmstore1), so if I try to create
  another brick, how would I specify that the same /dev/sde device is to be
  used for caching?

- If I want to remove a brick from a replica 3 volume, I go to Storage >
  Volumes > select the volume > Bricks; once in there I can select the 3
  servers that compose the replicated bricks and click remove. This gives a
  pop-up window with the following info:

  Are you sure you want to remove the following Brick(s)?
  - vmm11:/gluster_bricks/vmstore1/vmstore1
  - vmm12.virt.iad3p:/gluster_bricks/vmstore1/vmstore1
  - 192.168.0.100:/gluster-bricks/vmstore1/vmstore1
  - Migrate Data from the bricks?

  If I proceed with this, that means I will have to do it for all 4
  volumes; that is just not very efficient. But if that is the only way,
  then I am hesitant to put this into a real production environment, as
  there is no way I can take that kind of a hit for 500+ VMs :) and I also
  won't have that much storage or extra volumes to play with in a real
  scenario.

- After modifying /etc/vdsm/vdsm.id yesterday by following
  https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids
  I was able to add the server back to the cluster using a new FQDN and a
  new IP, and tested replacing one of the bricks. This is my mistake, as
  mentioned above: I used /dev/sdb entirely for 1 brick, because through
  the UI I could not split the block device to be used for 2 bricks (one
  for the engine and one for vmstore1). So in the "gluster vol info" output
  you might see vmm102.mydomain.com, but in reality it is
  myhost1.mydomain.com.

- I am also attaching gluster_peer_status.txt; in the last 2 entries of
  that file you will see an entry for vmm10.mydomain.com (old/bad entry)
  and vmm102.mydomain.com (new entry, same server as vmm10, but renamed to
  vmm102). Please also find the gluster_vol_info.txt file.

- I am ready to redeploy this environment if needed, but I am also ready to
  test any other suggestion. If I can get a good understanding of how to
  recover from this, I will be ready to move to production.

- Wondering if you'd be willing to have a look at my setup through a shared
  screen?

Thanks

Adrian
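The vdsm.id fix referenced in the linked blog post amounts to giving the reinstalled host a fresh VDSM identifier; a minimal sketch (run as root on the host before re-adding it in the UI, and treat it as an outline of the workaround rather than an official procedure):

```shell
# Generate a new host UUID so oVirt no longer sees a duplicate vdsm.id
uuidgen > /etc/vdsm/vdsm.id
cat /etc/vdsm/vdsm.id

# Restart VDSM so the new ID is picked up
systemctl restart vdsmd
```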

On Mon, Jun 10, 2019 at 11:41 PM Strahil  wrote:


Hi Adrian,

You have several options:
A) If you have space on another Gluster volume (or volumes) or on NFS-based
storage, you can migrate all VMs live. Once you do it, the simple way will be
to stop and remove the storage domain (from the UI) and the Gluster volume that 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-11 Thread Adrian Quintero
adding gluster pool list:
UUID                                  Hostname             State
2c86fa95-67a2-492d-abf0-54da625417f8  vmm12.mydomain.com   Connected
ab099e72-0f56-4d33-a16b-ba67d67bdf9d  vmm13.mydomain.com   Connected
c35ad74d-1f83-4032-a459-079a27175ee4  vmm14.mydomain.com   Connected
aeb7712a-e74e-4492-b6af-9c266d69bfd3  vmm17.mydomain.com   Connected
4476d434-d6ff-480f-b3f1-d976f642df9c  vmm16.mydomain.com   Connected
22ec0c0a-a5fc-431c-9f32-8b17fcd80298  vmm15.mydomain.com   Connected
caf84e9f-3e03-4e6f-b0f8-4c5ecec4bef6  vmm18.mydomain.com   Connected
18385970-aba6-4fd1-85a6-1b13f663e60b  vmm10.mydomain.com   Disconnected  // server that went bad
b152fd82-8213-451f-93c6-353e96aa3be9  vmm102.mydomain.com  Connected     // vmm10 but with a different name
228a9282-c04e-4229-96a6-67cb47629892  localhost            Connected
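Once the dead node's bricks have been replaced or removed, the stale pool entry can be cleared; a hedged sketch (Gluster will refuse to detach a peer that still hosts bricks, so this only works after the bricks are gone):

```shell
# Confirm the stale entry, then detach it from the trusted storage pool
gluster pool list
gluster peer detach vmm10.mydomain.com

# If the peer is unreachable, a forced detach may be needed
gluster peer detach vmm10.mydomain.com force
```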

On Tue, Jun 11, 2019 at 11:24 AM Adrian Quintero wrote:

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-11 Thread Adrian Quintero
Strahil,

Looking at your suggestions I think I need to provide a bit more info on my
current setup.



   1. I have 9 hosts in total.
   2. I have 5 storage domains:
      - hosted_storage (Data Master)
      - vmstore1 (Data)
      - data1 (Data)
      - data2 (Data)
      - ISO (NFS) // had to create this one because oVirt 4.3.3.1 would not
        let me upload disk images to a data domain without an ISO (I think
        this is due to a bug)
   3. Each volume is of the type "Distributed Replicate" and each one is
      composed of 9 bricks. I started with 3 bricks per volume due to the
      initial hyperconverged setup, then I expanded the cluster and the
      gluster cluster by 3 hosts at a time until I got to a total of 9 hosts.


   Disks, bricks and sizes used per volume:

      /dev/sdb  engine    100GB
      /dev/sdb  vmstore1  2600GB
      /dev/sdc  data1     2600GB
      /dev/sdd  data2     2600GB
      /dev/sde  400GB SSD, used for caching purposes

   From the above layout a few questions came up:

   1. Using the web UI, how can I create a 100GB brick and a 2600GB brick to
      replace the bad bricks for "engine" and "vmstore1" within the same
      block device (sdb)? And what about /dev/sde (the caching disk)? When I
      tried creating a new brick through the UI I saw that I could use
      /dev/sde for caching, but only for 1 brick (i.e. vmstore1), so if I try
      to create another brick how would I specify that the same /dev/sde
      device is to be used for caching?



   1. If I want to remove a brick from a replica 3 volume, I go to Storage >
      Volumes > select the volume > Bricks; once there I can select the 3
      servers that compose the replicated sub-volume and click Remove, which
      gives a pop-up window with the following info:

   Are you sure you want to remove the following Brick(s)?
   - vmm11:/gluster_bricks/vmstore1/vmstore1
   - vmm12.virt.iad3p:/gluster_bricks/vmstore1/vmstore1
   - 192.168.0.100:/gluster-bricks/vmstore1/vmstore1
   - Migrate Data from the bricks?

   If I proceed with this, that means I will have to do it for all 4
   volumes, which is just not very efficient. If that is the only way, then
   I am hesitant to put this into a real production environment, as there is
   no way I can take that kind of a hit for 500+ VMs :) and also I won't have
   that much storage or extra volumes to play with in a real scenario.

   2. After modifying /etc/vdsm/vdsm.id yesterday by following
      https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids
      I was able to add the server back to the cluster using a new FQDN and
      a new IP, and tested replacing one of the bricks. This is my mistake:
      as mentioned in #3 above, I used /dev/sdb entirely for 1 brick, because
      through the UI I could not split the block device to be used for 2
      bricks (one for the engine and one for vmstore1). So in the "gluster
      vol info" you might see vmm102.mydomain.com, but in reality it is
      myhost1.mydomain.com.
   3. I am also attaching gluster_peer_status.txt; in the last 2 entries of
      that file you will see an entry for vmm10.mydomain.com (old/bad entry)
      and vmm102.mydomain.com (new entry, same server vmm10, but renamed to
      vmm102). Also please find the gluster_vol_info.txt file.
   4. I am ready to redeploy this environment if needed, but I am also ready
      to test any other suggestion. If I can get a good understanding of how
      to recover from this, I will be ready to move to production.
   5. Wondering if you'd be willing to have a look at my setup through a
      shared screen?

Thanks,

Adrian

On Mon, Jun 10, 2019 at 11:41 PM Strahil  wrote:


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Strahil
Hi Adrian,

You have several options:
A) If you have space on another gluster volume (or volumes) or on NFS-based 
storage, you can migrate all VMs live . Once you do it,  the simple way will be 
to stop and remove the storage domain (from UI) and gluster volume that 
correspond to the problematic brick. Once gone, you can  remove the entry in 
oVirt for the old host and add the newly built one.Then you can recreate your 
volume and migrate the data back.

B) If you don't have space, you have to use a riskier approach (usually it 
shouldn't be risky, but I had a bad experience in gluster v3):
- New server has same IP and hostname:
Use command line and run the 'gluster volume reset-brick VOLNAME 
HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit'
Replace VOLNAME with your volume name.
A more practical example would be:
'gluster volume reset-brick data ovirt3:/gluster_bricks/data/brick 
ovirt3:/gluster_bricks/data/brick commit'

If it refuses, then you have to cleanup '/gluster_bricks/data' (which should be 
empty).
Also check that the new peer has been probed via 'gluster peer status'. Check 
that the firewall is allowing gluster communication (you can compare it to the 
firewall rules on another gluster host).


The automatic healing will kick in after about 10 minutes (if the reset 
succeeds) and will stress the other 2 replicas, so pick your time properly.
Note: I'm not recommending you to use the 'force' option in the previous 
command ... for now :)
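In shell form, the same-IP/FQDN sequence above looks roughly like this. This is a dry-run sketch that only builds and prints the commands; the volume name, hostname, and brick path are the illustrative ones used in this thread, not values from a real setup:

```shell
# Dry-run sketch: print the same-hostname recovery commands instead of
# executing them. VOLNAME/HOST/BRICK are illustrative placeholders.
VOLNAME=data
HOST=ovirt3
BRICK=/gluster_bricks/data/brick

# 1) Confirm the rebuilt peer is visible before touching any bricks.
CHECK="gluster peer status"

# 2) Re-register the brick on the rebuilt node (no 'force' yet).
RESET="gluster volume reset-brick $VOLNAME $HOST:$BRICK $HOST:$BRICK commit"

# 3) Verify the volume afterwards.
VERIFY="gluster volume info $VOLNAME"

printf '%s\n' "$CHECK" "$RESET" "$VERIFY"
```

If the commit is refused, clean out the (empty) brick directory first, as noted above, and re-run the reset-brick step.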

- The new server has a different IP/hostname:
Instead of 'reset-brick' you can use  'replace-brick':
It should be like this:
gluster volume replace-brick data old-server:/path/to/brick 
new-server:/new/path/to/brick commit force
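For the different-IP/FQDN case, a dry-run sketch of the replace-brick path; the hostnames and brick paths below mirror the vmm10 -> vmm102 example from this thread and are placeholders, not verified values:

```shell
# Dry-run sketch: build and print the replace-brick command for a rebuilt
# node registered under a new FQDN. All names are thread placeholders.
VOLNAME=vmstore1
OLD=vmm10.mydomain.com:/gluster_bricks/vmstore1/vmstore1
NEW=vmm102.mydomain.com:/gluster_bricks/vmstore1/vmstore1

REPLACE="gluster volume replace-brick $VOLNAME $OLD $NEW commit force"
HEAL="gluster volume heal $VOLNAME info"  # watch the new replica catch up

printf '%s\n' "$REPLACE" "$HEAL"
```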

In both cases check the status via:
gluster volume info VOLNAME

If your cluster is in production , I really recommend you the first option as 
it is less risky and the chance for unplanned downtime will be minimal.

The 'reset-brick' error in your previous e-mail shows that one of the servers 
is not connected. Check peer status on all servers; if there are fewer peers 
than there should be, check for network and/or firewall issues.
On the new node check if glusterd is enabled and running.

In order to debug - you should provide more info like 'gluster volume info' and 
the peer status from each node.

Best Regards,
Strahil Nikolov

On Jun 10, 2019 20:10, Adrian Quintero  wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7SZKSXEWVJC6UNU7GOEYXURXERGZCQ2Y/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
Thanks for pointing me in the right direction, I was able to add the server
to the cluster by adding /etc/vdsm/vdsm.id
I will now try to create the new bricks and try a replacement brick, this
part I think I will have to do thru command line because my Hyperconverged
setup with a replica 3 is as follows:
/dev/sdb = /gluster_bricks/engine    100G
/dev/sdb = /gluster_bricks/vmstore1  2600G

/dev/sdc = /gluster_bricks/data1  2700G
/dev/sdd = /gluster_bricks/data2  2700G

/dev/sde = caching disk.

The issue I see here is that I don't see an option through the web UI to
create 2 bricks on the same /dev/sdb (one of 100GB for the engine and one
of 2600GB for vmstore1).

So if you have any ideas they are most welcome.

thanks again.
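Since the UI carves only one brick per device, one workaround is to partition the disk with LVM from the command line and point each brick at its own logical volume. The sketch below is a dry run under assumed names (gluster_vg, engine_lv, vmstore1_lv are invented here, not from the thread); note that an lvmcache cache pool attaches to a single LV, which is consistent with the one-brick-only caching limit observed in the UI:

```shell
# Dry-run sketch: print an assumed LVM layout that splits /dev/sdb into
# two bricks and caches one of them with the /dev/sde SSD. Names invented.
for c in \
  "pvcreate /dev/sdb /dev/sde" \
  "vgcreate gluster_vg /dev/sdb /dev/sde" \
  "lvcreate -L 100G -n engine_lv gluster_vg /dev/sdb" \
  "lvcreate -L 2600G -n vmstore1_lv gluster_vg /dev/sdb" \
  "lvcreate --type cache-pool -L 400G -n ssd_cache gluster_vg /dev/sde" \
  "lvconvert --type cache --cachepool gluster_vg/ssd_cache gluster_vg/vmstore1_lv" \
  "mkfs.xfs -i size=512 /dev/gluster_vg/engine_lv" \
  "mkfs.xfs -i size=512 /dev/gluster_vg/vmstore1_lv"
do
  printf '%s\n' "$c"
done
```

After mounting the two filesystems under /gluster_bricks, the resulting directories could be used as separate bricks for engine and vmstore1.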

On Mon, Jun 10, 2019 at 4:35 PM Leo David  wrote:

> https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids
>
> On Mon, Jun 10, 2019, 18:13 Adrian Quintero 
> wrote:
>
>> Ok I have tried reinstalling the server from scratch with a different
>> name and IP address and when trying to add it to cluster I get the
>> following error:
>>
>> Event details
>> ID: 505
>> Time: Jun 10, 2019, 10:00:00 AM
>> Message: Host myshost2.virt.iad3p installation failed. Host
>> myhost2.mydomain.com reports unique id which already registered for
>> myhost1.mydomain.com
>>
>> I am at a loss here, I don't have a brand new server to do this and in
>> need to re-use what I have.
>>
>>
>> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
>> 2019-06-10 10:57:59,950-04 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
>> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
>> Host myhost2.mydomain.com reports unique id which already registered for
>> myhost1.mydomain.com
>>
>> So in the
>> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
>> of the ovirt engine I see that the host deploy is running  the following
>> command to identify the system, if this is the case then it will never work
>> :( because it identifies each host using the system uuid.
>>
>> *dmidecode -s system-uuid*
>> b64d566e-055d-44d4-83a2-d3b83f25412e
>>
>>
>> Any suggestions?
>>
>> Thanks
>>
>> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
>> wrote:
>>
>>> Leo,
>>> I did try putting it under maintenance and checking to ignore gluster
>>> and it did not work.
>>> Error while executing action:
>>> -Cannot remove host. Server having gluster volume.
>>>
>>> Note: the server was already reinstalled so gluster will never see the
>>> volumes or bricks for this server.
>>>
>>> I will rename the server to myhost2.mydomain.com and try to replace the
>>> bricks hopefully that might work, however it would be good to know that you
>>> can re-install from scratch an existing cluster server and put it back to
>>> the cluster.
>>>
>>> Still doing research hopefully we can find a way.
>>>
>>> thanks again
>>>
>>> Adrian
>>>
>>>
>>>
>>>
>>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>>
 You will need to remove the storage role from that server first (not
 being part of the gluster cluster).
 I cannot test this right now on production, but maybe putting the host,
 although it has already died, under "maintenance" while checking to ignore
 the gluster warning will let you remove it.
 Maybe I am wrong about the procedure; can anybody offer advice to help
 with this situation?
 Cheers,

 Leo




 On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> I tried removing the bad host but running into the following issue ,
> any idea?
> Operation Canceled
> Error while executing action:
>
> host1.mydomain.com
>
>- Cannot remove Host. Server having Gluster volume.
>
>
>
>
> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
>> wondering how that setup should be achieved?
>>
>> thanks,
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
>> adrianquint...@gmail.com> wrote:
>>
>>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>>
>>> Will test tomorrow and post the results.
>>>
>>> Thanks again
>>>
>>> Adrian
>>>
>>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>>
 Hi Adrian,
 I think the steps are:
 - reinstall the host
 - join it to virtualisation cluster
 And if was member of gluster cluster as well:
 - go to host - storage devices
 - create the bricks on the devices - as they are on the other hosts
 - go to storage - volumes
 - replace each failed brick with the corresponding new 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Leo David
https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids

On Mon, Jun 10, 2019, 18:13 Adrian Quintero wrote:

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Leo David
Hi, I think you can generate and use a new UUID, although I am not sure
about the procedure right now.
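A sketch of that idea (an assumed approach in line with the blog post linked elsewhere in this thread; verify before using): generate a fresh UUID and write it to /etc/vdsm/vdsm.id on the reinstalled host, so the engine no longer matches it against the old host's hardware UUID. The snippet writes to a temporary path so it is safe to run anywhere:

```shell
# Generate a fresh host id. The real target on an oVirt host would be
# /etc/vdsm/vdsm.id (written as root); /tmp is used here for safety.
NEW_UUID=$(uuidgen)
printf '%s\n' "$NEW_UUID" > /tmp/vdsm.id.example
echo "generated id: $NEW_UUID"
```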

On Mon, Jun 10, 2019, 18:13 Adrian Quintero wrote:

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
Can you let me know how to fix the gluster and missing brick?
I tried removing it by going to Storage > Volumes > vmstore > Bricks and
selecting the brick.
However it is showing as an unknown status (which is expected because the
server was completely wiped), so if I try to "remove", "replace brick" or
"reset brick" it won't work.
If I do remove brick: Incorrect bricks selected for removal in Distributed
Replicate volume. Either all the selected bricks should be from the same
sub volume or one brick each for every sub volume!
If I try "replace brick" I can't, because I don't have another server with
extra bricks/disks.
And if I try "reset brick": Error while executing action Start Gluster
Volume Reset Brick: Volume reset brick commit force failed: rc=-1 out=()
err=['Host myhost1_mydomain_com  not connected']

Are you suggesting to try and fix the gluster using command line?

Note that I can't "peer detach" the server, so if I force the removal of
the bricks, would I need to force a downgrade to replica 2 instead of 3?
What would happen to oVirt, as it only supports replica 3?

thanks again.

On Mon, Jun 10, 2019 at 12:52 PM Strahil  wrote:

> Hi Adrian,
> Did you fix the issue with the gluster and the missing brick?
> If yes, try to set the 'old' host in maintenance and then forcefully
> remove it from oVirt.
> If it succeeds (and it should), then you can add the server back and then
> check what happens.
>
> Best Regards,
> Strahil Nikolov
> On Jun 10, 2019 18:12, Adrian Quintero wrote:

-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PF7KEND4VZAZ4QF34LDF6YEWJRQ2R52Y/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Dmitry Filonov
At this point I'd go to engine VM and remove host from the postgres DB
manually.
A bit of a hack, but...

ssh root@
su - postgres
cd /opt/rh/rh-postgresql10/
source enable
psql engine
select vds_id from vds_static where host_name='myhost1.mydomain.com';
select DeleteVds('');

Of course, keep in mind that editing database directly is the last resort
and not supported in any way.

--
Dmitry Filonov
Linux Administrator
SBGrid Core | Harvard Medical School
250 Longwood Ave, SGM-114
Boston, MA 02115


On Mon, Jun 10, 2019 at 11:16 AM Adrian Quintero 
wrote:

> Ok I have tried reinstalling the server from scratch with a different name
> and IP address and when trying to add it to cluster I get the following
> error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here, I don't have a brand new server to do this and in
> need to re-use what I have.
>
>
> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
> of the ovirt engine I see that the host deploy is running  the following
> command to identify the system, if this is the case then it will never work
> :( because it identifies each host using the system uuid.
>
> *dmidecode -s system-uuid*
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
> wrote:
>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and
>> it did not work.
>> Error while executing action:
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks hopefully that might work, however it would be good to know that you
>> can re-install from scratch an existing cluster server and put it back to
>> the cluster.
>>
>> Still doing research hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>
>>> You will need to remove the storage role from that server first (so it is
>>> no longer part of the gluster cluster).
>>> I cannot test this right now on production, but maybe putting the host
>>> (although it already died) under "maintenance" while checking to ignore the
>>> gluster warning will let you remove it.
>>> Maybe I am wrong about the procedure, can anybody offer advice to help
>>> with this situation?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>>> wrote:
>>>
 I tried removing the bad host but running into the following issue ,
 any idea?
 Operation Canceled
 Error while executing action:

 host1.mydomain.com

- Cannot remove Host. Server having Gluster volume.




 On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
> wondering how that setup should be achieved?
>
> thanks,
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>
>> Will test tomorrow and post the results.
>>
>> Thanks again
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>
>>> Hi Adrian,
>>> I think the steps are:
>>> - reinstall the host
>>> - join it to virtualisation cluster
>>> And if was member of gluster cluster as well:
>>> - go to host - storage devices
>>> - create the bricks on the devices - as they are on the other hosts
>>> - go to storage - volumes
>>> - replace each failed brick with the corresponding new one.
>>> Hope it helps.
>>> Cheers,
>>> Leo
>>>
>>>
>>> On Wed, Jun 5, 2019, 23:09  wrote:
>>>
 Anybody have had to replace a failed host from a 3, 6, or 9 node
 hyperconverged setup with gluster storage?

 One of my hosts is completely dead, I need to do a fresh install
 using ovirt node iso, can anybody point me to the proper steps?

 thanks,
 ___
 Users mailing list -- users@ovirt.org

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Strahil
Hi Adrian,
Did you fix the issue with the gluster and the missing brick?
If yes, try to set the 'old' host in maintenance and then forcefully remove it 
from oVirt.
If it succeeds (and it should), then you can add the server back and then check 
what happens.

Best Regards,
Strahil Nikolov

On Jun 10, 2019 18:12, Adrian Quintero wrote:
>
> Ok I have tried reinstalling the server from scratch with a different name 
> and IP address and when trying to add it to cluster I get the following error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host 
> myhost2.mydomain.com reports unique id which already registered for 
> myhost1.mydomain.com
>
> I am at a loss here, I don't have a brand new server to do this and in need 
> to re-use what I have. 
>
>
> From the oVirt engine log (/var/log/ovirt-engine/engine.log): 
> 2019-06-10 10:57:59,950-04 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID: 
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed. Host 
> myhost2.mydomain.com reports unique id which already registered for 
> myhost1.mydomain.com
>
> So in the 
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
>  of the ovirt engine I see that the host deploy is running  the following 
> command to identify the system, if this is the case then it will never work 
> :( because it identifies each host using the system uuid.
>
> dmidecode -s system-uuid
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero  
> wrote:
>>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and it 
>> did not work.
>> Error while executing action: 
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the 
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the 
>> bricks hopefully that might work, however it would be good to know that you 
>> can re-install from scratch an existing cluster server and put it back to 
>> the cluster.
>>
>> Still doing research hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>>
>>> You will need to remove the storage role from that server first (so it is
>>> no longer part of the gluster cluster).
>>> I cannot test this right now on production, but maybe putting the host
>>> (although it already died) under "maintenance" while checking to ignore the
>>> gluster warning will let you remove it.
>>> Maybe I am wrong about the procedure, can anybody offer advice to help
>>> with this situation?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero  
>>> wrote:

 I tried removing the bad host but running into the following issue , any 
 idea?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNJ3SAOTYFG5YGWL6USSEFS6PL2DSZKU/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
Ok I have tried reinstalling the server from scratch with a different name
and IP address and when trying to add it to cluster I get the following
error:

Event details
ID: 505
Time: Jun 10, 2019, 10:00:00 AM
Message: Host myshost2.virt.iad3p installation failed. Host
myhost2.mydomain.com reports unique id which already registered for
myhost1.mydomain.com

I am at a loss here; I don't have a brand new server to do this and need
to re-use what I have.


*From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
2019-06-10 10:57:59,950-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
Host myhost2.mydomain.com reports unique id which already registered for
myhost1.mydomain.com

So in the
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
of the ovirt engine I see that the host deploy is running the following
command to identify the system; if this is the case then it will never work
:( because it identifies each host using the system uuid.

*dmidecode -s system-uuid*
b64d566e-055d-44d4-83a2-d3b83f25412e


Any suggestions?

Thanks

On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
wrote:

> Leo,
> I did try putting it under maintenance and checking to ignore gluster and
> it did not work.
> Error while executing action:
> -Cannot remove host. Server having gluster volume.
>
> Note: the server was already reinstalled so gluster will never see the
> volumes or bricks for this server.
>
> I will rename the server to myhost2.mydomain.com and try to replace the
> bricks hopefully that might work, however it would be good to know that you
> can re-install from scratch an existing cluster server and put it back to
> the cluster.
>
> Still doing research hopefully we can find a way.
>
> thanks again
>
> Adrian
>
>
>
>
> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>
>> You will need to remove the storage role from that server first (so it is
>> no longer part of the gluster cluster).
>> I cannot test this right now on production, but maybe putting the host
>> (although it already died) under "maintenance" while checking to ignore the
>> gluster warning will let you remove it.
>> Maybe I am wrong about the procedure, can anybody offer advice to help
>> with this situation?
>> Cheers,
>>
>> Leo
>>
>>
>>
>>
>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>> wrote:
>>
>>> I tried removing the bad host but running into the following issue , any
>>> idea?
>>> Operation Canceled
>>> Error while executing action:
>>>
>>> host1.mydomain.com
>>>
>>>- Cannot remove Host. Server having Gluster volume.
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
>>> adrianquint...@gmail.com> wrote:
>>>
 Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
 wondering how that setup should be achieved?

 thanks,

 Adrian

 On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>
> Will test tomorrow and post the results.
>
> Thanks again
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>
>> Hi Adrian,
>> I think the steps are:
>> - reinstall the host
>> - join it to virtualisation cluster
>> And if was member of gluster cluster as well:
>> - go to host - storage devices
>> - create the bricks on the devices - as they are on the other hosts
>> - go to storage - volumes
>> - replace each failed brick with the corresponding new one.
>> Hope it helps.
>> Cheers,
>> Leo
>>
>>
>> On Wed, Jun 5, 2019, 23:09  wrote:
>>
>>> Anybody have had to replace a failed host from a 3, 6, or 9 node
>>> hyperconverged setup with gluster storage?
>>>
>>> One of my hosts is completely dead, I need to do a fresh install
>>> using ovirt node iso, can anybody point me to the proper steps?
>>>
>>> thanks,
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/
>>>
>> --
> Adrian Quintero
>


 --
 Adrian Quintero

>>>
>>>
>>> --
>>> Adrian Quintero
>>>
>>
>>
>> --
>> Best regards, Leo David
>>
>
>
> --
> Adrian Quintero
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-08 Thread Adrian Quintero
Leo,
I did try putting it under maintenance and checking to ignore gluster and
it did not work.
Error while executing action:
-Cannot remove host. Server having gluster volume.

Note: the server was already reinstalled so gluster will never see the
volumes or bricks for this server.

I will rename the server to myhost2.mydomain.com and try to replace the
bricks hopefully that might work, however it would be good to know that you
can re-install from scratch an existing cluster server and put it back to
the cluster.
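
The brick replacement itself can be scripted; a hedged sketch with placeholder volume and brick paths (only printed here, since the real command must run against a live gluster cluster):

```shell
# Placeholder names throughout: the volume "data" and the brick paths are
# illustrative, not taken from this cluster. Printed rather than executed.
volume="data"
old_brick="myhost1.mydomain.com:/gluster_bricks/data/data"
new_brick="myhost2.mydomain.com:/gluster_bricks/data/data"
cmd="gluster volume replace-brick $volume $old_brick $new_brick commit force"
echo "$cmd"
```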

Still doing research hopefully we can find a way.

thanks again

Adrian




On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:

> You will need to remove the storage role from that server first (so it is
> no longer part of the gluster cluster).
> I cannot test this right now on production, but maybe putting the host
> (although it already died) under "maintenance" while checking to ignore the
> gluster warning will let you remove it.
> Maybe I am wrong about the procedure, can anybody offer advice to help
> with this situation?
> Cheers,
>
> Leo
>
>
>
>
> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
> wrote:
>
>> I tried removing the bad host but running into the following issue , any
>> idea?
>> Operation Canceled
>> Error while executing action:
>>
>> host1.mydomain.com
>>
>>- Cannot remove Host. Server having Gluster volume.
>>
>>
>>
>>
>> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero 
>> wrote:
>>
>>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
>>> wondering how that setup should be achieved?
>>>
>>> thanks,
>>>
>>> Adrian
>>>
>>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
>>> adrianquint...@gmail.com> wrote:
>>>
 Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.

 Will test tomorrow and post the results.

 Thanks again

 Adrian

 On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:

> Hi Adrian,
> I think the steps are:
> - reinstall the host
> - join it to virtualisation cluster
> And if was member of gluster cluster as well:
> - go to host - storage devices
> - create the bricks on the devices - as they are on the other hosts
> - go to storage - volumes
> - replace each failed brick with the corresponding new one.
> Hope it helps.
> Cheers,
> Leo
>
>
> On Wed, Jun 5, 2019, 23:09  wrote:
>
>> Anybody have had to replace a failed host from a 3, 6, or 9 node
>> hyperconverged setup with gluster storage?
>>
>> One of my hosts is completely dead, I need to do a fresh install
>> using ovirt node iso, can anybody point me to the proper steps?
>>
>> thanks,
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/
>>
> --
 Adrian Quintero

>>>
>>>
>>> --
>>> Adrian Quintero
>>>
>>
>>
>> --
>> Adrian Quintero
>>
>
>
> --
> Best regards, Leo David
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/636DDIHCY5J4KPFE2G54KC53GIQR7R5R/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-07 Thread Leo David
You will need to remove the storage role from that server first (so it is no
longer part of the gluster cluster).
I cannot test this right now on production, but maybe putting the host
(although it already died) under "maintenance" while checking to ignore the
gluster warning will let you remove it.
Maybe I am wrong about the procedure, can anybody offer advice to help with
this situation?
Cheers,

Leo




On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
wrote:

> I tried removing the bad host but running into the following issue , any
> idea?
> Operation Canceled
> Error while executing action:
>
> host1.mydomain.com
>
>- Cannot remove Host. Server having Gluster volume.
>
>
>
>
> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero 
> wrote:
>
>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
>> wondering how that setup should be achieved?
>>
>> thanks,
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero 
>> wrote:
>>
>>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>>
>>> Will test tomorrow and post the results.
>>>
>>> Thanks again
>>>
>>> Adrian
>>>
>>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>>
 Hi Adrian,
 I think the steps are:
 - reinstall the host
 - join it to virtualisation cluster
 And if was member of gluster cluster as well:
 - go to host - storage devices
 - create the bricks on the devices - as they are on the other hosts
 - go to storage - volumes
 - replace each failed brick with the corresponding new one.
 Hope it helps.
 Cheers,
 Leo


 On Wed, Jun 5, 2019, 23:09  wrote:

> Anybody have had to replace a failed host from a 3, 6, or 9 node
> hyperconverged setup with gluster storage?
>
> One of my hosts is completely dead, I need to do a fresh install using
> ovirt node iso, can anybody point me to the proper steps?
>
> thanks,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/
>
 --
>>> Adrian Quintero
>>>
>>
>>
>> --
>> Adrian Quintero
>>
>
>
> --
> Adrian Quintero
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QNYP3FKNBF6QPV46R5L3LRBWTTIC3OHO/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Dmitry Filonov
Can you remove bricks that belong to a fried server? Either from a GUI or
CLI
You should be able to do so and then it should allow you to remove host
from the oVirt setup.
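
On the CLI side, a hedged sketch with placeholder names (printed only, not executed: for a replica 3 volume a single dead brick can't be dropped on its own, so one option among several is shrinking that set to replica 2 first):

```shell
# Placeholder volume/brick names; shown as a printed command only.
# Dropping a single brick from a replica 3 volume means reducing the
# replica count, hence "replica 2" in the command.
volume="data"
dead_brick="mybadhost.mydomain.com:/gluster_bricks/data/data"
cmd="gluster volume remove-brick $volume replica 2 $dead_brick force"
echo "$cmd"
```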



--
Dmitry Filonov
Linux Administrator
SBGrid Core | Harvard Medical School
250 Longwood Ave, SGM-114
Boston, MA 02115


On Thu, Jun 6, 2019 at 4:36 PM  wrote:

> Definitely is a challenge trying to replace a bad host.
>
> So let me tell you what I see and have done so far:
>
> 1.-I have a host that went bad due to HW issues.
> 2.-This bad host is still showing in the compute --> hosts section.
> 3.-This host was part of a hyperconverged setup with Gluster.
> 4.-The gluster bricks for this server show up with a "?" mark inside the
> volumes under Storage ---> Volumes ---> Myvolume ---> bricks
> 5.-Under Compute ---> Hosts --> mybadhost.mydomain.com the host  is in
> maintenance mode.
> 6.-When I try to remove that host (with "Force REmove" ticked) I keep
> getting:
> Operation Canceled
>  Error while executing action:
> mybadhost.mydomain.com
> - Cannot remove Host. Server having Gluster volume.
> Note: I have also confirmed "host has been rebooted"
>
> Since the bad host was not recoverable (it was fried), I took a brand new
> server with the same specs and installed oVirt 4.3.3 on it and have it
> ready to add it to the cluster with the same hostname and IP but I cant do
> this until I remove the old entries on the WEB UI of the Hosted Engine VM.
>
> If this is not possible would I really need to add this new host with a
> different name and IP?
> What would be the correct and best procedure to fix this?
>
> Note that my setup is a 9 node setup with hyperconverged and replica 3
> bricks and  in a  distributed replicated volume scenario.
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/N4HFTCWNFTOJJ34VSBHY5NKK5ZQAEDB7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ADQBVAZ3RGDIG5SRODOVJBZUOEMAC3Z/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Edward Berger
I'll presume you didn't fully back up the root file system of the host that
was fried.

It may be easier to replace with a new hostname/IP.
I would focus on the gluster config first, since it was hyperconverged.

I don't know which mechanism the engine UI uses to detect the gluster mount
on the missing host and decide not to remove the old host.
You probably also have the storage domain mounted in the data center, with
backup volume servers pointing at the old host's details.
The remaining gluster peers also notice the outage, and it could be
detecting that.

I would try to make the gluster changes first, so maybe the engine UI will
allow you to remove the old hyperconverged host entry.
(The engine UI is really trying to protect your gluster data.)
I'd try changing the mount options; there is also a way to tell gluster to
only use two hosts and stop trying to connect to the third, but I don't
remember the details.
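
The mount-option change Edward mentions is likely the backup-volfile-servers option on the storage domain; a hedged sketch with placeholder hostnames (this is the value that would go into the domain's "Mount Options" field, shown here only as a string):

```shell
# Placeholder hostnames; backup-volfile-servers is a real glusterfs fuse
# mount option, but whether it is the one Edward means is an assumption.
surviving_peers="myhost2.mydomain.com:myhost3.mydomain.com"
mount_opts="backup-volfile-servers=$surviving_peers"
echo "$mount_opts"
```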

On Thu, Jun 6, 2019 at 4:32 PM  wrote:

> Definitely is a challenge trying to replace a bad host.
>
> So let me tell you what I see and have done so far:
>
> 1.-I have a host that went bad due to HW issues.
> 2.-This bad host is still showing in the compute --> hosts section.
> 3.-This host was part of a hyperconverged setup with Gluster.
> 4.-The gluster bricks for this server show up with a "?" mark inside the
> volumes under Storage ---> Volumes ---> Myvolume ---> bricks
> 5.-Under Compute ---> Hosts --> mybadhost.mydomain.com the host  is in
> maintenance mode.
> 6.-When I try to remove that host (with "Force REmove" ticked) I keep
> getting:
> Operation Canceled
>  Error while executing action:
> mybadhost.mydomain.com
> - Cannot remove Host. Server having Gluster volume.
> Note: I have also confirmed "host has been rebooted"
>
> Since the bad host was not recoverable (it was fried), I took a brand new
> server with the same specs and installed oVirt 4.3.3 on it and have it
> ready to add it to the cluster with the same hostname and IP but I cant do
> this until I remove the old entries on the WEB UI of the Hosted Engine VM.
>
> If this is not possible would I really need to add this new host with a
> different name and IP?
> What would be the correct and best procedure to fix this?
>
> Note that my setup is a 9 node setup with hyperconverged and replica 3
> bricks and  in a  distributed replicated volume scenario.
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/N4HFTCWNFTOJJ34VSBHY5NKK5ZQAEDB7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BPDZYAPS4LEKVFFNHKEXK4H4LQO5LOL6/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread adrianquintero
Replacing a bad host is definitely a challenge.

So let me tell you what I see and have done so far:

1.-I have a host that went bad due to HW issues.
2.-This bad host is still showing in the compute --> hosts section.
3.-This host was part of a hyperconverged setup with Gluster.
4.-The gluster bricks for this server show up with a "?" mark inside the 
volumes under Storage ---> Volumes ---> Myvolume ---> bricks
5.-Under Compute ---> Hosts --> mybadhost.mydomain.com the host  is in 
maintenance mode.
6.-When I try to remove that host (with "Force Remove" ticked) I keep getting:
Operation Canceled
 Error while executing action: 
mybadhost.mydomain.com
- Cannot remove Host. Server having Gluster volume.
Note: I have also confirmed "host has been rebooted"

Since the bad host was not recoverable (it was fried), I took a brand new 
server with the same specs and installed oVirt 4.3.3 on it and have it ready to 
add it to the cluster with the same hostname and IP, but I can't do this until I 
remove the old entries on the WEB UI of the Hosted Engine VM.

If this is not possible, would I really need to add this new host with a 
different name and IP?
What would be the correct and best procedure to fix this?

Note that my setup is a 9-node hyperconverged setup with replica 3 bricks in 
a distributed-replicated volume scenario.

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N4HFTCWNFTOJJ34VSBHY5NKK5ZQAEDB7/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Strahil Nikolov
Have you tried with the "Force remove" tick?
Best Regards,
Strahil Nikolov

On Thursday, June 6, 2019 at 21:47:20 GMT+3, Adrian Quintero wrote:
 
I tried removing the bad host but ran into the following issue, any idea?

Operation Canceled
Error while executing action: 

host1.mydomain.com   
   - Cannot remove Host. Server having Gluster volume.



On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero  
wrote:

Leo, I forgot to mention that I have 1 SSD disk for caching purposes, wondering 
how that setup should be achieved?
thanks,
Adrian

On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero  
wrote:

Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
Will test tomorrow and post the results.
Thanks again
Adrian
On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:

Hi Adrian,
I think the steps are:
- reinstall the host
- join it to virtualisation cluster
And if it was member of the gluster cluster as well:
- go to host - storage devices
- create the bricks on the devices - as they are on the other hosts
- go to storage - volumes
- replace each failed brick with the corresponding new one.
Hope it helps.
Cheers,
Leo

On Wed, Jun 5, 2019, 23:09  wrote:

Anybody have had to replace a failed host from a 3, 6, or 9 node hyperconverged 
setup with gluster storage?

One of my hosts is completely dead, I need to do a fresh install using ovirt 
node iso, can anybody point me to the proper steps?

thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/


-- 
Adrian Quintero



-- 
Adrian Quintero



-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PB2YWWPO2TRJ6EYXAETPUV2DSVQLXDRR/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6EDIM2TLIFPEKANZ2QIUTXGSIWKYC2ET/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Adrian Quintero
I tried removing the bad host but running into the following issue , any
idea?
Operation Canceled
Error while executing action:

host1.mydomain.com

   - Cannot remove Host. Server having Gluster volume.




On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero 
wrote:

> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
> wondering how that setup should be achieved?
>
> thanks,
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero 
> wrote:
>
>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>
>> Will test tomorrow and post the results.
>>
>> Thanks again
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>
>>> Hi Adrian,
>>> I think the steps are:
>>> - reinstall the host
>>> - join it to virtualisation cluster
>>> And if was member of gluster cluster as well:
>>> - go to host - storage devices
>>> - create the bricks on the devices - as they are on the other hosts
>>> - go to storage - volumes
>>> - replace each failed brick with the corresponding new one.
>>> Hope it helps.
>>> Cheers,
>>> Leo
>>>
>>>
>>> On Wed, Jun 5, 2019, 23:09  wrote:
>>>
 Anybody have had to replace a failed host from a 3, 6, or 9 node
 hyperconverged setup with gluster storage?

 One of my hosts is completely dead, I need to do a fresh install using
 ovirt node iso, can anybody point me to the proper steps?

 thanks,
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/

>>> --
>> Adrian Quintero
>>
>
>
> --
> Adrian Quintero
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PB2YWWPO2TRJ6EYXAETPUV2DSVQLXDRR/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Adrian Quintero
Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
wondering how that setup should be achieved?

thanks,

Adrian

On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero 
wrote:

> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>
> Will test tomorrow and post the results.
>
> Thanks again
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>
>> Hi Adrian,
>> I think the steps are:
>> - reinstall the host
>> - join it to virtualisation cluster
>> And if was member of gluster cluster as well:
>> - go to host - storage devices
>> - create the bricks on the devices - as they are on the other hosts
>> - go to storage - volumes
>> - replace each failed brick with the corresponding new one.
>> Hope it helps.
>> Cheers,
>> Leo
>>
>>
>> On Wed, Jun 5, 2019, 23:09  wrote:
>>
>>> Anybody have had to replace a failed host from a 3, 6, or 9 node
>>> hyperconverged setup with gluster storage?
>>>
>>> One of my hosts is completely dead, I need to do a fresh install using
>>> ovirt node iso, can anybody point me to the proper steps?
>>>
>>> thanks,
>> --
> Adrian Quintero
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/45ERP2ZTABEPRBV7P2XANRZIEBBCFGX3/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-05 Thread Adrian Quintero
Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.

Will test tomorrow and post the results.

Thanks again

Adrian

On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:

> Hi Adrian,
> I think the steps are:
> - reinstall the host
> - join it to the virtualisation cluster
> And if it was a member of the gluster cluster as well:
> - go to host - storage devices
> - create the bricks on the devices - as they are on the other hosts
> - go to storage - volumes
> - replace each failed brick with the corresponding new one.
> Hope it helps.
> Cheers,
> Leo
>
>
> On Wed, Jun 5, 2019, 23:09  wrote:
>
>> Has anybody had to replace a failed host in a 3-, 6-, or 9-node
>> hyperconverged setup with gluster storage?
>>
>> One of my hosts is completely dead; I need to do a fresh install using the
>> oVirt Node ISO. Can anybody point me to the proper steps?
>>
>> thanks,
> --
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WQQUJTFSDLIAIO2OS7ZHAUQ6N6VXNETN/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-05 Thread Leo David
Hi Adrian,
I think the steps are:
- reinstall the host
- join it to the virtualisation cluster
And if it was a member of the gluster cluster as well:
- go to host - storage devices
- create the bricks on the devices - as they are on the other hosts
- go to storage - volumes
- replace each failed brick with the corresponding new one.
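The last step can also be done from the CLI on any healthy gluster node. The volume name, host names, and brick paths below are placeholders; substitute the real ones from `gluster volume info`:

```shell
# Placeholders: volume "engine", dead host "vmm10", replacement "vmm10new".
# The new brick (LV + filesystem + mount) must already exist on the new host,
# laid out the same way as on the other hosts.
gluster volume replace-brick engine \
    vmm10:/gluster_bricks/engine/engine \
    vmm10new:/gluster_bricks/engine/engine \
    commit force

# Then watch self-heal repopulate the new brick:
gluster volume heal engine info summary
```

Repeat the replace-brick for each volume that had a brick on the dead host.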
Hope it helps.
Cheers,
Leo


On Wed, Jun 5, 2019, 23:09  wrote:

> Has anybody had to replace a failed host in a 3-, 6-, or 9-node
> hyperconverged setup with gluster storage?
>
> One of my hosts is completely dead; I need to do a fresh install using the
> oVirt Node ISO. Can anybody point me to the proper steps?
>
> thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L2FT4DA5B6MTT5TXIT4N5MTH5VTG25F7/