[ovirt-users] Re: Disk (brick) failure on my stack

2021-06-21 Thread Prajith Kesava Prasad
Hi @Dominique,

Once you have added the SSDs to your host, you can follow this link[1] to
do a host replacement.

Replacing a host essentially means:
1) Preparing the new node based on the current healthy nodes in your cluster.
2) Adding the node back into the cluster; Gluster then syncs it with the
other nodes, and your cluster is back to healthy.

[1]
https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L57
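
For reference, the replace-host flow from [1] looks roughly like the sketch
below. The inventory and task file names here are assumptions, so check the
README in [1] for the exact names in your checkout of gluster-ansible.

```shell
# Rough sketch of the replace-host flow; file names below are assumptions
# based on the gluster-ansible hc-ansible-deployment layout, so verify
# them against the README linked above.

# 1) Get the playbooks onto a healthy node.
git clone https://github.com/gluster/gluster-ansible.git
cd gluster-ansible/playbooks/hc-ansible-deployment

# 2) Describe the failed host (same FQDN if you reinstall it in place)
#    in the replace-node inventory.
vi replace_node_inventory.yml   # hypothetical inventory name

# 3) Run the replace-node tasks against that inventory.
ansible-playbook -i replace_node_inventory.yml tasks/replace_node.yml
```

Once the playbook completes, Gluster's self-heal daemon resynchronizes the
new bricks from the healthy replicas; 'gluster volume heal <volname> info'
shows the remaining backlog.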

Regards,
Prajith,


On Tue, Jun 22, 2021 at 10:56 AM Ritesh Chikatwar 
wrote:

> Adding Prajith,
>
>
> Will replace host work in this case? If yes, please share your thoughts
> here.
>
> On Tue, Jun 22, 2021 at 9:09 AM Strahil Nikolov via Users 
> wrote:
>
>> I'm not sure about the GUI (though I think it has the option), but on the
>> command line you have several options:
>>
>> 1. Use gluster's 'remove-brick replica 2' (with the force flag)
>> and then 'add-brick replica 3'
>> 2. Use the old way, 'replace-brick'
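>>
>> As a minimal sketch of those two options, using the failed brick from the
>> volume status further down in this thread (the "data_new" path is a
>> placeholder; adapt names to your own layout):

```shell
# Option 1: drop the dead brick by reducing the replica count, then re-add it.
gluster volume remove-brick data replica 2 \
    172.16.70.92:/gluster_bricks/data/data force
gluster volume add-brick data replica 3 \
    172.16.70.92:/gluster_bricks/data/data

# Option 2: the older one-shot replacement; the new brick path here is a
# placeholder (replace-brick generally wants a fresh brick directory).
gluster volume replace-brick data \
    172.16.70.92:/gluster_bricks/data/data \
    172.16.70.92:/gluster_bricks/data/data_new commit force

# Either way, trigger the heal and watch it drain afterwards.
gluster volume heal data full
gluster volume heal data info
```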
>>
>> If you need guidance, please provide the output of 'gluster volume info'.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tue, Jun 22, 2021 at 2:01, Dominique D
>>  wrote:
>> Yesterday I had a disk failure on my stack of 3 oVirt 4.4.1 nodes.
>>
>> on each server I have 3 Bricks (engine, data, vmstore)
>>
>> brick data: 4x600 GB RAID0, /dev/gluster_vg_sdb/gluster_lv_data mounted at
>> /gluster_bricks/data
>> brick engine: 2x1 TB RAID1, /dev/gluster_vg_sdc/gluster_lv_engine mounted at
>> /gluster_bricks/engine
>> brick vmstore: 2x1 TB RAID1, /dev/gluster_vg_sdc/gluster_lv_vmstore mounted
>> at /gluster_bricks/vmstore
>>
>> Everything was configured through the GUI (hyperconverged setup and
>> hosted engine).
>>
>> It is the RAID0 on the 2nd server that broke.
>>
>> All VMs were automatically moved to the other two servers; I haven't lost
>> any data.
>>
>> Host2 is now in maintenance mode.
>>
>> I am going to buy 4 new SSDs to replace the 4 disks of the defective
>> RAID0.
>>
>> When I erase the faulty RAID0 and create the new array with the new disks
>> on the RAID controller, how do I add them back in oVirt so that they
>> resynchronize with the other data bricks?
>>
>> Status of volume: data
>> Gluster process                              TCP Port  RDMA Port  Online  Pid
>> -----------------------------------------------------------------------------
>> Brick 172.16.70.91:/gluster_bricks/data/data 49153     0          Y       79168
>> Brick 172.16.70.92:/gluster_bricks/data/data N/A       N/A        N       N/A
>> Brick 172.16.70.93:/gluster_bricks/data/data 49152     0          Y       3095
>> Self-heal Daemon on localhost                N/A       N/A        Y       2528
>> Self-heal Daemon on 172.16.70.91             N/A       N/A        Y       225523
>> Self-heal Daemon on 172.16.70.93             N/A       N/A        Y       3121
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HAC6UXQYLZ2DC5LLLYOXUTOD7HAZ5RYG/


[ovirt-users] Re: How to replace a failed oVirt Hyperconverged Host

2021-03-10 Thread Prajith Kesava Prasad
Hi Ramon,

We have an Ansible playbook[2] for replacing a failed host in a
Gluster-enabled cluster; check out the procedure in [1] and see if it works
for you.

[1]
https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L57
[2]https://github.com/gluster/gluster-ansible
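
For the LVM/brick part of the question, a rough sketch of what the
recreation looks like by hand is below; the device name, sizes, volume name
and brick paths are assumptions for this example, and the playbook in [1]
automates these steps.

```shell
# Recreate the LVM stack and XFS brick on the repaired host.
# /dev/sdb, the sizes, and the "data" volume name are placeholders.
pvcreate /dev/sdb
vgcreate gluster_vg_sdb /dev/sdb
lvcreate -L 500G -T gluster_vg_sdb/gluster_thinpool
lvcreate -V 500G -T gluster_vg_sdb/gluster_thinpool -n gluster_lv_data
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_data
mkdir -p /gluster_bricks/data
mount /dev/gluster_vg_sdb/gluster_lv_data /gluster_bricks/data
mkdir /gluster_bricks/data/data

# Re-seat the rebuilt brick at its old path and let self-heal copy data back.
gluster volume reset-brick data <host>:/gluster_bricks/data/data \
    <host>:/gluster_bricks/data/data commit force
gluster volume heal data info
```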


Regards,
Prajith Kesava Prasad

On Wed, Mar 10, 2021 at 11:20 PM Ramon Sierra  wrote:

> Hi,
>
> We have a three-host hyperconverged oVirt setup. A few weeks ago one of
> the hosts failed and we lost a RAID5 array on it. We removed the host from
> the cluster and repaired it. We are now trying to set it up and add it back
> to the cluster, but we are not clear on how to proceed. The cluster runs a
> replica 2 + arbiter 1 Gluster setup, and I have no idea how to recreate
> the LVM partitions and gluster bricks, and then add the host back to the
> cluster in order to start the healing process.
>
> Any help on how to proceed with this scenario will be very welcome.
>
> Ramon
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGBXL4U4M3T5TQHLQBVZUQCLWIBCADAY/


[ovirt-users] Re: Using Ansible to automate failed Drive replacement

2020-09-07 Thread Prajith Kesava Prasad
Apologies for the typo in my previous email :-) ; don't mind the second
"and then".

[EDITED] I'm unable to view the original thread, so apologies if the content
of my message has already been covered. Judging by the title of the email, I
would say Sandro is right: if one of the nodes in your Gluster-enabled
cluster has a failed disk, you could replace the disk with a new one (thus
losing all your old data, effectively making the host new) and run the
"same-node FQDN replace host" procedure. Follow the playbook instructions
here to set up the replace host:
<https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L52>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5PWWCVXJZ7GD7YE34QGF5NIUTTI5XJTP/


[ovirt-users] Re: Using Ansible to automate failed Drive replacement

2020-09-07 Thread Prajith Kesava Prasad
I'm unable to view the original thread, so apologies if the content of my
message has already been covered. Judging by the title of the email, I would
say Sandro is right: if one of the nodes in your Gluster-enabled cluster has
a failed disk, you could replace the disk with a new one (thus losing all
your old data, effectively making the host new) and run the "same-node FQDN
replace host" procedure, then follow the playbook instructions here to set
up the replace host:
<https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L52>

Kind Regards,
Prajith Kesava Prasad.

On Mon, Sep 7, 2020 at 3:45 PM Sandro Bonazzola  wrote:

> I think replacing a failed disk won't be much different from replacing a
> failing host; there's a session today at the conference *Replacing
> gluster host in oVirt-Engine <https://youtu.be/dFWW5fEupYQ>* – Prajith
> Kesava Prasad <https://twitter.com/PrajithKPrasad>
> +Prajith Kesava Prasad  or +Gobinda Das
>  can probably elaborate on this.
>
>
> Il giorno lun 7 set 2020 alle ore 12:06  ha
> scritto:
>
>> that's correct Sandro!!!
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PQUWXWLMLXFIRM6BTBXQJE6BDA4RZLQ3/


[ovirt-users] Re: Hyperconverged set up with Gluster and 6 Nodes

2020-09-07 Thread Prajith Kesava Prasad
Edit: adding more users, so that they can correct me if I'm wrong or provide
more input.

Hi Holger,

Yes (someone else correct me if I'm wrong), it is possible via the
gluster-ansible deployment. If I'm not mistaken, you initially need to
deploy the usual 3-node setup and then add further hosts in sets of 3 (I
believe a maximum of 12 nodes is supported), then either create a new
gluster volume on the new nodes or expand the existing volumes to
accommodate the new disks on the new hosts.
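
At the Gluster level, the expansion step described above would look roughly
like this; the volume, host and brick names are placeholders for the sketch,
and in a replica-3 volume you add bricks in multiples of three:

```shell
# Expand an existing replica-3 volume "data" onto three new hosts (names
# are placeholders), turning it into a distributed-replicate volume, then
# rebalance so existing data spreads onto the new bricks.
gluster volume add-brick data replica 3 \
    host4:/gluster_bricks/data/data \
    host5:/gluster_bricks/data/data \
    host6:/gluster_bricks/data/data
gluster volume rebalance data start
gluster volume rebalance data status
```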

I believe this doc
<https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/pdf/automating_rhhi_for_virtualization_deployment/Red_Hat_Hyperconverged_Infrastructure_for_Virtualization-1.6-Automating_RHHI_for_Virtualization_deployment-en-US.pdf>
should give you a little insight; it's a bit old, but it gives you the idea.

Hope this helps!


Regards,
Prajith Kesava Prasad.

On Mon, Sep 7, 2020 at 11:19 PM Holger Petrick 
wrote:

> Dear All,
>
> All the documentation for setting up oVirt as a hyperconverged system with
> Gluster says that it must have 3 enterprise nodes.
> In our test environment we successfully set up, with the oVirt wizard, an
> oVirt cluster with Gluster + arbiter and a hosted engine.
>
> Is it also possible to build a cluster with 6 nodes? The wizard only gives
> the option to add 3 nodes. Is this kind of design supported?
>
> My idea is to first set up Gluster, then install the oVirt nodes and
> deploy the engine.
> Would this work? Is any documentation available?
>
> Thanks
> Holger
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EWERRRESJOSI7UFHIZN2STOBTKOSTVFO/