[ovirt-users] Re: Mirror oVirt content

2020-04-21 Thread Adrian Quintero

[ovirt-users] Re: oVirt 4.3.7 + Gluster 6.6 unsynced entries

2020-04-19 Thread Adrian Quintero
-6a195082fbd6 glusterfs.gfid.string="cf395203-bb5a-4ca2-9641-78457b502ba8" Thanks, Adrian On Sat, Apr 18, 2020 at 2:51 PM Strahil Nikolov wrote: > On April 18, 2020 5:58:03 PM GMT+03:00, Adrian Quintero < > adrianquint...@gmail.com> wrote: > >ok I did that however I stil
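
For reference, the gfid string shown above is readable through a virtual extended attribute that GlusterFS exposes on a FUSE mount; a minimal sketch of the lookup, assuming the volume is mounted under /mnt/vmstore (paths are placeholders):

    # Query the gfid of a file through the client mount (virtual xattr)
    getfattr -n glusterfs.gfid.string /mnt/vmstore/ef503f3c-d57f-457d-a7a6-6a195082fbd6
    # List the entries still pending heal on the volume
    gluster volume heal <VOLNAME> info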

[ovirt-users] Re: oVirt 4.3.7 + Gluster 6.6 unsynced entries

2020-04-18 Thread Adrian Quintero
g '/' from absolute path names # file: mnt/vmstore/ef503f3c-d57f-457d-a7a6-6a195082fbd6/ trusted.glusterfs.pathinfo="( )" Thanks, On Sat, Apr 18, 2020 at 10:44 AM Adrian Quintero wrote: > ah ok.. > want me to do it on any of the hosts? > > On Sat, Apr 18, 2020 at 1
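
The trusted.glusterfs.pathinfo attribute quoted above maps a file on the client mount back to the brick(s) that store it; a hedged example of the query, with a placeholder mount path:

    # Run on a client with the volume mounted; prints the backing brick(s) for the file
    getfattr -n trusted.glusterfs.pathinfo /mnt/vmstore/ef503f3c-d57f-457d-a7a6-6a195082fbd6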

[ovirt-users] Re: oVirt 4.3.7 + Gluster 6.6 unsynced entries

2020-04-18 Thread Adrian Quintero
ah ok.. want me to do it on any of the hosts? On Sat, Apr 18, 2020 at 10:34 AM Strahil Nikolov wrote: > On April 18, 2020 4:46:35 PM GMT+03:00, Adrian Quintero < > adrianquint...@gmail.com> wrote: > >Hi Strahil, > >Here are my findings > > > > > >1.-

[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread Adrian Quintero
Hi Strahil, no, just the .meta files and that solved everything. On Sun, Feb 16, 2020, 8:36 PM Strahil Nikolov wrote: > On February 16, 2020 9:16:14 PM GMT+02:00, adrianquint...@gmail.com wrote: > >After a couple of hours all looking good and seems that the timestamps > >corrected themselves

[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-12 Thread Adrian Quintero

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-14 Thread Adrian Quintero
c/gluster-ansible-maintenance > I am not sure if it covers all scenarios, but you can try with same FQDN. > > On Fri, Jun 14, 2019 at 7:13 AM Adrian Quintero > wrote: > >> Strahil, >> Thanks for all the follow up, I will try to reproduce the same scenario >> today

[ovirt-users] Re: 4.3.4 caching disk error during hyperconverged deployment

2019-06-13 Thread Adrian Quintero
er_thinpool_gluster_vg_sdb", > "cachemetalvsize": "30G", "cachemode": "writethrough", "cachethinpoolname": > "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": > "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5} > > > > PLAY RECAP > **
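
The "Unable to reduce gluster_vg_sdb by /dev/sdb" failure typically means the device handed to the cache setup is already (or still) a physical volume of the data volume group, or carries stale signatures from a previous deployment. A rough sketch of the pre-checks only, with example device names; note that wipefs is destructive:

    # Confirm which physical volumes back the volume group before re-running the play
    pvs
    vgs gluster_vg_sdb
    # If the SSD intended for caching (e.g. /dev/sdc) holds stale metadata, clear it
    wipefs -a /dev/sdc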

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-13 Thread Adrian Quintero
firewall and repos) and a > full heal. > From there you can reinstall the machine from the UI and it should be > available for usage. > > P.S.: I know that the whole procedure is not so easy :) > > Best Regards, > Strahil Nikolov > On Jun 12, 2019 19:02, Adrian Quinter
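
The "full heal" referred to above is triggered and monitored from any node in the trusted pool; a minimal sketch with a placeholder volume name:

    # Trigger a full self-heal and watch the pending-entry count drop
    gluster volume heal <VOLNAME> full
    gluster volume heal <VOLNAME> info summary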

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-12 Thread Adrian Quintero
new-server:/new/path/to/brick commit force > > In both cases check the status via: > gluster volume info VOLNAME > > If your cluster is in production, I really recommend you the first option > as it is less risky and the chance for unplanned downtime will be minimal.
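
A sketch of the replace-brick path quoted above, using the placeholder server and brick paths from the advice itself:

    # Swap the failed brick for one on the new server, then verify volume state and heal progress
    gluster volume replace-brick <VOLNAME> old-server:/old/path/to/brick new-server:/new/path/to/brick commit force
    gluster volume info <VOLNAME>
    gluster volume heal <VOLNAME> info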

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-11 Thread Adrian Quintero
* Disconnected //server that went bad.* b152fd82-8213-451f-93c6-353e96aa3be9 vmm102.mydomain.com Connected //vmm10 but with different name 228a9282-c04e-4229-96a6-67cb47629892 localhost Connected On Tue, Jun 11, 2019 at 11:24 AM Adrian Quintero wrote: > Strahil, > > Looking at your suggestions I thi
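
The Connected/Disconnected states quoted above come from the Gluster peer listing; for reference, the check run on any healthy node looks like:

    # A failed or unreachable node shows up as Disconnected
    gluster peer status
    gluster pool list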

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-11 Thread Adrian Quintero
all issues. > On the new node check if glusterd is enabled and running. > > In order to debug - you should provide more info like 'gluster volume > info' and the peer status from each node. > > Best Regards, > Strahil Nikolov > > On Jun 10, 2019 20:10, Adrian Qu
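
A brief sketch of the checks suggested above, to be run on the newly added node and on the existing ones:

    # Make sure the gluster daemon starts at boot and is currently running
    systemctl enable --now glusterd
    systemctl status glusterd
    # Collect the state requested for debugging
    gluster volume info
    gluster peer status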

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
On Mon, Jun 10, 2019, 18:13 Adrian Quintero > wrote: > >> Ok I have tried reinstalling the server from scratch with a different >> name and IP address and when trying to add it to cluster I get the >> following error: >> >> Event details >> ID: 505

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
un 10, 2019 at 12:52 PM Strahil wrote: > Hi Adrian, > Did you fix the issue with the gluster and the missing brick? > If yes, try to set the 'old' host in maintenance and then forcefully > remove it from oVirt. > If it succeeds (and it should), then you can add the server back and then > c

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
system-uuid* b64d566e-055d-44d4-83a2-d3b83f25412e Any suggestions? Thanks On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero wrote: > Leo, > I did try putting it under maintenance and checking to ignore gluster and > it did not work. > Error while executing action: > -Cannot remo
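
For context, the system-uuid shown above is the hardware UUID that oVirt/VDSM uses to identify a host, and a duplicate value on a reinstalled or cloned machine can block re-adding it; it can be read, for example, with:

    # Hardware UUID as reported by the DMI/SMBIOS tables
    dmidecode -s system-uuid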

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-08 Thread Adrian Quintero
ile checking to ignore > gluster warning will let you remove it. > Maybe I am wrong about the procedure, can anybody give advice to help > with this situation? > Cheers, > > Leo > > > > > On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero > wrote: > >>

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Adrian Quintero
I tried removing the bad host but running into the following issue , any idea? Operation Canceled Error while executing action: host1.mydomain.com - Cannot remove Host. Server having Gluster volume. On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero wrote: > Leo, I forgot to mention tha
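
oVirt refuses to remove a host that still owns bricks in a Gluster volume, so the bricks normally have to be replaced or removed on the Gluster side first. A rough, hedged sketch only (volume name, replica count and brick paths are placeholders, and shrinking a replica set carries risk):

    # Either move the brick to a healthy server...
    gluster volume replace-brick <VOLNAME> bad-host:/gluster_bricks/brick new-host:/gluster_bricks/brick commit force
    # ...or drop it from the replica set, then detach the dead peer
    gluster volume remove-brick <VOLNAME> replica 2 bad-host:/gluster_bricks/brick force
    gluster peer detach bad-host force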

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Adrian Quintero
Leo, I forgot to mention that I have 1 SSD disk for caching purposes, wondering how that setup should be achieved? thanks, Adrian On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero wrote: > Hi Leo, yes, this helps a lot, this confirms the plan we had in mind. > > Will test tomorrow

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-05 Thread Adrian Quintero

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Adrian Quintero
o in prod as follows: > Let's say you are on 4.2.8 > Next step would be to go to 4.3.latest and then to 4.4.latest. > > A test cluster (even in VMs) is also beneficial. > > Despite the hiccups I have stumbled upon, I think that the project is > great. > > Best Regards, > S

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Adrian Quintero
l don't have the full hang of it and the RHV 4.1 course is way too old :) Thanks again for helping out with this. -AQ On Tue, May 21, 2019 at 3:29 AM Sachidananda URS wrote: > > > On Tue, May 21, 2019 at 12:16 PM Sahina Bose wrote: > >> >> >> On Mon, May 20, 2

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Adrian Quintero
, "rc": 5} failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluste
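
"Device /dev/sdd excluded by a filter" is LVM declining to use a disk that still carries an old partition table, filesystem or LVM signature (or that is claimed by multipath). A common but destructive remedy, sketched with example device names only:

    # Inspect what is currently on the disk
    lsblk /dev/sdd
    blkid /dev/sdd
    # Wipe stale signatures so the LVM filter accepts the disk (destroys existing data)
    wipefs -a /dev/sdd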

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Adrian Quintero
id/pvuuid > /dev/mapper/multipath-uuid > /dev/sdb > > Linux will not allow you to work with /dev/sdb, when multipath is locking > the block device. > > Best Regards, > Strahil Nikolov > > On Thursday, 25 April 2019 at 8:30:16 GMT-4, Adrian Quintero <
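
When multipathd claims a local disk, the underlying /dev/sdX node is locked and the deployment cannot create physical volumes on it, as described above. One common workaround, sketched with a placeholder WWID, is to blacklist the local disk in /etc/multipath.conf and flush the stale map:

    # Find the WWID of the affected disk
    /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
    # /etc/multipath.conf (excerpt): blacklist that WWID
    #   blacklist {
    #       wwid "36000c29f..."   # placeholder WWID
    #   }
    # Flush the stale map and restart the daemon
    multipath -f <map-name-or-wwid>
    systemctl restart multipathd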

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Adrian Quintero
what will be the > impact? > > thanks again.

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Adrian Quintero
2019 at 8:55:22 GMT-4, Adrian Quintero < > adrianquint...@gmail.com> wrote: > > > Strahil, > this is the issue I am seeing now > > [image: image.png] > > This is through the UI when I try to create a new brick. > > So my concern is if I modify the

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-24 Thread Adrian Quintero