[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-23 Thread Strahil Nikolov via Users
As far as I know, oVirt 4.4 uses Gluster v7.x, so you will eventually have
to upgrade the version.

As I mentioned, I created my new volume while I was running a higher
version and copied the data to it, which prevented the ACL bug from
hitting me again.

I can recommend you to:
1. Mount the new Gluster volume via FUSE
2. Wipe the data, as you don't need it
3. Attach the new Gluster volume as a fresh storage domain in oVirt
4. Live migrate the VM disks to the new volume prior to upgrading Gluster

I cannot guarantee that the issue will be avoided, but it's worth trying.
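Roughly, steps 1 and 2 could look like this on one of the hosts (the host, volume, and mount-point names are placeholders for your own setup):

```shell
# 1. Mount the new Gluster volume via FUSE (hypothetical names)
mkdir -p /mnt/newvol
mount -t glusterfs ovirt1:/newvol /mnt/newvol

# 2. Wipe the stale copy of the data -- it is no longer needed
rm -rf /mnt/newvol/*
umount /mnt/newvol

# 3. and 4. happen in the oVirt UI: attach newvol as a fresh GlusterFS
#    storage domain, then live migrate the VM disks onto it before
#    touching the Gluster packages.
```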

Best Regards,
Strahil Nikolov

On 23 June 2020 at 23:42:13 GMT+03:00, C Williams wrote:
>Strahil,
>
>Thanks for getting back with me !
>
>Sounds like it is best to evacuate VM disks to another storage domain --
>if possible from a Gluster storage domain -- prior to an upgrade.
>
>Per your questions ...
>
>1. Which version of oVirt and Gluster are you using ?
>oVirt 4.3.8.2-1
>Gluster 6.5, 6.7, 6.9 (depends on the cluster)
>
>2. You now have your old  gluster volume attached  to oVirt and the new
>volume unused, right ?
>Correct -- intending to dispose of data on the new volume since the old
>one is now working
>
>3. Did you copy the contents of the old volume to the new one ?
>I did prior to trying to downgrade the Gluster version on the hosts for
>the old volume. I am planning to delete the data now that the old
>volume is
>working.
>
>Thanks Again For Your Help !!
>
>On Mon, Jun 22, 2020 at 11:34 PM Strahil Nikolov
>
>wrote:
>
>> As I told you, you could just downgrade gluster on all nodes and later
>> plan to live migrate the VM disks.
>> I had to copy my data to the new volume, so I could avoid the ACL bug
>> when using newer versions of gluster.
>>
>>
>> Let's clarify some details:
>> 1. Which version of oVirt and Gluster are you using ?
>> 2. You now have your old  gluster volume attached  to oVirt and the
>new
>> volume unused, right ?
>> 3. Did you copy the contents of the old volume to the new one ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 23 June 2020 at 4:34:19 GMT+03:00, C Williams wrote:
>> >Strahil,
>> >
>> >Thank You For Help !
>> >
>> >Downgrading Gluster to 6.5 got the original storage domain working
>> >again !
>> >
>> >After I finished copying the contents of the problematic volume to a
>> >new volume, I did the following:
>> >
>> >Unmounted the mount points
>> >Stopped the original problematic Gluster volume
>> >On each problematic peer,
>> >I downgraded Gluster to 6.5
>> >(yum downgrade glusterfs-6.5-1.el7.x86_64
>> >vdsm-gluster-4.30.46-1.el7.x86_64
>> >python2-gluster-6.5-1.el7.x86_64 glusterfs-libs-6.5-1.el7.x86_64
>> >glusterfs-cli-6.5-1.el7.x86_64 glusterfs-fuse-6.5-1.el7.x86_64
>> >glusterfs-rdma-6.5-1.el7.x86_64 glusterfs-api-6.5-1.el7.x86_64
>> >glusterfs-server-6.5-1.el7.x86_64 glusterfs-events-6.5-1.el7.x86_64
>> >glusterfs-client-xlators-6.5-1.el7.x86_64
>> >glusterfs-geo-replication-6.5-1.el7.x86_64)
>> >Restarted glusterd (systemctl restart glusterd)
>> >Restarted the problematic Gluster  volume
>> >Reattached the problematic storage domain
>> >Started the problematic storage domain
>> >
>> >Things work now. I can now run VMs and write data, copy virtual disks,
>> >move virtual disks to other storage domains, etc.
>> >
>> >I am very thankful that the storage domain is working again !
>> >
>> >How can I safely perform upgrades on Gluster? When will it be safe to
>> >do so?
>> >
>> >Thank You Again For Your Help !
>> >
>> >On Mon, Jun 22, 2020 at 10:58 AM C Williams
>
>> >wrote:
>> >
>> >> Strahil,
>> >>
>> >> I have downgraded the target. The copy from the problematic volume
>> >> to the target is going on now.
>> >> Once I have the data copied, I might downgrade the problematic
>> >> volume's Gluster to 6.5.
>> >> At that point I might reattach the original oVirt domain and see if
>> >> it will work again.
>> >> But the copy is going on right now.
>> >>
>> >> Thank You For Your Help !
>> >>
>> >> On Mon, Jun 22, 2020 at 10:52 AM Strahil Nikolov
>> >
>> >> wrote:
>> >>
>> >>> You should ensure

[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread Strahil Nikolov via Users
As I told you, you could just downgrade gluster on all nodes and later
plan to live migrate the VM disks.
I had to copy my data to the new volume, so I could avoid the ACL bug when
using newer versions of gluster.
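If you do downgrade, pinning the packages afterwards keeps a routine `yum update` from pulling the broken version back in. A sketch, assuming the versionlock plugin is available on your EL7 hosts:

```shell
# install the versionlock plugin if it is not already present
yum install -y yum-plugin-versionlock

# pin every installed glusterfs package at its current (downgraded) version
yum versionlock add 'glusterfs*'

# later, when you are ready to upgrade deliberately, remove the pin:
# yum versionlock delete 'glusterfs*'
```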


Let's clarify some details:
1. Which version of oVirt and Gluster are you using ?
2. You now have your old gluster volume attached to oVirt and the new
volume unused, right?
3. Did you copy the contents of the old volume to the new one ?

Best Regards,
Strahil Nikolov

On 23 June 2020 at 4:34:19 GMT+03:00, C Williams wrote:
>Strahil,
>
>Thank You For Help !
>
>Downgrading Gluster to 6.5 got the original storage domain working
>again !
>
>After I finished copying the contents of the problematic volume to a
>new volume, I did the following:
>
>Unmounted the mount points
>Stopped the original problematic Gluster volume
>On each problematic peer,
>I downgraded Gluster to 6.5
>(yum downgrade glusterfs-6.5-1.el7.x86_64
>vdsm-gluster-4.30.46-1.el7.x86_64
>python2-gluster-6.5-1.el7.x86_64 glusterfs-libs-6.5-1.el7.x86_64
>glusterfs-cli-6.5-1.el7.x86_64 glusterfs-fuse-6.5-1.el7.x86_64
>glusterfs-rdma-6.5-1.el7.x86_64 glusterfs-api-6.5-1.el7.x86_64
>glusterfs-server-6.5-1.el7.x86_64 glusterfs-events-6.5-1.el7.x86_64
>glusterfs-client-xlators-6.5-1.el7.x86_64
>glusterfs-geo-replication-6.5-1.el7.x86_64)
>Restarted glusterd (systemctl restart glusterd)
>Restarted the problematic Gluster  volume
>Reattached the problematic storage domain
>Started the problematic storage domain
>
>Things work now. I can now run VMs and write data, copy virtual disks,
>move virtual disks to other storage domains, etc.
>
>I am very thankful that the storage domain is working again !
>
>How can I safely perform upgrades on Gluster? When will it be safe to do
>so?
>
>Thank You Again For Your Help !
>
>On Mon, Jun 22, 2020 at 10:58 AM C Williams 
>wrote:
>
>> Strahil,
>>
>> I have downgraded the target. The copy from the problematic volume to
>> the target is going on now.
>> Once I have the data copied, I might downgrade the problematic
>> volume's Gluster to 6.5.
>> At that point I might reattach the original oVirt domain and see if
>> it will work again.
>> But the copy is going on right now.
>>
>> Thank You For Your Help !
>>
>> On Mon, Jun 22, 2020 at 10:52 AM Strahil Nikolov
>
>> wrote:
>>
>>> You should ensure that in the storage domain tab, the old storage is
>>> not visible.
>>>
>>> I still wonder why you didn't try to downgrade first.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On 22 June 2020 at 13:58:33 GMT+03:00, C Williams wrote:
>>> >Strahil,
>>> >
>>> >The GLCL3 storage domain was detached prior to attempting to add the
>>> >new storage domain.
>>> >
>>> >Should I also "Remove" it ?
>>> >
>>> >Thank You For Your Help !
>>> >
>>> >-- Forwarded message -
>>> >From: Strahil Nikolov 
>>> >Date: Mon, Jun 22, 2020 at 12:50 AM
>>> >Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
>>> >To: C Williams 
>>> >Cc: users 
>>> >
>>> >
>>> >You can't add the new volume as it contains the same data (UUID) as
>>> >the old one, thus you need to detach the old one before adding the
>>> >new one - of course this means downtime for all VMs on that storage.
>>> >
>>> >As you see, downgrading is simpler. For me v6.5 was working, while
>>> >anything above (6.6+) was causing complete lockdown. Also v7.0 was
>>> >working, but it's only supported in oVirt 4.4.
>>> >
>>> >Best Regards,
>>> >Strahil Nikolov
>>> >
>>> >On 22 June 2020 at 7:21:15 GMT+03:00, C Williams wrote:
>>> >>Another question
>>> >>
>>> >>What version could I downgrade to safely ? I am at 6.9 .
>>> >>
>>> >>Thank You For Your Help !!
>>> >>
>>> >>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
>>> >>
>>> >>wrote:
>>> >>
>>> >>> You are definitely reading it wrong.
>>> >>> 1. I didn't create a new storage domain on top of this new volume.
>>> >>> 2. I used the CLI
>>> >>>
>>> >>&

[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread C Williams
Strahil,

I have downgraded the target. The copy from the problematic volume to the
target is going on now.
Once I have the data copied, I might downgrade the problematic volume's
Gluster to 6.5.
At that point I might reattach the original ovirt domain and see if it will
work again.
But the copy is going on right now.

Thank You For Your Help !

On Mon, Jun 22, 2020 at 10:52 AM Strahil Nikolov 
wrote:

> You should ensure that in the storage domain tab, the old storage is not
> visible.
>
> I still wonder why you didn't try to downgrade first.
>
> Best Regards,
> Strahil Nikolov
>
> On 22 June 2020 at 13:58:33 GMT+03:00, C Williams wrote:
> >Strahil,
> >
> >The GLCL3 storage domain was detached prior to attempting to add the
> >new storage domain.
> >
> >Should I also "Remove" it ?
> >
> >Thank You For Your Help !
> >
> >-- Forwarded message ---------
> >From: Strahil Nikolov 
> >Date: Mon, Jun 22, 2020 at 12:50 AM
> >Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
> >To: C Williams 
> >Cc: users 
> >
> >
> >You can't add the new volume as it contains the same data (UUID) as the
> >old one, thus you need to detach the old one before adding the new one -
> >of course this means downtime for all VMs on that storage.
> >
> >As you see, downgrading is simpler. For me v6.5 was working, while
> >anything above (6.6+) was causing complete lockdown. Also v7.0 was
> >working, but it's only supported in oVirt 4.4.
> >
> >Best Regards,
> >Strahil Nikolov
> >
> >On 22 June 2020 at 7:21:15 GMT+03:00, C Williams wrote:
> >>Another question
> >>
> >>What version could I downgrade to safely ? I am at 6.9 .
> >>
> >>Thank You For Your Help !!
> >>
> >>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
> >>
> >>wrote:
> >>
> >>> You are definitely reading it wrong.
> >>> 1. I didn't create a new storage domain on top of this new volume.
> >>> 2. I used the CLI
> >>>
> >>> Something like this  (in your case it should be 'replica 3'):
> >>> gluster volume create newvol replica 3 arbiter 1
> >>> ovirt1:/new/brick/path
> >>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
> >>> gluster volume start newvol
> >>>
> >>> #Detach oldvol from ovirt
> >>>
> >>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
> >>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
> >>> cp -a /mnt/oldvol/* /mnt/newvol
> >>>
> >>> #Add only newvol as a storage domain in oVirt
> >>> #Import VMs
> >>>
> >>> I still think that you should downgrade your gluster packages!!!
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams wrote:
> >>> >Strahil,
> >>> >
> >>> >It sounds like  you used a "System Managed Volume" for the new
> >>storage
> >>> >domain, is that correct?
> >>> >
> >>> >Thank You For Your Help !
> >>> >
> >>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams
> >
> >>> >wrote:
> >>> >
> >>> >> Strahil,
> >>> >>
> >>> >> So you made another oVirt Storage Domain -- then copied the data
> >>with
> >>> >cp
> >>> >> -a from the failed volume to the new volume.
> >>> >>
> >>> >> At the root of the volume there will be the old domain folder id
> >>ex
> >>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
> >>> >>  in my case. Did that cause issues with making the new domain
> >>since
> >>> >it is
> >>> >> the same folder id as the old one ?
> >>> >>
> >>> >> Thank You For Your Help !
> >>> >>
> >>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
> >>> >
> >>> >> wrote:
> >>> >>
> >>> >>> In my situation I had  only the ovirt nodes.
> >>> >>>
> >>> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams wrote:
> >>> >>> >Strahil,
> >>> >>> 

[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread Strahil Nikolov via Users
You should ensure that in the storage domain tab, the old storage is not
visible.

I still wonder why you didn't try to downgrade first.

Best Regards,
Strahil Nikolov

On 22 June 2020 at 13:58:33 GMT+03:00, C Williams wrote:
>Strahil,
>
>The GLCL3 storage domain was detached prior to attempting to add the
>new storage domain.
>
>Should I also "Remove" it ?
>
>Thank You For Your Help !
>
>-- Forwarded message -
>From: Strahil Nikolov 
>Date: Mon, Jun 22, 2020 at 12:50 AM
>Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
>To: C Williams 
>Cc: users 
>
>
>You can't add the new volume as it contains the same data (UUID) as the
>old one, thus you need to detach the old one before adding the new one -
>of course this means downtime for all VMs on that storage.
>
>As you see, downgrading is simpler. For me v6.5 was working, while
>anything above (6.6+) was causing complete lockdown. Also v7.0 was
>working, but it's only supported in oVirt 4.4.
>
>Best Regards,
>Strahil Nikolov
>
>On 22 June 2020 at 7:21:15 GMT+03:00, C Williams wrote:
>>Another question
>>
>>What version could I downgrade to safely ? I am at 6.9 .
>>
>>Thank You For Your Help !!
>>
>>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
>>
>>wrote:
>>
>>> You are definitely reading it wrong.
>>> 1. I didn't create a new storage domain on top of this new volume.
>>> 2. I used the CLI
>>>
>>> Something like this  (in your case it should be 'replica 3'):
>>> gluster volume create newvol replica 3 arbiter 1
>>> ovirt1:/new/brick/path
>>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
>>> gluster volume start newvol
>>>
>>> #Detach oldvol from ovirt
>>>
>>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
>>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
>>> cp -a /mnt/oldvol/* /mnt/newvol
>>>
>>> #Add only newvol as a storage domain in oVirt
>>> #Import VMs
>>>
>>> I still think that you should downgrade your gluster packages!!!
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams wrote:
>>> >Strahil,
>>> >
>>> >It sounds like  you used a "System Managed Volume" for the new
>>storage
>>> >domain, is that correct?
>>> >
>>> >Thank You For Your Help !
>>> >
>>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams
>
>>> >wrote:
>>> >
>>> >> Strahil,
>>> >>
>>> >> So you made another oVirt Storage Domain -- then copied the data
>>with
>>> >cp
>>> >> -a from the failed volume to the new volume.
>>> >>
>>> >> At the root of the volume there will be the old domain folder id
>>ex
>>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>>> >>  in my case. Did that cause issues with making the new domain
>>since
>>> >it is
>>> >> the same folder id as the old one ?
>>> >>
>>> >> Thank You For Your Help !
>>> >>
>>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
>>> >
>>> >> wrote:
>>> >>
>>> >>> In my situation I had  only the ovirt nodes.
>>> >>>
>>> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams wrote:
>>> >>> >Strahil,
>>> >>> >
>>> >>> >So should I make the target volume on 3 bricks which do not
>have
>>> >ovirt
>>> >>> >--
>>> >>> >just gluster ? In other words (3) Centos 7 hosts ?
>>> >>> >
>>> >>> >Thank You For Your Help !
>>> >>> >
>>> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
>>> >
>>> >>> >wrote:
>>> >>> >
>>> >>> >> I created a fresh volume (which is not an oVirt storage domain),
>>> >>> >> set the original storage domain in maintenance and detached it.
>>> >>> >> Then I 'cp -a' the data from the old to the new volume. Next, I
>>> >>> >> just added the new stor