[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
>> >>> >> >> >>> I wish I had a downgrade path.
>> >>> >> >> >>>
>> >>> >> >> >>> Thank You For The Help !!
>> >>> >> >> >>>
>> >>> >> >> >>> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov wrote:
>> >>> >> >> >>>
>> >>> >> >> >>>> Hi ,
>> >>> >> >> >>>>
>> >>> >> >> >>>>
>> >>> >> >> >>>> This one really looks like the ACL bug I was hit with when I updated
>> >>> >> >> >>>> from Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
>> >>> >> >> >>>>
>> >>> >> >> >>>> Did you update your setup recently ? Did you upgrade gluster also ?
>> >>> >> >> >>>>
>> >>> >> >> >>>> You have to check the gluster logs in order to verify that, so you can
>> >>> >> >> >>>> try:
>> >>> >> >> >>>>
>> >>> >> >> >>>> 1. Set Gluster logs to trace level (for details check:
>> >>> >> >> >>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
>> >>> >> >> >>>> )
>> >>> >> >> >>>> 2. Power up a VM that was already off , or retry the procedure from the
>> >>> >> >> >>>> logs you sent.
>> >>> >> >> >>>> 3. Stop the trace level of the logs
>> >>> >> >> >>>> 4. Check libvirt logs on the host that was supposed to power up the VM
>> >>> >> >> >>>> (in case a VM was powered on)
>> >>> >> >> >>>> 5. Check the gluster brick logs on all nodes for ACL errors.
>> >>> >> >> >>>> Here is a sample from my old logs:
>> >>> >> >> >>>>
>> >>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:19:41.489047] I
>> >>> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >>> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> >>> >> >> >>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >>> >> >> >>>> [Permission denied]
>> >>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:22:51.818796] I
>> >>> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >>> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
> >>> >> >> >>>>
> >>> >> >> >>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
> >>> >> >> >>>> )
> >>> >> >> >>>> 2. Power up a VM that was already off , or retry the
> >>> >procedure
> >>> >> >from
> >>> >> >> >the
> >>> >> >> >>>> logs you sent.
> >>> >> >> >>>> 3. Stop the trace level of the logs
> >>> >> >> >>>> 4. Check libvirt logs on the host that was supposed to
> >power
> >>> >up
> >>> >> >the
> >>> >> >> >VM
> >>> >> >> >>>> (in case a VM was powered on)
> >>> >> >> >>>> 5. Check the gluster brick logs on all nodes for ACL
> >errors.
> >>> >> >> >>>> Here is a sample from my old logs:
> >>> >> >> >>>>
> >>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >>> >> >> >13:19:41.489047] I
> >>> >> >> >>>> [MSGID: 139001]
> >[posix-acl.c:262:posix_acl_log_permit_denied]
> >>> >> >> >>>> 0-data_fast4-access-control: client:
> >CTX_ID:4a654305-d2e4-
> >>> >> >> >>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
> >acl:-)
> >>> >> >> >>>> [Permission denied]
> >>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >>> >> >> >13:22:51.818796] I
> >>> >> >> >>>> [MSGID: 139001]
> >[posix-acl.c:262:posix_acl_log_permit_denied]
> >>> >> >> >>>> 0-data_fast4-access-control: client:
> >CTX_ID:4a654305-d2e4-
> >>> >> >> >>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
> >acl:-)
> >>> >> >> >>>> [Permission denied]
> >>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >>> >> >> >13:24:43.732856] I
> >>> >> >> >>>> [MSGID: 139001]
> >[posix-acl.c:262:posix_acl_log_permit_denied]
> >>> >> >> >>>> 0-data_fast4-access-control: client:
> >CTX_ID:4a654305-d2e4-
> >>> >> >> >>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
> >acl:-)
> >>> >> >> >>>> [Permission denied]
> >>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >>> >> >> >13:26:50.758178] I
> >>> >> >> >>>> [MSGID: 139001

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Here is what I did to make my volume

 gluster volume create imgnew2a replica 3 transport tcp \
   ov12.strg.srcle.com:/bricks/brick10/imgnew2a \
   ov13.strg.srcle.com:/bricks/brick11/imgnew2a \
   ov14.strg.srcle.com:/bricks/brick12/imgnew2a

On a host with the old volume I did this (the numbers are from my shell history)

 mount -t glusterfs yy.yy.24.18:/images3/ /mnt/test/   -- Old defective volume
  974  ls /mnt/test
  975  mount
  976  mkdir /mnt/test1trg
  977  mount -t glusterfs yy.yy.24.24:/imgnew2a /mnt/test1trg   -- New volume
  978  mount
  978  mount
  979  ls /mnt/test
  980  cp -a /mnt/test/* /mnt/test1trg/

When I tried to add the storage domain -- I got the errors described
previously about needing to clean out the old domain.
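
I suspect the complaint is because cp -a also copied the old domain's UUID
directory (with its dom_md metadata) onto the new volume, so oVirt sees an
already-initialized storage domain there. Something like this should show it
(assuming the standard domain layout; the actual UUID directory name will differ):

 ls /mnt/test1trg/
 cat /mnt/test1trg/*/dom_md/metadata | head    -- metadata copied over from the old domain, if present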

Thank You For Your Help !

On Mon, Jun 22, 2020 at 12:01 AM C Williams  wrote:

> Thanks Strahil
>
> I made a new gluster volume using only gluster CLI. Mounted the old volume
> and the new volume. Copied my data from the old volume to the new domain.
> Set the volume options like the old domain via the CLI. Tried to make a new
> storage domain using the paths to the new servers. However, oVirt
> complained that there was already a domain there and that I needed to clean
> it first. .
>
> What to do ?
>
> Thank You For Your Help !
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KR4ARQ5IN6LEZ2HCAPBEH5G6GA3LPRJ2/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Thanks Strahil

I made a new gluster volume using only gluster CLI. Mounted the old volume
and the new volume. Copied my data from the old volume to the new domain.
Set the volume options like the old domain via the CLI. Tried to make a new
storage domain using the paths to the new servers. However, oVirt
complained that there was already a domain there and that I needed to clean
it first.

What to do ?

Thank You For Your Help !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQJYO6CK4WQHQLM2FJ435MXH3H2BI6JL/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
>>> >> >> >>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
>acl:-)
>>> >> >> >>>> [Permission denied]
>>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>>> >> >> >13:22:51.818796] I
>>> >> >> >>>> [MSGID: 139001]
>[posix-acl.c:262:posix_acl_log_permit_denied]
>>> >> >> >>>> 0-data_fast4-access-control: client:
>CTX_ID:4a654305-d2e4-
>>> >> >> >>>>
>>> >> >>
>>> >> >>
>>> >>
>>> >>
>>>
>>>
>>>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
>acl:-)
>>> >> >> >>>> [Permission denied]
>>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>>> >> >> >13:24:43.732856] I
>>> >> >> >>>> [MSGID: 139001]
>[posix-acl.c:262:posix_acl_log_permit_denied]
>>> >> >> >>>> 0-data_fast4-access-control: client:
>CTX_ID:4a654305-d2e4-
>>> >> >> >>>>
>>> >> >>
>>> >> >>
>>> >>
>>> >>
>>>
>>>
>>>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
>acl:-)
>>> >> >> >>>> [Permission denied]
>>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>>> >> >> >13:26:50.758178] I
>>> >> >> >>>> [MSGID: 139001]
>[posix-acl.c:262:posix_acl_log_permit_denied]
>>> >> >> >>>> 0-data_fast4-access-control: client:
>CTX_ID:4a654305-d2e4-
>>> >> >> >>>>
>>> >> >>
>>> >> >>
>>> >>
>>> >>
>>>
>>>
>>>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
>acl:-)
>>> >> >> >>>> [Permission denied]
>>> >> >> >>>>
>>> >> >> >>>>
>>> >> >> >>>> In my case , the workaround was to downgrade the gluster
>>> >> >packages
>>> >> >> >on all
>>> >> >> >>>> nodes (and reboot each node 1 by 1 ) if the major version
>is
>>> >the
>>> >> >> >same, but
>>> >> >> >>>> if you upgraded to v7.X - then you can try the v7.0 .
>>> >> >> >>>>
>>> >> >> >>>> Best Regards,
>>> >> >> >>>> Strahil Nikolov
>>> >> >> >>>>
>>> >> >> >>>>
>>> >> >> >>>>
>>> >> >> >>>>
>>> >> >> >>>>
>>> >> >> >>>>
>>> >> >> >>>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C
>Williams <
>>> >> >> >>>> cwilliams3...@gmail.com> написа:
>>> >> >> >>>>
>&

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
gt; >>> on 6/17.
>> >> >> >>>
>> >> >> >>> I wish I had a downgrade path.
>> >> >> >>>
>> >> >> >>> Thank You For The Help !!
>> >> >> >>>
>> >> >> >>> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov
>> >> >> >
>> >> >> >>> wrote:
>> >> >> >>>
>> >> >> >>>> Hi ,
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> This one really looks like the ACL bug I was hit with when I
>> >> >> >updated
>> >> >> >>>> from Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
>> >> >> >>>>
>> >> >> >>>> Did you update your setup recently ? Did you upgrade gluster
>> >> >also ?
>> >> >> >>>>
>> >> >> >>>> You have to check the gluster logs in order to verify that,
>> >so
>> >> >you
>> >> >> >can
>> >> >> >>>> try:
>> >> >> >>>>
>> >> >> >>>> 1. Set Gluster logs to trace level (for details check:
>> >> >> >>>>
>> >> >> >
>> >> >>
>> >> >
>> >>
>> >
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
>> >> >> >>>> )
>> >> >> >>>> 2. Power up a VM that was already off , or retry the
>> >procedure
>> >> >from
>> >> >> >the
>> >> >> >>>> logs you sent.
>> >> >> >>>> 3. Stop the trace level of the logs
>> >> >> >>>> 4. Check libvirt logs on the host that was supposed to power
>> >up
>> >> >the
>> >> >> >VM
>> >> >> >>>> (in case a VM was powered on)
>> >> >> >>>> 5. Check the gluster brick logs on all nodes for ACL errors.
>> >> >> >>>> Here is a sample from my old logs:
>> >> >> >>>>
>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>> >> >> >13:19:41.489047] I
>> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> >> >> >>>>
>> >> >>
>> >> >>
>> >>
>> >>
>>
>> >>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >> >>>> [Permission denied]
>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>> >> >> >13:22:51.818796] I
>> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> >> >> >>>>
>> >> >>
>> >> >>
>> >>
>> >>
>>
>> >>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >> >>>> [Permission denied]
>> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>> >> >> >13:24:43.732856] I
>> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> >> >> >>>>
>> >> >>
>> >> >>

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
ou have to check the gluster logs in order to verify that,
> >so
> >> >you
> >> >> >can
> >> >> >>>> try:
> >> >> >>>>
> >> >> >>>> 1. Set Gluster logs to trace level (for details check:
> >> >> >>>>
> >> >> >
> >> >>
> >> >
> >>
> >
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
> >> >> >>>> )
> >> >> >>>> 2. Power up a VM that was already off , or retry the
> >procedure
> >> >from
> >> >> >the
> >> >> >>>> logs you sent.
> >> >> >>>> 3. Stop the trace level of the logs
> >> >> >>>> 4. Check libvirt logs on the host that was supposed to power
> >up
> >> >the
> >> >> >VM
> >> >> >>>> (in case a VM was powered on)
> >> >> >>>> 5. Check the gluster brick logs on all nodes for ACL errors.
> >> >> >>>> Here is a sample from my old logs:
> >> >> >>>>
> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >> >> >13:19:41.489047] I
> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >> >> >>>>
> >> >>
> >> >>
> >>
> >>
>
> >>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >> >>>> [Permission denied]
> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >> >> >13:22:51.818796] I
> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >> >> >>>>
> >> >>
> >> >>
> >>
> >>
>
> >>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >> >>>> [Permission denied]
> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >> >> >13:24:43.732856] I
> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >> >> >>>>
> >> >>
> >> >>
> >>
> >>
>
> >>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >> >>>> [Permission denied]
> >> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >> >> >13:26:50.758178] I
> >> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >> >> >>>>
> >> >>
> >> >>
> >>
> >>
>
> >>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >> >>>> [Permission den

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
: client: CTX_ID:4a654305-d2e4-
>> >> >>>>
>> >>
>> >>
>>
>>
>>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >>>> [Permission denied]
>> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>> >> >13:22:51.818796] I
>> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> >> >>>>
>> >>
>> >>
>>
>>
>>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >>>> [Permission denied]
>> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>> >> >13:24:43.732856] I
>> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> >> >>>>
>> >>
>> >>
>>
>>
>>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >>>> [Permission denied]
>> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>> >> >13:26:50.758178] I
>> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> >> >>>>
>> >>
>> >>
>>
>>
>>>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >>>> [Permission denied]
>> >> >>>>
>> >> >>>>
>> >> >>>> In my case , the workaround was to downgrade the gluster
>> >packages
>> >> >on all
>> >> >>>> nodes (and reboot each node 1 by 1 ) if the major version is
>the
>> >> >same, but
>> >> >>>> if you upgraded to v7.X - then you can try the v7.0 .
>> >> >>>>
>> >> >>>> Best Regards,
>> >> >>>> Strahil Nikolov
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C Williams <
>> >> >>>> cwilliams3...@gmail.com> написа:
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Hello,
>> >> >>>>
>> >> >>>> Here are additional log tiles as well as a tree of the
>> >problematic
>> >> >>>> Gluster storage domain. During this time I attempted to copy
>a
>> >> >virtual disk
>> >> >>>> to another domain, move a virtual disk to another domain and
>run
>> >a
>> >> >VM where
>> >> >>>> the virtual hard disk would be used.
>> >> >>>>
>> >> >>>> The copies/moves failed and the VM went into pause mode when
>the
>> >> >virtual
>> >> >>>> HDD was involved.
>> >> >>>>
>>

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
806,
> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >>>> [Permission denied]
> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >> >13:24:43.732856] I
> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >> >>>>
> >>
> >>
>
> >>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >>>> [Permission denied]
> >> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >> >13:26:50.758178] I
> >> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >> >>>>
> >>
> >>
>
> >>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >>>> [Permission denied]
> >> >>>>
> >> >>>>
> >> >>>> In my case , the workaround was to downgrade the gluster
> >packages
> >> >on all
> >> >>>> nodes (and reboot each node 1 by 1 ) if the major version is the
> >> >same, but
> >> >>>> if you upgraded to v7.X - then you can try the v7.0 .
> >> >>>>
> >> >>>> Best Regards,
> >> >>>> Strahil Nikolov
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C Williams <
> >> >>>> cwilliams3...@gmail.com> написа:
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> Hello,
> >> >>>>
> >> >>>> Here are additional log tiles as well as a tree of the
> >problematic
> >> >>>> Gluster storage domain. During this time I attempted to copy a
> >> >virtual disk
> >> >>>> to another domain, move a virtual disk to another domain and run
> >a
> >> >VM where
> >> >>>> the virtual hard disk would be used.
> >> >>>>
> >> >>>> The copies/moves failed and the VM went into pause mode when the
> >> >virtual
> >> >>>> HDD was involved.
> >> >>>>
> >> >>>> Please check these out.
> >> >>>>
> >> >>>> Thank You For Your Help !
> >> >>>>
> >> >>>> On Sat, Jun 20, 2020 at 9:54 AM C Williams
> >> >
> >> >>>> wrote:
> >> >>>> > Strahil,
> >> >>>> >
> >> >>>> > I understand. Please keep me posted.
> >> >>>> >
> >> >>>> > Thanks For The Help !
> >> >>>> >
> >> >>>> > On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov
> >> >
> >> >>>> wrote:
> >> >>>> >> Hey C Williams,
> >> >>>> >>
> >> >>>> >> sorry for the delay,  but I couldn't get somw time to check
> >your
> >> >>>> logs.  Will  try a  little  bit later.
> >> >>>> >>
> >> >>>> >> Best Regards,
> >> >>>> >> Strahil  Nikolov
> >> >>>> >>
> >> >>>> >> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams <
> >> >>>> cwilliams3...@gmail.com> написа:
> >> >>>> >>>Hello,
> >> >>>> >&

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
main-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >>>> [Permission denied]
>> >>>>
>> >>>>
>> >>>> In my case , the workaround was to downgrade the gluster
>packages
>> >on all
>> >>>> nodes (and reboot each node 1 by 1 ) if the major version is the
>> >same, but
>> >>>> if you upgraded to v7.X - then you can try the v7.0 .
>> >>>>
>> >>>> Best Regards,
>> >>>> Strahil Nikolov
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C Williams <
>> >>>> cwilliams3...@gmail.com> написа:
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> Hello,
>> >>>>
>> >>>> Here are additional log tiles as well as a tree of the
>problematic
>> >>>> Gluster storage domain. During this time I attempted to copy a
>> >virtual disk
>> >>>> to another domain, move a virtual disk to another domain and run
>a
>> >VM where
>> >>>> the virtual hard disk would be used.
>> >>>>
>> >>>> The copies/moves failed and the VM went into pause mode when the
>> >virtual
>> >>>> HDD was involved.
>> >>>>
>> >>>> Please check these out.
>> >>>>
>> >>>> Thank You For Your Help !
>> >>>>
>> >>>> On Sat, Jun 20, 2020 at 9:54 AM C Williams
>> >
>> >>>> wrote:
>> >>>> > Strahil,
>> >>>> >
>> >>>> > I understand. Please keep me posted.
>> >>>> >
>> >>>> > Thanks For The Help !
>> >>>> >
>> >>>> > On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov
>> >
>> >>>> wrote:
>> >>>> >> Hey C Williams,
>> >>>> >>
>> >>>> >> sorry for the delay,  but I couldn't get somw time to check
>your
>> >>>> logs.  Will  try a  little  bit later.
>> >>>> >>
>> >>>> >> Best Regards,
>> >>>> >> Strahil  Nikolov
>> >>>> >>
>> >>>> >> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams <
>> >>>> cwilliams3...@gmail.com> написа:
>> >>>> >>>Hello,
>> >>>> >>>
>> >>>> >>>Was wanting to follow up on this issue. Users are impacted.
>> >>>> >>>
>> >>>> >>>Thank You
>> >>>> >>>
>> >>>> >>>On Fri, Jun 19, 2020 at 9:20 AM C Williams
>> >
>> >>>> >>>wrote:
>> >>>> >>>
>> >>>> >>>> Hello,
>> >>>> >>>>
>> >>>> >>>> Here are the logs (some IPs are changed )
>> >>>> >>>>
>> >>>> >>>> ov05 is the SPM
>> >>>> >>>>
>> >>>> >>>> Thank You For Your Help !
>> >>>> >>>>
>> >>>> >>>> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov
>> >>>> >>>
>> >>>> >>>> wrote:
>> >>>> >>>>
>> >>>> >>>>> Check on the hosts tab , which is your current SPM (last
>> >column in
>> >>>> >>>Admin
>> >>>> >>>>> UI).
>> >>>> >>>>> Then open the /var/log/vdsm/vdsm.log  and repeat the
>> >operation.
>> >>>> >>>>> Then provide the log from that host and the engine's log
>(on
>> >the
>> >>>> >>>>> HostedEngine VM or on your standalone engine).
>> >>>> >>>>>
>> >>>> >>>>> Best Regards,
>> >>>> >>>>> Strahil Nik

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
>>> from Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
> >>>>
> >>>> Did you update your setup recently ? Did you upgrade gluster also ?
> >>>>
> >>>> You have to check the gluster logs in order to verify that, so you
> >can
> >>>> try:
> >>>>
> >>>> 1. Set Gluster logs to trace level (for details check:
> >>>>
> >
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
> >>>> )
> >>>> 2. Power up a VM that was already off , or retry the procedure from
> >the
> >>>> logs you sent.
> >>>> 3. Stop the trace level of the logs
> >>>> 4. Check libvirt logs on the host that was supposed to power up the
> >VM
> >>>> (in case a VM was powered on)
> >>>> 5. Check the gluster brick logs on all nodes for ACL errors.
> >>>> Here is a sample from my old logs:
> >>>>
> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >13:19:41.489047] I
> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >>>>
>
> >4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >>>> [Permission denied]
> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >13:22:51.818796] I
> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >>>>
>
> >4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >>>> [Permission denied]
> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >13:24:43.732856] I
> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >>>>
>
> >4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >>>> [Permission denied]
> >>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
> >13:26:50.758178] I
> >>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
> >>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
> >>>>
>
> >4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
> >>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
> >>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >>>> [Permission denied]
> >>>>
> >>>>
> >>>> In my case , the workaround was to downgrade the gluster packages
> >on all
> >>>> nodes (and reboot each node 1 by 1 ) if the major version is the
> >same, but
> >>>> if you upgraded to v7.X - then you can try the v7.0 .
> >>>>
> >>>> Best Regards,
> >>>> Strahil Nikolov
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C Williams <
> >>>> cwilliams3...@gmail.com> написа:
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>> Here are additional log tiles as well as a tree of the problematic
> >>>> Gluster storage domain. During this time I attempted to copy a
> >virtual disk
> >>>> to another domain, move a virtual disk to another domain and run a
> >VM where
> >>>> the virt

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
cl.c:262:posix_acl_log_permit_denied]
>>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>>>
>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>>> [Permission denied]
>>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>13:22:51.818796] I
>>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>>>
>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>>> [Permission denied]
>>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>13:24:43.732856] I
>>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>>>
>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>>> [Permission denied]
>>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>13:26:50.758178] I
>>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>>>
>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>>> [Permission denied]
>>>>
>>>>
>>>> In my case , the workaround was to downgrade the gluster packages
>on all
>>>> nodes (and reboot each node 1 by 1 ) if the major version is the
>same, but
>>>> if you upgraded to v7.X - then you can try the v7.0 .
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C Williams <
>>>> cwilliams3...@gmail.com> написа:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>> Here are additional log tiles as well as a tree of the problematic
>>>> Gluster storage domain. During this time I attempted to copy a
>virtual disk
>>>> to another domain, move a virtual disk to another domain and run a
>VM where
>>>> the virtual hard disk would be used.
>>>>
>>>> The copies/moves failed and the VM went into pause mode when the
>virtual
>>>> HDD was involved.
>>>>
>>>> Please check these out.
>>>>
>>>> Thank You For Your Help !
>>>>
>>>> On Sat, Jun 20, 2020 at 9:54 AM C Williams
>
>>>> wrote:
>>>> > Strahil,
>>>> >
>>>> > I understand. Please keep me posted.
>>>> >
>>>> > Thanks For The Help !
>>>> >
>>>> > On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov
>
>>>> wrote:
>>>> >> Hey C Williams,
>>>> >>
>>>> >> sorry for the delay,  but I couldn't get somw time to check your
>>>> logs.  Will  try a  little  bit later.
>>>> >>
>>>> >> Best Regards,
>>>> >> Strahil  Nikolov
>>>> >>
>>>> >> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams <
>>>> cwilliams3...@gmail.com> написа:
>>>> >>>Hello,
>>>> >>>
>>>> >>>Was wanting to follow up on this issue. Users are impacted.
>>>> >>>
>>>> >>>Thank You
>>>> >>>
>>>> >>>On Fri, Jun 19, 2020 at 9:20 AM C Williams
>
>>>> >>>wrote:
>>>> >>>
>>>> >>>> Hello,
>>>> >>&

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread C Williams
:[2020-03-18 13:24:43.732856] I
>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>> [Permission denied]
>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:26:50.758178] I
>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>> [Permission denied]
>>>
>>>
>>> In my case , the workaround was to downgrade the gluster packages on all
>>> nodes (and reboot each node 1 by 1 ) if the major version is the same, but
>>> if you upgraded to v7.X - then you can try the v7.0 .
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C Williams <
>>> cwilliams3...@gmail.com> написа:
>>>
>>>
>>>
>>>
>>>
>>> Hello,
>>>
>>> Here are additional log tiles as well as a tree of the problematic
>>> Gluster storage domain. During this time I attempted to copy a virtual disk
>>> to another domain, move a virtual disk to another domain and run a VM where
>>> the virtual hard disk would be used.
>>>
>>> The copies/moves failed and the VM went into pause mode when the virtual
>>> HDD was involved.
>>>
>>> Please check these out.
>>>
>>> Thank You For Your Help !
>>>
>>> On Sat, Jun 20, 2020 at 9:54 AM C Williams 
>>> wrote:
>>> > Strahil,
>>> >
>>> > I understand. Please keep me posted.
>>> >
>>> > Thanks For The Help !
>>> >
>>> > On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov 
>>> wrote:
>>> >> Hey C Williams,
>>> >>
>>> >> sorry for the delay,  but I couldn't get somw time to check your
>>> logs.  Will  try a  little  bit later.
>>> >>
>>> >> Best Regards,
>>> >> Strahil  Nikolov
>>> >>
>>> >> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams <
>>> cwilliams3...@gmail.com> написа:
>>> >>>Hello,
>>> >>>
>>> >>>Was wanting to follow up on this issue. Users are impacted.
>>> >>>
>>> >>>Thank You
>>> >>>
>>> >>>On Fri, Jun 19, 2020 at 9:20 AM C Williams 
>>> >>>wrote:
>>> >>>
>>> >>>> Hello,
>>> >>>>
>>> >>>> Here are the logs (some IPs are changed )
>>> >>>>
>>> >>>> ov05 is the SPM
>>> >>>>
>>> >>>> Thank You For Your Help !
>>> >>>>
>>> >>>> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov
>>> >>>
>>> >>>> wrote:
>>> >>>>
>>> >>>>> Check on the hosts tab , which is your current SPM (last column in
>>> >>>Admin
>>> >>>>> UI).
>>> >>>>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
>>> >>>>> Then provide the log from that host and the engine's log (on the
>>> >>>>> HostedEngine VM or on your standalone engine).
>>> >>>>>
>>> >>>>> Best Regards,
>>> >>>>> Strahil Nikolov
>>> >>>>>
>>> >>>>> На 18 юни 2020 г. 23:59:36 GMT+03:00, C Williams
>>> >>>
>>> >>>>> написа:
>>> >>>>> >Resending to eliminate email issues
>>> >>>>> >
>>> >>>>> >-- Forwarded message -
>>> >>>>> >From: C Williams 
>>> >>>>> >Date: Thu, Jun 18, 2020 at 4:01 PM
>&g

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread C Williams
com> написа:
>>
>>
>>
>>
>>
>> Hello,
>>
>> Here are additional log tiles as well as a tree of the problematic
>> Gluster storage domain. During this time I attempted to copy a virtual disk
>> to another domain, move a virtual disk to another domain and run a VM where
>> the virtual hard disk would be used.
>>
>> The copies/moves failed and the VM went into pause mode when the virtual
>> HDD was involved.
>>
>> Please check these out.
>>
>> Thank You For Your Help !
>>
>> On Sat, Jun 20, 2020 at 9:54 AM C Williams 
>> wrote:
>> > Strahil,
>> >
>> > I understand. Please keep me posted.
>> >
>> > Thanks For The Help !
>> >
>> > On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov 
>> wrote:
>> >> Hey C Williams,
>> >>
>> >> sorry for the delay,  but I couldn't get somw time to check your
>> logs.  Will  try a  little  bit later.
>> >>
>> >> Best Regards,
>> >> Strahil  Nikolov
>> >>
>> >> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams <
>> cwilliams3...@gmail.com> написа:
>> >>>Hello,
>> >>>
>> >>>Was wanting to follow up on this issue. Users are impacted.
>> >>>
>> >>>Thank You
>> >>>
>> >>>On Fri, Jun 19, 2020 at 9:20 AM C Williams 
>> >>>wrote:
>> >>>
>> >>>> Hello,
>> >>>>
>> >>>> Here are the logs (some IPs are changed )
>> >>>>
>> >>>> ov05 is the SPM
>> >>>>
>> >>>> Thank You For Your Help !
>> >>>>
>> >>>> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov
>> >>>
>> >>>> wrote:
>> >>>>
>> >>>>> Check on the hosts tab , which is your current SPM (last column in
>> >>>Admin
>> >>>>> UI).
>> >>>>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
>> >>>>> Then provide the log from that host and the engine's log (on the
>> >>>>> HostedEngine VM or on your standalone engine).
>> >>>>>
>> >>>>> Best Regards,
>> >>>>> Strahil Nikolov
>> >>>>>
>> >>>>> На 18 юни 2020 г. 23:59:36 GMT+03:00, C Williams
>> >>>
>> >>>>> написа:
>> >>>>> >Resending to eliminate email issues
>> >>>>> >
>> >>>>> >-- Forwarded message -
>> >>>>> >From: C Williams 
>> >>>>> >Date: Thu, Jun 18, 2020 at 4:01 PM
>> >>>>> >Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
>> >>>>> >To: Strahil Nikolov 
>> >>>>> >
>> >>>>> >
>> >>>>> >Here is output from mount
>> >>>>> >
>> >>>>> >192.168.24.12:/stor/import0 on
>> >>>>> >/rhev/data-center/mnt/192.168.24.12:_stor_import0
>> >>>>> >type nfs4
>> >>>>>
>> >>>>>
>>
>> >>>>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
>> >>>>> >192.168.24.13:/stor/import1 on
>> >>>>> >/rhev/data-center/mnt/192.168.24.13:_stor_import1
>> >>>>> >type nfs4
>> >>>>>
>> >>>>>
>>
>> >>>>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >>>>> >192.168.24.13:/stor/iso1 on
>> >>>>> >/rhev/data-center/mnt/192.168.24.13:_stor_iso1
>> >>>>> >type nfs4
>> >>>>>
>> >>>>>
>>
>> >>>>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >>>>> >192.168.24.13:/stor/export0 on
>> >>>>> >/rhev/data-center/mnt/192.168.24.13:_stor_export0
>> >>>>&

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread Strahil Nikolov via Users
Hi ,


This one really looks like the ACL bug I was hit with when I updated from 
Gluster v6.5 to 6.6 and later from 7.0 to 7.2.

Did you update your setup recently ? Did you upgrade gluster also ?

You have to check the gluster logs in order to verify that, so you can try:

1. Set Gluster logs to trace level (for details check: 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
 )
2. Power up a VM that was already off , or retry the procedure from the logs 
you sent.
3. Stop the trace level of the logs
4. Check libvirt logs on the host that was supposed to power up the VM (in case 
a VM was powered on)
5. Check the gluster brick logs on all nodes for ACL errors.
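
Roughly, steps 1, 3 and 5 would look like this (just a sketch - I am assuming the
volume is named images3 and the default log locations, so adjust to your setup):

gluster volume set images3 diagnostics.brick-log-level TRACE
gluster volume set images3 diagnostics.client-log-level TRACE
# ... reproduce the failure (step 2), then put the level back (step 3):
gluster volume set images3 diagnostics.brick-log-level INFO
gluster volume set images3 diagnostics.client-log-level INFO
# step 5 - on every node, look for ACL denials in the brick logs:
grep -iE 'posix_acl|Permission denied' /var/log/glusterfs/bricks/*.log
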
Here is a sample from my old logs:

gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:19:41.489047] I [MSGID: 
139001] [posix-acl.c:262:posix_acl_log_permit_denied] 
0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
 gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:36,gid:36,perm:1,ngrps:3), 
ctx
(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission 
denied]
gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:22:51.818796] I [MSGID: 
139001] [posix-acl.c:262:posix_acl_log_permit_denied] 
0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
 gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:36,gid:36,perm:1,ngrps:3), 
ctx
(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission 
denied]
gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:24:43.732856] I [MSGID: 
139001] [posix-acl.c:262:posix_acl_log_permit_denied] 
0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
 gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:36,gid:36,perm:1,ngrps:3), 
ctx
(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission 
denied]
gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:26:50.758178] I [MSGID: 
139001] [posix-acl.c:262:posix_acl_log_permit_denied] 
0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
 gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:36,gid:36,perm:1,ngrps:3), 
ctx
(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission 
denied]


In my case, the workaround was to downgrade the gluster packages on all nodes
(and reboot each node one by one) if the major version is the same; but if you
upgraded to v7.X, then you can try v7.0.
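
For example, one node at a time (again only a sketch - I am assuming a yum-based
node and the images3 volume; first check which older gluster build your repos
still offer before touching anything):

yum list glusterfs --showduplicates
yum downgrade glusterfs glusterfs-server glusterfs-fuse glusterfs-api
reboot
# wait for heals to finish before moving on to the next node:
gluster volume heal images3 info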

Best Regards,
Strahil Nikolov






On Saturday, June 20, 2020 at 18:48:42 GMT+3, C Williams 
 wrote: 





Hello,

Here are additional log files as well as a tree of the problematic Gluster 
storage domain. During this time I attempted to copy a virtual disk to another 
domain, move a virtual disk to another domain and run a VM where the virtual 
hard disk would be used. 

The copies/moves failed and the VM went into pause mode when the virtual HDD 
was involved.

Please check these out.

Thank You For Your Help !

On Sat, Jun 20, 2020 at 9:54 AM C Williams  wrote:
> Strahil,
> 
> I understand. Please keep me posted.
> 
> Thanks For The Help ! 
> 
> On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov  wrote:
>> Hey C Williams,
>> 
>> sorry for the delay,  but I couldn't get somw time to check your logs.  Will 
>>  try a  little  bit later.
>> 
>> Best Regards,
>> Strahil  Nikolov
>> 
>> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams  
>> написа:
>>>Hello,
>>>
>>>Was wanting to follow up on this issue. Users are impacted.
>>>
>>>Thank You
>>>
>>>On Fri, Jun 19, 2020 at 9:20 AM C Williams 
>>>wrote:
>>>
>>>> Hello,
>>>>
>>>> Here are the logs (some IPs are changed )
>>>>
>>>> ov05 is the SPM
>>>>
>>>> Thank You For Your Help !
>>>>
>>>> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov
>>>
>>>> wrote:
>>>>
>>>>> Check on the hosts tab , which is your current SPM (last column in
>>>Admin
>>>>> UI).
>>>>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
>>>>> Then provide the log from that host and the engine's log (on the
>>>>> HostedEngine VM or on your standalone engine).
>>>>>
>>>>> Best Regards,
>>>>> Strahil Nikolov
>>>>&

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread C Williams
Strahil,

I understand. Please keep me posted.

Thanks For The Help !

On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov 
wrote:

> Hey C Williams,
>
> sorry for the delay,  but I couldn't get somw time to check your logs.
> Will  try a  little  bit later.
>
> Best Regards,
> Strahil  Nikolov
>
> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams 
> написа:
> >Hello,
> >
> >Was wanting to follow up on this issue. Users are impacted.
> >
> >Thank You
> >
> >On Fri, Jun 19, 2020 at 9:20 AM C Williams 
> >wrote:
> >
> >> Hello,
> >>
> >> Here are the logs (some IPs are changed )
> >>
> >> ov05 is the SPM
> >>
> >> Thank You For Your Help !
> >>
> >> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov
> >
> >> wrote:
> >>
> >>> Check on the hosts tab , which is your current SPM (last column in
> >Admin
> >>> UI).
> >>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
> >>> Then provide the log from that host and the engine's log (on the
> >>> HostedEngine VM or on your standalone engine).
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>> На 18 юни 2020 г. 23:59:36 GMT+03:00, C Williams
> >
> >>> написа:
> >>> >Resending to eliminate email issues
> >>> >
> >>> >-- Forwarded message -
> >>> >From: C Williams 
> >>> >Date: Thu, Jun 18, 2020 at 4:01 PM
> >>> >Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
> >>> >To: Strahil Nikolov 
> >>> >
> >>> >
> >>> >Here is output from mount
> >>> >
> >>> >192.168.24.12:/stor/import0 on
> >>> >/rhev/data-center/mnt/192.168.24.12:_stor_import0
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
> >>> >192.168.24.13:/stor/import1 on
> >>> >/rhev/data-center/mnt/192.168.24.13:_stor_import1
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
> >>> >192.168.24.13:/stor/iso1 on
> >>> >/rhev/data-center/mnt/192.168.24.13:_stor_iso1
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
> >>> >192.168.24.13:/stor/export0 on
> >>> >/rhev/data-center/mnt/192.168.24.13:_stor_export0
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
> >>> >192.168.24.15:/images on
> >>> >/rhev/data-center/mnt/glusterSD/192.168.24.15:_images
> >>> >type fuse.glusterfs
> >>>
> >>>
>
> >>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> >>> >192.168.24.18:/images3 on
> >>> >/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3
> >>> >type fuse.glusterfs
> >>>
> >>>
>
> >>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> >>> >tmpfs on /run/user/0 type tmpfs
> >>> >(rw,nosuid,nodev,relatime,seclabel,size=13198392k,mode=700)
> >>> >[root@ov06 glusterfs]#
> >>> >
> >>> >Also here is a screenshot of the console
> >>> >
> >>> >[image: image.png]
> >>> >The other domains are up
> >>> >
> >>> >Import0 and Import1 are NFS . GLCL0 is gluster. They all are
> >running
> >>> >VMs
> >>> >
> >>> >Thank You For Your Help !
> >>> >
> >>> >On Thu, Jun 18, 2020 at 3:51 PM Strahil Nikolov
> >
> >>> >wrote:
> >>> >
> >>> >> I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1'
> >>> >mounted
> >>> &

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread Strahil Nikolov via Users
Hey C Williams,

sorry for the delay, but I couldn't get some time to check your logs. Will
try a little bit later.

Best Regards,
Strahil  Nikolov

On June 20, 2020 at 2:37:22 GMT+03:00, C Williams 
wrote:
>Hello,
>
>Was wanting to follow up on this issue. Users are impacted.
>
>Thank You
>
>On Fri, Jun 19, 2020 at 9:20 AM C Williams 
>wrote:
>
>> Hello,
>>
>> Here are the logs (some IPs are changed )
>>
>> ov05 is the SPM
>>
>> Thank You For Your Help !
>>
>> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov
>
>> wrote:
>>
>>> Check on the hosts tab , which is your current SPM (last column in
>Admin
>>> UI).
>>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
>>> Then provide the log from that host and the engine's log (on the
>>> HostedEngine VM or on your standalone engine).
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> На 18 юни 2020 г. 23:59:36 GMT+03:00, C Williams
>
>>> написа:
>>> >Resending to eliminate email issues
>>> >
>>> >-- Forwarded message -
>>> >From: C Williams 
>>> >Date: Thu, Jun 18, 2020 at 4:01 PM
>>> >Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
>>> >To: Strahil Nikolov 
>>> >
>>> >
>>> >Here is output from mount
>>> >
>>> >192.168.24.12:/stor/import0 on
>>> >/rhev/data-center/mnt/192.168.24.12:_stor_import0
>>> >type nfs4
>>>
>>>
>>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
>>> >192.168.24.13:/stor/import1 on
>>> >/rhev/data-center/mnt/192.168.24.13:_stor_import1
>>> >type nfs4
>>>
>>>
>>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>>> >192.168.24.13:/stor/iso1 on
>>> >/rhev/data-center/mnt/192.168.24.13:_stor_iso1
>>> >type nfs4
>>>
>>>
>>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>>> >192.168.24.13:/stor/export0 on
>>> >/rhev/data-center/mnt/192.168.24.13:_stor_export0
>>> >type nfs4
>>>
>>>
>>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>>> >192.168.24.15:/images on
>>> >/rhev/data-center/mnt/glusterSD/192.168.24.15:_images
>>> >type fuse.glusterfs
>>>
>>>
>>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>>> >192.168.24.18:/images3 on
>>> >/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3
>>> >type fuse.glusterfs
>>>
>>>
>>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>>> >tmpfs on /run/user/0 type tmpfs
>>> >(rw,nosuid,nodev,relatime,seclabel,size=13198392k,mode=700)
>>> >[root@ov06 glusterfs]#
>>> >
>>> >Also here is a screenshot of the console
>>> >
>>> >[image: image.png]
>>> >The other domains are up
>>> >
>>> >Import0 and Import1 are NFS . GLCL0 is gluster. They all are
>running
>>> >VMs
>>> >
>>> >Thank You For Your Help !
>>> >
>>> >On Thu, Jun 18, 2020 at 3:51 PM Strahil Nikolov
>
>>> >wrote:
>>> >
>>> >> I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1'
>>> >mounted
>>> >> at all  .
>>> >> What is the status  of all storage domains ?
>>> >>
>>> >> Best  Regards,
>>> >> Strahil  Nikolov
>>> >>
>>> >> На 18 юни 2020 г. 21:43:44 GMT+03:00, C Williams
>>> >
>>> >> написа:
>>> >> >  Resending to deal with possible email issues
>>> >> >
>>> >> >-- Forwarded message -
>>> >> >From: C Williams 
>>> >> >Date: Thu, Jun 18, 2020 at 2:07 PM
>>> >> >Subject: Re: [ovirt-users] Issues with Gluster Domain
>>> >> >T

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-19 Thread C Williams
Hello,

Was wanting to follow up on this issue. Users are impacted.

Thank You

On Fri, Jun 19, 2020 at 9:20 AM C Williams  wrote:

> Hello,
>
> Here are the logs (some IPs are changed )
>
> ov05 is the SPM
>
> Thank You For Your Help !
>
> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov 
> wrote:
>
>> Check on the hosts tab , which is your current SPM (last column in Admin
>> UI).
>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
>> Then provide the log from that host and the engine's log (on the
>> HostedEngine VM or on your standalone engine).
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 18 юни 2020 г. 23:59:36 GMT+03:00, C Williams 
>> написа:
>> >Resending to eliminate email issues
>> >
>> >---------- Forwarded message -----
>> >From: C Williams 
>> >Date: Thu, Jun 18, 2020 at 4:01 PM
>> >Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
>> >To: Strahil Nikolov 
>> >
>> >
>> >Here is output from mount
>> >
>> >192.168.24.12:/stor/import0 on
>> >/rhev/data-center/mnt/192.168.24.12:_stor_import0
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
>> >192.168.24.13:/stor/import1 on
>> >/rhev/data-center/mnt/192.168.24.13:_stor_import1
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >192.168.24.13:/stor/iso1 on
>> >/rhev/data-center/mnt/192.168.24.13:_stor_iso1
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >192.168.24.13:/stor/export0 on
>> >/rhev/data-center/mnt/192.168.24.13:_stor_export0
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >192.168.24.15:/images on
>> >/rhev/data-center/mnt/glusterSD/192.168.24.15:_images
>> >type fuse.glusterfs
>>
>> >(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>> >192.168.24.18:/images3 on
>> >/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3
>> >type fuse.glusterfs
>>
>> >(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>> >tmpfs on /run/user/0 type tmpfs
>> >(rw,nosuid,nodev,relatime,seclabel,size=13198392k,mode=700)
>> >[root@ov06 glusterfs]#
>> >
>> >Also here is a screenshot of the console
>> >
>> >[image: image.png]
>> >The other domains are up
>> >
>> >Import0 and Import1 are NFS . GLCL0 is gluster. They all are running
>> >VMs
>> >
>> >Thank You For Your Help !
>> >
>> >On Thu, Jun 18, 2020 at 3:51 PM Strahil Nikolov 
>> >wrote:
>> >
>> >> I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1'
>> >mounted
>> >> at all  .
>> >> What is the status  of all storage domains ?
>> >>
>> >> Best  Regards,
>> >> Strahil  Nikolov
>> >>
>> >> На 18 юни 2020 г. 21:43:44 GMT+03:00, C Williams
>> >
>> >> написа:
>> >> >  Resending to deal with possible email issues
>> >> >
>> >> >-- Forwarded message -
>> >> >From: C Williams 
>> >> >Date: Thu, Jun 18, 2020 at 2:07 PM
>> >> >Subject: Re: [ovirt-users] Issues with Gluster Domain
>> >> >To: Strahil Nikolov 
>> >> >
>> >> >
>> >> >More
>> >> >
>> >> >[root@ov06 ~]# for i in $(gluster volume list);  do  echo $i;echo;
>> >> >gluster
>> >> >volume info $i; echo;echo;gluster volume status
>> >$i;echo;echo;echo;done
>> >> >images3
>> >> >
>> >> >
>> >> >Volume Name: images3
>> >> >Type: Replicate
>> >> >Volume ID: 0243d439-1b29-47d0-ab39-d61c2f15ae8b
>> >> >Status: Started
>> >> >Snapshot Count: 0
>> >> >Number of Bricks: 1 x 3 = 3
>>

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-18 Thread Strahil Nikolov via Users
Check on the Hosts tab which host is your current SPM (last column in the Admin UI).
Then open /var/log/vdsm/vdsm.log on that host and repeat the operation.
Then provide the log from that host and the engine's log (on the HostedEngine
VM or on your standalone engine).
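
Something like this, assuming the default log locations:

tail -f /var/log/vdsm/vdsm.log              # on the SPM host, while you repeat the operation
tail -f /var/log/ovirt-engine/engine.log    # on the engine (HostedEngine VM or standalone)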

Best Regards,
Strahil Nikolov

On June 18, 2020 at 23:59:36 GMT+03:00, C Williams 
wrote:
>Resending to eliminate email issues
>
>-- Forwarded message -
>From: C Williams 
>Date: Thu, Jun 18, 2020 at 4:01 PM
>Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
>To: Strahil Nikolov 
>
>
>Here is output from mount
>
>192.168.24.12:/stor/import0 on
>/rhev/data-center/mnt/192.168.24.12:_stor_import0
>type nfs4
>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
>192.168.24.13:/stor/import1 on
>/rhev/data-center/mnt/192.168.24.13:_stor_import1
>type nfs4
>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>192.168.24.13:/stor/iso1 on
>/rhev/data-center/mnt/192.168.24.13:_stor_iso1
>type nfs4
>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>192.168.24.13:/stor/export0 on
>/rhev/data-center/mnt/192.168.24.13:_stor_export0
>type nfs4
>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>192.168.24.15:/images on
>/rhev/data-center/mnt/glusterSD/192.168.24.15:_images
>type fuse.glusterfs
>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>192.168.24.18:/images3 on
>/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3
>type fuse.glusterfs
>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>tmpfs on /run/user/0 type tmpfs
>(rw,nosuid,nodev,relatime,seclabel,size=13198392k,mode=700)
>[root@ov06 glusterfs]#
>
>Also here is a screenshot of the console
>
>[image: image.png]
>The other domains are up
>
>Import0 and Import1 are NFS . GLCL0 is gluster. They all are running
>VMs
>
>Thank You For Your Help !
>
>On Thu, Jun 18, 2020 at 3:51 PM Strahil Nikolov 
>wrote:
>
>> I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1' 
>mounted
>> at all  .
>> What is the status  of all storage domains ?
>>
>> Best  Regards,
>> Strahil  Nikolov
>>
>> На 18 юни 2020 г. 21:43:44 GMT+03:00, C Williams
>
>> написа:
>> >  Resending to deal with possible email issues
>> >
>> >-- Forwarded message -
>> >From: C Williams 
>> >Date: Thu, Jun 18, 2020 at 2:07 PM
>> >Subject: Re: [ovirt-users] Issues with Gluster Domain
>> >To: Strahil Nikolov 
>> >
>> >
>> >More
>> >
>> >[root@ov06 ~]# for i in $(gluster volume list);  do  echo $i;echo;
>> >gluster
>> >volume info $i; echo;echo;gluster volume status
>$i;echo;echo;echo;done
>> >images3
>> >
>> >
>> >Volume Name: images3
>> >Type: Replicate
>> >Volume ID: 0243d439-1b29-47d0-ab39-d61c2f15ae8b
>> >Status: Started
>> >Snapshot Count: 0
>> >Number of Bricks: 1 x 3 = 3
>> >Transport-type: tcp
>> >Bricks:
>> >Brick1: 192.168.24.18:/bricks/brick04/images3
>> >Brick2: 192.168.24.19:/bricks/brick05/images3
>> >Brick3: 192.168.24.20:/bricks/brick06/images3
>> >Options Reconfigured:
>> >performance.client-io-threads: on
>> >nfs.disable: on
>> >transport.address-family: inet
>> >user.cifs: off
>> >auth.allow: *
>> >performance.quick-read: off
>> >performance.read-ahead: off
>> >performance.io-cache: off
>> >performance.low-prio-threads: 32
>> >network.remote-dio: off
>> >cluster.eager-lock: enable
>> >cluster.quorum-type: auto
>> >cluster.server-quorum-type: server
>> >cluster.data-self-heal-algorithm: full
>> >cluster.locking-scheme: granular
>> >cluster.shd-max-threads: 8
>> >cluster.shd-wait-qlength: 1
>> >features.shard: on
>> >cluster.choose-local: off
>> >client.event-threads: 4
>> >server.event-threads: 4
>> >storage.owner-uid: 36
>> >storage.owner-gid: 36
>> >performance.strict-o-direct: on
>> >network.ping-timeout: 30
>> >clust