[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-27 Thread Matt Snow
Pull request sent.

On Mon, Jan 25, 2021 at 11:21 AM Nir Soffer  wrote:

> On Mon, Jan 25, 2021 at 7:23 PM Matt Snow  wrote:
> >
> > I can confirm that removing "--manage-gids" flag from RPCMOUNTDOPTS in
> /etc/default/nfs-kernel-server now allows me to add the ZFS backed NFS
> share.
> >
> > From the man page:
> >
> > -g or --manage-gids
> > Accept requests from the kernel to map user id numbers into lists of
> group id numbers for use in access control. An NFS request will normally
> (except when using Kerberos or other cryptographic authentication) contains
> a user-id and a list of group-ids. Due to a limitation in the NFS protocol,
> at most 16 groups ids can be listed. If you use the -g flag, then the list
> of group ids received from the client will be replaced by a list of group
> ids determined by an appropriate lookup on the server. Note that the
> 'primary' group id is not affected so a newgroup command on the client will
> still be effective. This function requires a Linux Kernel with version at
> least 2.6.21.
> >
> >
> > I speculate that if I had directory services setup and the NFS server
> directed there, this would be a non-issue.
> >
> > Thank you so much for your help on this, Nir & Team!
>
> You can contribute by updating the nfs troubleshooting page with this info:
> https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
>
> See the link "Edit this page" in the bottom of the page.
>
> > On Mon, Jan 25, 2021 at 1:00 AM Nir Soffer  wrote:
> >>
> >> On Mon, Jan 25, 2021 at 12:33 AM Matt Snow  wrote:
> >>
> >> I reproduced the issue with ubuntu server 20.04 nfs server.
> >>
> >> The root cause is this setting in /etc/default/nfs-kernel-server:
> >> RPCMOUNTDOPTS="--manage-gids"
> >>
> >> Looking at
> https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1454112
> >> it seems that the purpose of this option is to ignore client groups,
> which
> >> breaks ovirt.
> >>
> >> After removing this option:
> >> RPCMOUNTDOPTS=""
> >>
> >> And restarting nfs-kernel-server service, creating storage domain works.
> >>
> >> You can check with Ubuntu folks why they are enabling this configuration
> >> by default, and if disabling it has any unwanted side effects.
> >>
> >> Nir
> >>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/76DYBZ7SLXBX4L6HYOLLSGCHPTICK3P7/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-25 Thread Nir Soffer
On Mon, Jan 25, 2021 at 7:23 PM Matt Snow  wrote:
>
> I can confirm that removing "--manage-gids" flag from RPCMOUNTDOPTS in 
> /etc/default/nfs-kernel-server now allows me to add the ZFS backed NFS share.
>
> From the man page:
>
> -g or --manage-gids
> Accept requests from the kernel to map user id numbers into lists of group id 
> numbers for use in access control. An NFS request will normally (except when 
> using Kerberos or other cryptographic authentication) contains a user-id and 
> a list of group-ids. Due to a limitation in the NFS protocol, at most 16 
> groups ids can be listed. If you use the -g flag, then the list of group ids 
> received from the client will be replaced by a list of group ids determined 
> by an appropriate lookup on the server. Note that the 'primary' group id is 
> not affected so a newgroup command on the client will still be effective. 
> This function requires a Linux Kernel with version at least 2.6.21.
>
>
> I speculate that if I had directory services setup and the NFS server 
> directed there, this would be a non-issue.
>
> Thank you so much for your help on this, Nir & Team!

You can contribute by updating the nfs troubleshooting page with this info:
https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html

See the link "Edit this page" in the bottom of the page.

> On Mon, Jan 25, 2021 at 1:00 AM Nir Soffer  wrote:
>>
>> On Mon, Jan 25, 2021 at 12:33 AM Matt Snow  wrote:
>>
>> I reproduced the issue with ubuntu server 20.04 nfs server.
>>
>> The root cause is this setting in /etc/default/nfs-kernel-server:
>> RPCMOUNTDOPTS="--manage-gids"
>>
>> Looking at https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1454112
>> it seems that the purpose of this option is to ignore client groups, which
>> breaks ovirt.
>>
>> After removing this option:
>> RPCMOUNTDOPTS=""
>>
>> And restarting nfs-kernel-server service, creating storage domain works.
>>
>> You can check with Ubuntu folks why they are enabling this configuration
>> by default, and if disabling it has any unwanted side effects.
>>
>> Nir
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PHTQVU5GZCQ5AEMT53ISFXLR2IRU7T2Y/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-25 Thread Matt Snow
I can confirm that removing the "--manage-gids" flag from RPCMOUNTDOPTS in
/etc/default/nfs-kernel-server now allows me to add the ZFS-backed NFS
share.
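
To double-check that the flag is really gone after restarting nfs-kernel-server,
you can inspect the running daemon (a generic check, nothing oVirt-specific):

  ps ax | grep '[r]pc.mountd'
  # the command line should no longer show --manage-gids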

From the man page:

-g or --manage-gids
Accept requests from the kernel to map user id numbers into lists of group
id numbers for use in access control. An NFS request will normally (except
when using Kerberos or other cryptographic authentication) contains a
user-id and a list of group-ids. Due to a limitation in the NFS protocol,
at most 16 groups ids can be listed. If you use the -g flag, then the list
of group ids received from the client will be replaced by a list of group
ids determined by an appropriate lookup on the server. Note that the
'primary' group id is not affected so a newgroup command on the client will
still be effective. This function requires a Linux Kernel with version at
least 2.6.21.


I speculate that if I had directory services setup and the NFS server
directed there, this would be a non-issue.

Thank you so much for your help on this, Nir & Team!

On Mon, Jan 25, 2021 at 1:00 AM Nir Soffer  wrote:

> On Mon, Jan 25, 2021 at 12:33 AM Matt Snow  wrote:
>
> I reproduced the issue with ubuntu server 20.04 nfs server.
>
> The root cause is this setting in /etc/default/nfs-kernel-server:
> RPCMOUNTDOPTS="--manage-gids"
>
> Looking at
> https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1454112
> it seems that the purpose of this option is to ignore client groups, which
> breaks ovirt.
>
> After removing this option:
> RPCMOUNTDOPTS=""
>
> And restarting nfs-kernel-server service, creating storage domain works.
>
> You can check with Ubuntu folks why they are enabling this configuration
> by default, and if disabling it has any unwanted side effects.
>
> Nir
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQOQWZMABAKSWJKN5LSSCNNBBIEBMSWH/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-25 Thread Nir Soffer
On Mon, Jan 25, 2021 at 12:33 AM Matt Snow  wrote:

I reproduced the issue with ubuntu server 20.04 nfs server.

The root cause is this setting in /etc/default/nfs-kernel-server:
RPCMOUNTDOPTS="--manage-gids"

Looking at https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1454112
it seems that the purpose of this option is to ignore client groups, which
breaks ovirt.

After removing this option:
RPCMOUNTDOPTS=""

And restarting nfs-kernel-server service, creating storage domain works.
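
For anyone hitting this later, the whole change boils down to a sketch like
this (stock Debian/Ubuntu packaging assumed; the EACCES in sanlock.log is
consistent with the server replacing the client-supplied groups, including
kvm/36, with its own lookup once --manage-gids is enabled):

  # /etc/default/nfs-kernel-server
  # was: RPCMOUNTDOPTS="--manage-gids"
  RPCMOUNTDOPTS=""

  systemctl restart nfs-kernel-server
  exportfs -v    # confirm the shares are still exported as expected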

You can check with Ubuntu folks why they are enabling this configuration
by default, and if disabling it has any unwanted side effects.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDRVTC675KQX2RX5P42YNNGYG7Z75UBC/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-24 Thread Matt Snow
Hi  Nir,
as a test I set up a CentOS 8.3 cloud VM and exported an NFS share. It
worked as expected! Clearly something is wrong with how Ubuntu exports NFS
shares, regardless of ZFS or other filesystems. Unfortunately that is where
my storage is. =\
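
For reference, the throwaway CentOS export was nothing fancy, roughly the
following sketch (path is arbitrary, options copied from Nir's working
example):

  dnf install -y nfs-utils
  mkdir -p /exports/host_storage && chown 36:36 /exports/host_storage
  echo '/exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)' >> /etc/exports
  systemctl enable --now nfs-server
  exportfs -rav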

On Thu, Jan 21, 2021 at 6:27 AM Nir Soffer  wrote:

> On Wed, Jan 20, 2021 at 7:36 PM Matt Snow  wrote:
> >
> >
> >
> > On Wed, Jan 20, 2021 at 2:46 AM Nir Soffer  wrote:
> >>
> >> On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
> >> >
> >> > I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the
> same problem with both versions. The issue occurs in both cockpit UI and
> tmux'd CLI of ovirt-hosted-engine-setup. I get passed the point where the
> VM is created and running.
> >> > I tried to do some debugging on my own before reaching out to this
> list. Any help is much appreciated!
> >> >
> >> > ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4
> cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I
> believe it meets the minimum requirements.
> >> >
> >> > NFS server:
> >> > * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
> >>
> >> We don't test ZFS, but since  you tried also non-ZFS server with same
> issue, the
> >> issue is probably not ZFS.
> >>
> >> > * NFS share settings are just 'rw=@172.16.1.0/24' but have also
> tried 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
> >>
> >> You are missing anonuid=36,anongid=36. This should not affect sanlock,
> but
> >> you will have issues with libvirt without this.
> >
> >
> > I found this setting in another email thread and have applied it
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep
> nfs
> >
> > tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
> >
> > root@stumpy:/tanker/ovirt/host_storage#
> >>
> >>
> >> Here is working export from my test system:
> >>
> >> /export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
> >>
> > I have updated both ZFS server and the Ubuntu 20.04 NFS server to use
> the above settings.
> >  # ZFS Server
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs set
> sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep
> nfs
> >
> > tanker/ovirt  sharenfs
> rw,sync,anonuid=36,anongid=36,no_subtree_check  local
> >
> > root@stumpy:/tanker/ovirt/host_storage#
> >
> > # Ubuntu laptop
> >
> > ls -ld /exports/host_storage
> >
> > drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage
> >
> > root@msnowubntlt:/exports/host_storage# showmount -e dongle
> >
> > Export list for dongle:
> >
> > /exports/host_storage *
> >
> > root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports
> >
> > /exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
> >
> > root@msnowubntlt:/exports/host_storage#
> >
> >
> >
> > Interesting point - upon being re-prompted to configure storage after
> initial failure I provided *different* NFS server information (the ubuntu
> laptop)
> > **snip**
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage
> domain]
> >
> > [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault
> detail is "[]". HTTP response code is 400.
> >
> > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
> response code is 400."}
> >
> >   Please specify the storage you would like to use (glusterfs,
> iscsi, fc, nfs)[nfs]: nfs
> >
> >   Please specify the nfs version you would like to use (auto,
> v3, v4, v4_0, v4_1, v4_2)[auto]:
> >
> >   Please specify the full shared storage connection path to use
> (example: host:/path): dongle:/exports/host_storage
> >
> >   If needed, specify additional mount options for the connection
> to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
> >
> > [ INFO  ] Creating Storage Domain
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a
> specific set of steps]
> >
> > **snip**
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage
> domain]
> >
> > [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault
> detail is "[]". HTTP response code is 400.
> >
> > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
> response code is 400."}
> >
> >   Please specify the storage you would like to use (glusterfs,
> iscsi, fc, nfs)[nfs]:
> >
> > **snip**
> > Upon checking mounts I see that the original server is still being used.
> >
> > [root@brick ~]# mount | grep nfs
> >
> > sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> >
> > stumpy:/tanker/ovirt/host_storage on
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4
> 

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-24 Thread Strahil Nikolov via Users
Can you try to alter the share options like this (check for typos as I am
typing by memory): anonuid=36,anongid=36,all_squash

And of course export a fresh one:

Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android 
 
  On Mon, Jan 25, 2021 at 0:51, Matt Snow wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BLFODDM63I6IRUV5CEZAFBJLE25UQZQL/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z46A52GG2MWTBCGIRMUOQUZQFSORT3JI/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-23 Thread Matt Snow
Thanks, Joop. I saw this solution on the list and tried it as well. It did
not work for me.

On Fri, Jan 22, 2021 at 3:48 AM Joop  wrote:

> Hallo All,
>
> I'm not sure if this was mentioned but I had some problems with NFS,
> using Synology,  a while back and the solution for me was to use a
> subfolder instead of the root of the NFS mount.
> My exports:
> /volume1/nfs*(rw,sec=sys,anonuid=36,anongid=36)
>
> And I mount data in oVirt as /volume1/nfs/data. Before I re-installed I
> used /volume1/nfs/data as an export.
> Worth a try?
>
> Regards,
>
> Joop
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BXK27HXYNB2X6HOHYCROIHKZYRT4FKTD/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P22LHTO3YOJHQKCJ2EB4RKIMUGUPADJH/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-22 Thread Joop
Hallo All,

I'm not sure if this was mentioned, but I had some problems with NFS
using Synology a while back, and the solution for me was to use a
subfolder instead of the root of the NFS mount.
My exports:
/volume1/nfs*(rw,sec=sys,anonuid=36,anongid=36)

And I mount data in oVirt as /volume1/nfs/data. Before I re-installed I
used /volume1/nfs/data as an export.
Worth a try?
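
If you do try it, remember that the subfolder itself also needs to be owned
by 36:36 (vdsm:kvm), something like:

  mkdir -p /volume1/nfs/data
  chown 36:36 /volume1/nfs/data
  # /etc/exports (or the Synology NFS rule) exports the parent:
  #   /volume1/nfs *(rw,sec=sys,anonuid=36,anongid=36)
  # and oVirt is pointed at <nfs-server>:/volume1/nfs/data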

Regards,

Joop
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BXK27HXYNB2X6HOHYCROIHKZYRT4FKTD/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-21 Thread Matt Snow
Hi  Nir,
I will check selinux on both NFS servers.
I've created a CentOS 8.3 cloud-image KVM guest on the Ubuntu 20.04 laptop. I am
stepping through the "installing oVirt as a standalone manager with local
databases" instructions.
Right now I am waiting for the host add to finish in the ovirt manager and
will try to add the NFS shares once that is done.

It would be slick if the self hosted engine worked out of the box with
node. Maybe in the future. :)  I will let you know what I find out
tomorrow.

Thank you for your help so far!

..Matt

On Thu, Jan 21, 2021 at 6:27 AM Nir Soffer  wrote:

> On Wed, Jan 20, 2021 at 7:36 PM Matt Snow  wrote:
> >
> >
> >
> > On Wed, Jan 20, 2021 at 2:46 AM Nir Soffer  wrote:
> >>
> >> On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
> >> >
> >> > I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the
> same problem with both versions. The issue occurs in both cockpit UI and
> tmux'd CLI of ovirt-hosted-engine-setup. I get passed the point where the
> VM is created and running.
> >> > I tried to do some debugging on my own before reaching out to this
> list. Any help is much appreciated!
> >> >
> >> > ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4
> cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I
> believe it meets the minimum requirements.
> >> >
> >> > NFS server:
> >> > * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
> >>
> >> We don't test ZFS, but since  you tried also non-ZFS server with same
> issue, the
> >> issue is probably not ZFS.
> >>
> >> > * NFS share settings are just 'rw=@172.16.1.0/24' but have also
> tried 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
> >>
> >> You are missing anonuid=36,anongid=36. This should not affect sanlock,
> but
> >> you will have issues with libvirt without this.
> >
> >
> > I found this setting in another email thread and have applied it
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep
> nfs
> >
> > tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
> >
> > root@stumpy:/tanker/ovirt/host_storage#
> >>
> >>
> >> Here is working export from my test system:
> >>
> >> /export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
> >>
> > I have updated both ZFS server and the Ubuntu 20.04 NFS server to use
> the above settings.
> >  # ZFS Server
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs set
> sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep
> nfs
> >
> > tanker/ovirt  sharenfs
> rw,sync,anonuid=36,anongid=36,no_subtree_check  local
> >
> > root@stumpy:/tanker/ovirt/host_storage#
> >
> > # Ubuntu laptop
> >
> > ls -ld /exports/host_storage
> >
> > drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage
> >
> > root@msnowubntlt:/exports/host_storage# showmount -e dongle
> >
> > Export list for dongle:
> >
> > /exports/host_storage *
> >
> > root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports
> >
> > /exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
> >
> > root@msnowubntlt:/exports/host_storage#
> >
> >
> >
> > Interesting point - upon being re-prompted to configure storage after
> initial failure I provided *different* NFS server information (the ubuntu
> laptop)
> > **snip**
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage
> domain]
> >
> > [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault
> detail is "[]". HTTP response code is 400.
> >
> > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
> response code is 400."}
> >
> >   Please specify the storage you would like to use (glusterfs,
> iscsi, fc, nfs)[nfs]: nfs
> >
> >   Please specify the nfs version you would like to use (auto,
> v3, v4, v4_0, v4_1, v4_2)[auto]:
> >
> >   Please specify the full shared storage connection path to use
> (example: host:/path): dongle:/exports/host_storage
> >
> >   If needed, specify additional mount options for the connection
> to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
> >
> > [ INFO  ] Creating Storage Domain
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a
> specific set of steps]
> >
> > **snip**
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage
> domain]
> >
> > [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault
> detail is "[]". HTTP response code is 400.
> >
> > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
> response code is 400."}
> >
> >   Please specify the 

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-21 Thread Nir Soffer
On Wed, Jan 20, 2021 at 7:36 PM Matt Snow  wrote:
>
>
>
> On Wed, Jan 20, 2021 at 2:46 AM Nir Soffer  wrote:
>>
>> On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
>> >
>> > I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the same 
>> > problem with both versions. The issue occurs in both cockpit UI and tmux'd 
>> > CLI of ovirt-hosted-engine-setup. I get passed the point where the VM is 
>> > created and running.
>> > I tried to do some debugging on my own before reaching out to this list. 
>> > Any help is much appreciated!
>> >
>> > ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4 
>> > cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but 
>> > I believe it meets the minimum requirements.
>> >
>> > NFS server:
>> > * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
>>
>> We don't test ZFS, but since  you tried also non-ZFS server with same issue, 
>> the
>> issue is probably not ZFS.
>>
>> > * NFS share settings are just 'rw=@172.16.1.0/24' but have also tried 
>> > 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
>>
>> You are missing anonuid=36,anongid=36. This should not affect sanlock, but
>> you will have issues with libvirt without this.
>
>
> I found this setting in another email thread and have applied it
>
> root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs
>
> tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
>
> root@stumpy:/tanker/ovirt/host_storage#
>>
>>
>> Here is working export from my test system:
>>
>> /export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
>>
> I have updated both ZFS server and the Ubuntu 20.04 NFS server to use the 
> above settings.
>  # ZFS Server
>
> root@stumpy:/tanker/ovirt/host_storage# zfs set 
> sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt
>
> root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs
>
> tanker/ovirt  sharenfs  
> rw,sync,anonuid=36,anongid=36,no_subtree_check  local
>
> root@stumpy:/tanker/ovirt/host_storage#
>
> # Ubuntu laptop
>
> ls -ld /exports/host_storage
>
> drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage
>
> root@msnowubntlt:/exports/host_storage# showmount -e dongle
>
> Export list for dongle:
>
> /exports/host_storage *
>
> root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports
>
> /exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
>
> root@msnowubntlt:/exports/host_storage#
>
>
>
> Interesting point - upon being re-prompted to configure storage after initial 
> failure I provided *different* NFS server information (the ubuntu laptop)
> **snip**
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
>
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail 
> is "[]". HTTP response code is 400.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 
> 400."}
>
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]: nfs
>
>   Please specify the nfs version you would like to use (auto, v3, v4, 
> v4_0, v4_1, v4_2)[auto]:
>
>   Please specify the full shared storage connection path to use 
> (example: host:/path): dongle:/exports/host_storage
>
>   If needed, specify additional mount options for the connection to 
> the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
>
> [ INFO  ] Creating Storage Domain
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
> of steps]
>
> **snip**
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
>
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail 
> is "[]". HTTP response code is 400.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 
> 400."}
>
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]:
>
> **snip**
> Upon checking mounts I see that the original server is still being used.
>
> [root@brick ~]# mount | grep nfs
>
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>
> stumpy:/tanker/ovirt/host_storage on 
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
> (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)

Using node and hosted engine makes everything more complicated
and harder to debug.

I would create an engine vm for testing on some machine, and add
this host as regular host. It will be easier to debug this way, since
you don't have to reinstall everything from scratch on every failure.

Once we sort out the issue with the NFS/ZFS storage, using node and

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Matt Snow
On Wed, Jan 20, 2021 at 2:46 AM Nir Soffer  wrote:

> On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
> >
> > I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the
> same problem with both versions. The issue occurs in both cockpit UI and
> tmux'd CLI of ovirt-hosted-engine-setup. I get passed the point where the
> VM is created and running.
> > I tried to do some debugging on my own before reaching out to this list.
> Any help is much appreciated!
> >
> > ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4
> cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I
> believe it meets the minimum requirements.
> >
> > NFS server:
> > * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
>
> We don't test ZFS, but since  you tried also non-ZFS server with same
> issue, the
> issue is probably not ZFS.
>
> > * NFS share settings are just 'rw=@172.16.1.0/24' but have also tried
> 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
>
> You are missing anonuid=36,anongid=36. This should not affect sanlock, but
> you will have issues with libvirt without this.
>

I found this setting in another email thread and have applied it

root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs

tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
root@stumpy:/tanker/ovirt/host_storage#

>
> Here is working export from my test system:
>
> /export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
>
> I have updated both ZFS server and the Ubuntu 20.04 NFS server to use the
above settings.
 # ZFS Server

root@stumpy:/tanker/ovirt/host_storage# zfs set
sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt

root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs

tanker/ovirt  sharenfs
rw,sync,anonuid=36,anongid=36,no_subtree_check  local
root@stumpy:/tanker/ovirt/host_storage#

# Ubuntu laptop

ls -ld /exports/host_storage

drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage

root@msnowubntlt:/exports/host_storage# showmount -e dongle

Export list for dongle:

/exports/host_storage *

root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports

/exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)

root@msnowubntlt:/exports/host_storage#



Interesting point - upon being re-prompted to configure storage after
initial failure I provided *different* NFS server information (the ubuntu
laptop)
**snip**

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]

[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail
is "[]". HTTP response code is 400.


[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

  Please specify the storage you would like to use (glusterfs,
iscsi, fc, nfs)[nfs]: nfs


  Please specify the nfs version you would like to use (auto, v3,
v4, v4_0, v4_1, v4_2)[auto]:


  Please specify the full shared storage connection path to use
(example: host:/path): dongle:/exports/host_storage


  If needed, specify additional mount options for the connection to
the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:


[ INFO  ] Creating Storage Domain

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific
set of steps]

**snip**

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]

[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail
is "[]". HTTP response code is 400.

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

  Please specify the storage you would like to use (glusterfs,
iscsi, fc, nfs)[nfs]:
**snip**
Upon checking mounts I see that the original server is still being used.

[root@brick ~]# mount | grep nfs



sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)

stumpy:/tanker/ovirt/host_storage on
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)


[root@brick ~]#


> > * The target directory is always empty and chown'd 36:36 with 0755
> permissions.
>
Correct.
>
> > * I have tried using both IP and DNS names. forward and reverse DNS
> works from ovirt host and other systems on the network.
> > * The NFS share always gets mounted successfully on the ovirt node
> system.
> > * I have tried auto and v3 NFS versions in other various combinations.
> > * I have also tried setting up an NFS server on a non-ZFS backed storage
> system that is open to any host and get the same errors as shown below.
> > * I ran nfs-check.py script without issue against both NFS servers and
> followed other 

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Matt Snow
On Wed, Jan 20, 2021 at 3:00 AM Nir Soffer  wrote:

> On Wed, Jan 20, 2021 at 3:52 AM Matt Snow  wrote:
> >
> > [root@brick ~]# ps -efz | grep sanlock
>
> Sorry, it's "ps -efZ", but we already know it's not selinux.
>
> > [root@brick ~]# ps -ef | grep sanlock
> > sanlock 1308   1  0 10:21 ?00:00:01 /usr/sbin/sanlock
> daemon
>
> Does sanlock run with the right groups?
>
It appears so.

> On a working system:
>
> $ ps -efZ | grep sanlock | grep -v grep
> system_u:system_r:sanlock_t:s0-s0:c0.c1023 sanlock 983 1  0 11:23 ?
> 00:00:03 /usr/sbin/sanlock daemon
> system_u:system_r:sanlock_t:s0-s0:c0.c1023 root 986  983  0 11:23 ?
> 00:00:00 /usr/sbin/sanlock daemon
>

[root@brick audit]# ps -efZ | grep sanlock | grep -v grep

system_u:system_r:sanlock_t:s0-s0:c0.c1023 sanlock 1308 1  0 Jan19 ?
00:00:04
/usr/sbin/sanlock daemon
system_u:system_r:sanlock_t:s0-s0:c0.c1023 root 1309 1308  0 Jan19 ?
00:00:00
/usr/sbin/sanlock daemon

>
> The sanlock process running with "sanlock" user (pid=983) is the
> interesting one.
> The other one is a helper that never accesses storage.
>
> $ grep Groups: /proc/983/status
> Groups: 6 36 107 179
>
[root@brick audit]# grep Groups: /proc/1308/status

Groups: 6 36 107 179


> Vdsm verify this on startup using vdsm-tool is-configured. On a working
> system:
>
> $ sudo vdsm-tool is-configured
> lvm is configured for vdsm
> libvirt is already configured for vdsm
> sanlock is configured for vdsm
> Managed volume database is already configured
> Current revision of multipath.conf detected, preserving
> abrt is already configured for vdsm
>
> [root@brick audit]# vdsm-tool is-configured

lvm is configured for vdsm

libvirt is already configured for vdsm

sanlock is configured for vdsm

Current revision of multipath.conf detected, preserving

abrt is already configured for vdsm
Managed volume database is already configured


> > [root@brick ~]# ausearch -m avc
> > 
>
> Looks good.
>
> > [root@brick ~]# ls -lhZ
> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
> > total 278K
> > -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 ids
>
> Looks correct.
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WMOJMLXJHVESJFCBFGVBIBJUTNXLICIR/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Nir Soffer
On Wed, Jan 20, 2021 at 3:52 AM Matt Snow  wrote:
>
> [root@brick ~]# ps -efz | grep sanlock

Sorry, it's "ps -efZ", but we already know it's not selinux.

> [root@brick ~]# ps -ef | grep sanlock
> sanlock 1308   1  0 10:21 ?00:00:01 /usr/sbin/sanlock daemon

Does sanlock run with the right groups?

On a working system:

$ ps -efZ | grep sanlock | grep -v grep
system_u:system_r:sanlock_t:s0-s0:c0.c1023 sanlock 983 1  0 11:23 ?
00:00:03 /usr/sbin/sanlock daemon
system_u:system_r:sanlock_t:s0-s0:c0.c1023 root 986  983  0 11:23 ?
00:00:00 /usr/sbin/sanlock daemon

The sanlock process running with "sanlock" user (pid=983) is the
interesting one.
The other one is a helper that never accesses storage.

$ grep Groups: /proc/983/status
Groups: 6 36 107 179
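
Those GIDs should map to disk, kvm, qemu and sanlock on an oVirt host. If you
want to see the names instead of the numbers, something like this works:

  getent group 6 36 107 179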

Vdsm verifies this on startup using vdsm-tool is-configured. On a working system:

$ sudo vdsm-tool is-configured
lvm is configured for vdsm
libvirt is already configured for vdsm
sanlock is configured for vdsm
Managed volume database is already configured
Current revision of multipath.conf detected, preserving
abrt is already configured for vdsm
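
If anything there is reported as not configured, the usual follow-up is to
re-run the configurator and restart vdsm (not needed when everything already
reports as configured):

  vdsm-tool configure --force
  systemctl restart vdsmd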

> [root@brick ~]# ausearch -m avc
> 

Looks good.

> [root@brick ~]# ls -lhZ 
> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
> total 278K
> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 ids

Looks correct.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M5GPNUKUOSM252Z4WSQY3AOLEUBCM3II/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Nir Soffer
On Wed, Jan 20, 2021 at 6:45 AM Matt Snow  wrote:
>
> Hi Nir, Yedidyah,
> for what it's worth I ran through the steps outlined here: 
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_selinux/troubleshooting-problems-related-to-selinux_using-selinux
>  and eventually got to running
> `setenforce 0` and the issue still persists.

Good, this and the fact that "ausearch -m avc" reports nothing show that this
is not a selinux issue.

> On Tue, Jan 19, 2021 at 6:52 PM Matt Snow  wrote:
>>
>> [root@brick ~]# ps -efz | grep sanlock
>>
>> error: unsupported SysV option
>>
>>
>> Usage:
>>
>>  ps [options]
>>
>>
>>  Try 'ps --help '
>>
>>   or 'ps --help '
>>
>>  for additional help text.
>>
>>
>> For more details see ps(1).
>>
>> [root@brick ~]# ps -ef | grep sanlock
>>
>> sanlock 1308   1  0 10:21 ?00:00:01 /usr/sbin/sanlock daemon
>>
>> root13091308  0 10:21 ?00:00:00 /usr/sbin/sanlock daemon
>>
>> root   68086   67674  0 13:38 pts/400:00:00 tail -f sanlock.log
>>
>> root   73724   68214  0 18:49 pts/500:00:00 grep --color=auto sanlock
>>
>>
>> [root@brick ~]# ausearch -m avc
>>
>> 
>>
>>
>> [root@brick ~]# ls -lhZ 
>> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
>>
>> total 278K
>>
>> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 ids
>>
>> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 inbox
>>
>> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 leases
>>
>> -rw-rw-r--. 1 vdsm kvm system_u:object_r:nfs_t:s0  342 Jan 19 13:38 metadata
>>
>> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 outbox
>>
>> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0 1.3M Jan 19 13:38 xleases
>>
>> [root@brick ~]#
>>
>>
>> On Tue, Jan 19, 2021 at 4:13 PM Nir Soffer  wrote:
>>>
>>> On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
>>> >
>>> >
>>> > [root@brick log]# cat sanlock.log
>>> >
>>> > 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host 
>>> > 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
>>> > 2021-01-15 19:17:31 7497 [36293]: s1 lockspace 
>>> > 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
>>> > 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission to 
>>> > open 
>>> > /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
>>>
>>> Smells like selinux issue.
>>>
>>> What do you see in "ausearch -m avc"?
>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B5KWEDIY4LS4GYNGQDKJQJYMDICGHPVE/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-20 Thread Nir Soffer
On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
>
> I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the same 
> problem with both versions. The issue occurs in both cockpit UI and tmux'd 
> CLI of ovirt-hosted-engine-setup. I get passed the point where the VM is 
> created and running.
> I tried to do some debugging on my own before reaching out to this list. Any 
> help is much appreciated!
>
> ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4 
> cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I 
> believe it meets the minimum requirements.
>
> NFS server:
> * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.

We don't test ZFS, but since you also tried a non-ZFS server with the same
issue, the issue is probably not ZFS.

> * NFS share settings are just 'rw=@172.16.1.0/24' but have also tried 
> 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'

You are missing anonuid=36,anongid=36. This should not affect sanlock, but
you will have issues with libvirt without this.

Here is a working export from my test system:

/export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
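
After editing /etc/exports you can apply and sanity-check it with plain
nfs-utils commands:

  exportfs -ra               # re-read /etc/exports
  exportfs -v                # show the effective export options
  showmount -e <nfs-server>  # check what the client sees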

> * The target directory is always empty and chown'd 36:36 with 0755 
> permissions.

Correct.

> * I have tried using both IP and DNS names. forward and reverse DNS works 
> from ovirt host and other systems on the network.
> * The NFS share always gets mounted successfully on the ovirt node system.
> * I have tried auto and v3 NFS versions in other various combinations.
> * I have also tried setting up an NFS server on a non-ZFS backed storage 
> system that is open to any host and get the same errors as shown below.
> * I ran nfs-check.py script without issue against both NFS servers and 
> followed other verification steps listed on 
> https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
>
> ***Snip from ovirt-hosted-engine-setup***
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]: nfs
>   Please specify the nfs version you would like to use (auto, v3, v4, 
> v4_0, v4_1, v4_2)[auto]:
>   Please specify the full shared storage connection path to use 
> (example: host:/path): stumpy.mydomain.com:/tanker/ovirt/host_storage
>   If needed, specify additional mount options for the connection to 
> the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []: rw
> [ INFO  ] Creating Storage Domain
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
> of steps]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage 
> interface to be up]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir 
> existence]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using 
> username/password credentials]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage 
> domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add iSCSI storage domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add Fibre Channel storage 
> domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get storage domain details]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Find the appliance OVF]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Parse OVF]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get required size]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Remove unsuitable storage 
> domain]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check storage domain free 
> space]
> [ INFO  ] 

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Matt Snow
Hi Nir, Yedidyah,
for what it's worth I ran through the steps outlined here:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_selinux/troubleshooting-problems-related-to-selinux_using-selinux
and eventually got to running
`setenforce 0` and the issue still persists.


On Tue, Jan 19, 2021 at 6:52 PM Matt Snow  wrote:

> [root@brick ~]# ps -efz | grep sanlock
>
> error: unsupported SysV option
>
>
> Usage:
>
>  ps [options]
>
>
>  Try 'ps --help '
>
>   or 'ps --help '
>
>  for additional help text.
>
>
> For more details see ps(1).
>
> [root@brick ~]# ps -ef | grep sanlock
>
> sanlock 1308   1  0 10:21 ?00:00:01 /usr/sbin/sanlock
> daemon
>
> root13091308  0 10:21 ?00:00:00 /usr/sbin/sanlock
> daemon
>
> root   68086   67674  0 13:38 pts/400:00:00 tail -f sanlock.log
>
> root   73724   68214  0 18:49 pts/500:00:00 grep --color=auto
> sanlock
>
>
> [root@brick ~]# ausearch -m avc
>
> 
>
>
> [root@brick ~]# ls -lhZ
> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
>
> total 278K
>
> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 ids
>
> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 inbox
>
> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 leases
>
> -rw-rw-r--. 1 vdsm kvm system_u:object_r:nfs_t:s0  342 Jan 19 13:38
> metadata
>
> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 outbox
>
> -rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0 1.3M Jan 19 13:38 xleases
>
> [root@brick ~]#
>
> On Tue, Jan 19, 2021 at 4:13 PM Nir Soffer  wrote:
>
>> On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
>> >
>> >
>> > [root@brick log]# cat sanlock.log
>> >
>> > 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host
>> 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
>> > 2021-01-15 19:17:31 7497 [36293]: s1 lockspace
>> 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
>> > 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission
>> to open
>> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
>>
>> Smells like selinux issue.
>>
>> What do you see in "ausearch -m avc"?
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZCEGXZL6OVYV5I5V7QQ4UOL5FMA5WPBJ/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Matt Snow
[root@brick ~]# ps -efz | grep sanlock

error: unsupported SysV option


Usage:

 ps [options]


 Try 'ps --help '

  or 'ps --help '

 for additional help text.


For more details see ps(1).

[root@brick ~]# ps -ef | grep sanlock

sanlock 1308   1  0 10:21 ?00:00:01 /usr/sbin/sanlock daemon

root13091308  0 10:21 ?00:00:00 /usr/sbin/sanlock daemon

root   68086   67674  0 13:38 pts/400:00:00 tail -f sanlock.log

root   73724   68214  0 18:49 pts/500:00:00 grep --color=auto
sanlock


[root@brick ~]# ausearch -m avc




[root@brick ~]# ls -lhZ
/rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md

total 278K

-rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 ids

-rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 inbox

-rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s00 Jan 19 13:38 leases

-rw-rw-r--. 1 vdsm kvm system_u:object_r:nfs_t:s0  342 Jan 19 13:38 metadata

-rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 outbox

-rw-rw. 1 vdsm kvm system_u:object_r:nfs_t:s0 1.3M Jan 19 13:38 xleases

[root@brick ~]#

On Tue, Jan 19, 2021 at 4:13 PM Nir Soffer  wrote:

> On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
> >
> >
> > [root@brick log]# cat sanlock.log
> >
> > 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host
> 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
> > 2021-01-15 19:17:31 7497 [36293]: s1 lockspace
> 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
> > 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission
> to open
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
>
> Smells like selinux issue.
>
> What do you see in "ausearch -m avc"?
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V6KNRLT5PO4D232P6FWGSCCIVD4G6ENZ/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Nir Soffer
On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
>
>
> [root@brick log]# cat sanlock.log
>
> 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host 
> 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
> 2021-01-15 19:17:31 7497 [36293]: s1 lockspace 
> 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
> 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission to 
> open 
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids

Smells like selinux issue.

What do you see in "ausearch -m avc"?
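
A couple of other quick checks on the selinux theory, both easy to undo:

  getenforce     # Enforcing / Permissive / Disabled
  setenforce 0   # switch to permissive temporarily and retry
  setenforce 1   # back to enforcing when done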
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVYDPG2SIAU7AAHFHKC63MDN2NAINWTI/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Nir Soffer
On Tue, Jan 19, 2021 at 3:43 PM Yedidyah Bar David  wrote:
...
> > 2021-01-18 08:43:25,524-0700 INFO  (jsonrpc/0) [storage.SANLock] 
> > Initializing sanlock for domain 4b3fb9a9-6975-4b80-a2c1-af4e30865088 
> > path=/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/4b3fb9a9-6975-4b80-a2c1-af4e30865088/dom_md/ids
> >  alignment=1048576 block_size=512 io_timeout=10 (clusterlock:286)
> > 2021-01-18 08:43:25,533-0700 ERROR (jsonrpc/0) [storage.SANLock] Cannot 
> > initialize lock for domain 4b3fb9a9-6975-4b80-a2c1-af4e30865088 
> > (clusterlock:305)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 
> > 295, in initLock
> > sector=self._block_size)
> > sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such 
> > device')
> > 2021-01-18 08:43:25,534-0700 INFO  (jsonrpc/0) [vdsm.api] FINISH 
> > createStorageDomain error=Could not initialize cluster lock: () 
> > from=:::192.168.222.53,39612, flow_id=5618fb28, 
> > task_id=49a1bc04-91d0-4d8f-b847-b6461d980495 (api:52)
> > 2021-01-18 08:43:25,534-0700 ERROR (jsonrpc/0) [storage.TaskManager.Task] 
> > (Task='49a1bc04-91d0-4d8f-b847-b6461d980495') Unexpected error (task:880)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 
> > 295, in initLock
> > sector=self._block_size)
> > sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such 
> > device')
>
> I think this ^^ is the issue. Can you please check /var/log/sanlock.log?

This is a symptom of inaccessible storage. Sanlock failed to write to
the "ids" file.

We need to understand why sanlock failed. If the partially created domain is
still available, this may explain the issue:

ls -lhZ /rhev/data-center/mnt/server:_path/storage-domain-uuid/dom_md

ps -efz | grep sanlock

ausearch -m avc
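
Another quick test along the same lines is to try reading the ids file as the
sanlock user (only a sketch; substitute the real mount path and domain uuid
from sanlock.log):

  sudo -u sanlock dd if=/rhev/data-center/mnt/server:_path/storage-domain-uuid/dom_md/ids of=/dev/null bs=1M count=1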

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QT7I2TWFCTTQDI37W3RIW6AQQZAMIHWB/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Matt Snow
[root@brick log]# cat sanlock.log

2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host
3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
2021-01-15 19:17:31 7497 [36293]: s1 lockspace
54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
2021-01-15 19:17:31 7497 [50873]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-15 19:17:31 7497 [50873]: s1 open_disk
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
error -13
2021-01-15 19:17:32 7498 [36293]: s1 add_lockspace fail result -19
2021-01-18 07:23:58 18 [1318]: sanlock daemon started 3.8.2 host
3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
2021-01-18 08:43:25 4786 [1359]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/4b3fb9a9-6975-4b80-a2c1-af4e30865088/dom_md/ids
2021-01-18 08:43:25 4786 [1359]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:13:17 6578 [1358]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/d3aec1fd-57cb-4d48-86b9-0a89ae3741a7/dom_md/ids
2021-01-18 09:13:17 6578 [1358]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:19:45 6966 [1359]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/fa5434cf-3e05-45d5-b32e-4948903ee2b4/dom_md/ids
2021-01-18 09:19:45 6966 [1359]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:21:16 7057 [1358]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids
2021-01-18 09:21:16 7057 [1358]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:49:42 8763 [1359]: s1 lockspace
b0f7b773-7e37-4b6b-a467-64230d5f7391:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids:0
2021-01-18 09:49:42 8763 [54250]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids
2021-01-18 09:49:42 8763 [54250]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:49:42 8763 [54250]: s1 open_disk
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids
error -13
2021-01-18 09:49:43 8764 [1359]: s1 add_lockspace fail result -19

[root@brick log]# su - sanlock -s /bin/bash

Last login: Tue Jan 19 09:05:25 MST 2021 on pts/2
nodectl must be run as root!
nodectl must be run as root!
[sanlock@brick ~]$ grep sanlock /etc/group
disk:x:6:sanlock
kvm:x:36:qemu,ovirtimg,sanlock
sanlock:x:179:vdsm
qemu:x:107:vdsm,ovirtimg,sanlock
[sanlock@brick ~]$ cd
/rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/ && touch
file.txt && ls -l file.txt
-rw-rw-rw-. 1 sanlock sanlock 0 Jan 19 09:07 file.txt
[sanlock@brick stumpy:_tanker_ovirt_host__storage]$
[sanlock@brick stumpy:_tanker_ovirt_host__storage]$ ls -ltra
total 2
drwxr-xr-x. 3 vdsm kvm 48 Jan 18 09:48 ..
drwxrwxrwx. 2 vdsm kvm  3 Jan 19 09:07 .
-rw-rw-rw-. 1 sanlock sanlock  0 Jan 19 09:07 file.txt
[sanlock@brick stumpy:_tanker_ovirt_host__storage]$
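
A quick way to exercise the exact file sanlock reports EACCES on (path copied from
the sanlock.log above; this is only a sketch, and it assumes that storage domain
directory still exists) is to read it as the sanlock user:

# read the ids file the same way sanlock would, as uid/gid 179
sudo -u sanlock dd of=/dev/null bs=512 count=1 \
    if=/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids
# and check ownership/permissions on the dom_md directory itself
ls -ln /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/

If the touch above succeeds but reading the existing ids file fails with EACCES,
the problem is the file's ownership or the group mapping applied by the NFS server,
not the directory permissions.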

On Tue, Jan 19, 2021 at 6:44 AM Yedidyah Bar David  wrote:

> On Mon, Jan 18, 2021 at 6:01 PM Matt Snow  wrote:
> >
> > Hi Didi,
> > I did log clean up and am re-running ovirt-hosted-engine-cleanup &&
> ovirt-hosted-engine-setup to get you cleaner log files.
> >
> > searching for host_storage in vdsm.log...
> > **snip**
> > 2021-01-18 08:43:18,842-0700 INFO  (jsonrpc/3) [api.host] FINISH
> getStats return={'status': {'code': 0, 'message': 'Done'}, 'info':
> (suppressed)} from=:::192.168.222.53,39612 (api:54)
> > 2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:48)
> > 2021-01-18 08:43:19,963-0700 INFO  

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Yedidyah Bar David
On Mon, Jan 18, 2021 at 6:01 PM Matt Snow  wrote:
>
> Hi Didi,
> I did log clean up and am re-running ovirt-hosted-engine-cleanup && 
> ovirt-hosted-engine-setup to get you cleaner log files.
>
> searching for host_storage in vdsm.log...
> **snip**
> 2021-01-18 08:43:18,842-0700 INFO  (jsonrpc/3) [api.host] FINISH getStats 
> return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
> from=:::192.168.222.53,39612 (api:54)
> 2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] START 
> getConnectedStoragePoolsList(options=None) from=internal, 
> task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:48)
> 2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] FINISH 
> getConnectedStoragePoolsList return={'poollist': []} from=internal, 
> task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:54)
> 2021-01-18 08:43:19,964-0700 INFO  (vmrecovery) [vds] recovery: waiting for 
> storage pool to go up (clientIF:726)
> 2021-01-18 08:43:20,441-0700 INFO  (jsonrpc/4) [vdsm.api] START 
> connectStorageServer(domType=1, 
> spUUID='----', conList=[{'password': 
> '', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 
> 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 'false', 'id': 
> '----', 'user': '', 'tpgt': '1'}], 
> options=None) from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=032afa50-381a-44af-a067-d25bcc224355 (api:48)
> 2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) 
> [storage.StorageServer.MountConnection] Creating directory 
> '/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage' (storageServer:167)
> 2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
> directory: /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage mode: 
> None (fileUtils:201)
> 2021-01-18 08:43:20,447-0700 INFO  (jsonrpc/4) [storage.Mount] mounting 
> stumpy:/tanker/ovirt/host_storage at 
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage (mount:207)
> 2021-01-18 08:43:21,271-0700 INFO  (jsonrpc/4) [IOProcessClient] (Global) 
> Starting client (__init__:340)
> 2021-01-18 08:43:21,313-0700 INFO  (ioprocess/51124) [IOProcess] (Global) 
> Starting ioprocess (__init__:465)
> 2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [storage.StorageDomainCache] 
> Invalidating storage domain cache (sdc:74)
> 2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [vdsm.api] FINISH 
> connectStorageServer return={'statuslist': [{'id': 
> '----', 'status': 0}]} 
> from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=032afa50-381a-44af-a067-d25bcc224355 (api:54)
> 2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [vdsm.api] START 
> getStorageDomainsList(spUUID='----', 
> domainClass=1, storageType='', 
> remotePath='stumpy:/tanker/ovirt/host_storage', options=None) 
> from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:48)
> 2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
> Refreshing storage domain cache (resize=True) (sdc:80)
> 2021-01-18 08:43:21,498-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
> devices (iscsi:442)
> 2021-01-18 08:43:21,628-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
> devices: 0.13 seconds (utils:390)
> 2021-01-18 08:43:21,629-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
> devices (hba:60)
> 2021-01-18 08:43:21,908-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
> devices: 0.28 seconds (utils:390)
> 2021-01-18 08:43:21,969-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
> multipath devices (multipath:104)
> 2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
> multipath devices: 0.01 seconds (utils:390)
> 2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
> Refreshing storage domain cache: 0.48 seconds (utils:390)
> 2021-01-18 08:43:22,167-0700 INFO  (tmap-0/0) [IOProcessClient] 
> (stumpy:_tanker_ovirt_host__storage) Starting client (__init__:340)
> 2021-01-18 08:43:22,204-0700 INFO  (ioprocess/51144) [IOProcess] 
> (stumpy:_tanker_ovirt_host__storage) Starting ioprocess (__init__:465)
> 2021-01-18 08:43:22,208-0700 INFO  (jsonrpc/5) [vdsm.api] FINISH 
> getStorageDomainsList return={'domlist': []} 
> from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:54)
> 2021-01-18 08:43:22,999-0700 INFO  (jsonrpc/7) [vdsm.api] START 
> connectStorageServer(domType=1, 
> spUUID='----', conList=[{'password': 
> '', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 
> 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 'false', 'id': 
> 'bc87e1a4-004e-41b4-b569-9e9413e9c027', 'user': '', 'tpgt': '1'}], 
> options=None) 

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-18 Thread Matt Snow
Complete logs can be found here:
vdsm.log - https://paste.c-net.org/DownloadPressure
supervdsm.log - https://paste.c-net.org/LaterScandals


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-18 Thread Matt Snow
In case it's useful, here is the mount occurring.

[root@brick setup_debugging_logs]# while :; do mount | grep stumpy ; sleep 1; 
done
   
stumpy:/tanker/ovirt/host_storage on 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)
stumpy:/tanker/ovirt/host_storage on 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)
stumpy:/tanker/ovirt/host_storage on 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)


^C
[root@brick setup_debugging_logs]# 
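
If it helps, the options the client actually negotiated and what the server publishes
for this export can be compared with something like the following (a sketch only;
'stumpy' is the NFS server already named in this thread):

nfsstat -m            # per-mount NFS options as seen by the client
showmount -e stumpy   # export list published by the server's mountd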


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-18 Thread Matt Snow
Hi Didi,
I did log clean up and am re-running ovirt-hosted-engine-cleanup && 
ovirt-hosted-engine-setup to get you cleaner log files. 

searching for host_storage in vdsm.log...
**snip**
2021-01-18 08:43:18,842-0700 INFO  (jsonrpc/3) [api.host] FINISH getStats 
return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
from=:::192.168.222.53,39612 (api:54)
2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:48)
2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:54)
2021-01-18 08:43:19,964-0700 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:726)
2021-01-18 08:43:20,441-0700 INFO  (jsonrpc/4) [vdsm.api] START 
connectStorageServer(domType=1, spUUID='----', 
conList=[{'password': '', 'protocol_version': 'auto', 'port': '', 
'iqn': '', 'connection': 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 
'false', 'id': '----', 'user': '', 'tpgt': 
'1'}], options=None) from=:::192.168.222.53,39612, 
flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=032afa50-381a-44af-a067-d25bcc224355 (api:48)
2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) 
[storage.StorageServer.MountConnection] Creating directory 
'/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage' (storageServer:167)
2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
directory: /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage mode: None 
(fileUtils:201)
2021-01-18 08:43:20,447-0700 INFO  (jsonrpc/4) [storage.Mount] mounting 
stumpy:/tanker/ovirt/host_storage at 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage (mount:207)
2021-01-18 08:43:21,271-0700 INFO  (jsonrpc/4) [IOProcessClient] (Global) 
Starting client (__init__:340)
2021-01-18 08:43:21,313-0700 INFO  (ioprocess/51124) [IOProcess] (Global) 
Starting ioprocess (__init__:465)
2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [storage.StorageDomainCache] 
Invalidating storage domain cache (sdc:74)
2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'id': 
'----', 'status': 0}]} 
from=:::192.168.222.53,39612, flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=032afa50-381a-44af-a067-d25bcc224355 (api:54)
2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [vdsm.api] START 
getStorageDomainsList(spUUID='----', 
domainClass=1, storageType='', remotePath='stumpy:/tanker/ovirt/host_storage', 
options=None) from=:::192.168.222.53,39612, 
flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:48)
2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
Refreshing storage domain cache (resize=True) (sdc:80)
2021-01-18 08:43:21,498-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
devices (iscsi:442)
2021-01-18 08:43:21,628-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
devices: 0.13 seconds (utils:390)
2021-01-18 08:43:21,629-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
devices (hba:60)
2021-01-18 08:43:21,908-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
devices: 0.28 seconds (utils:390)
2021-01-18 08:43:21,969-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
multipath devices (multipath:104)
2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
multipath devices: 0.01 seconds (utils:390)
2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
Refreshing storage domain cache: 0.48 seconds (utils:390)
2021-01-18 08:43:22,167-0700 INFO  (tmap-0/0) [IOProcessClient] 
(stumpy:_tanker_ovirt_host__storage) Starting client (__init__:340)
2021-01-18 08:43:22,204-0700 INFO  (ioprocess/51144) [IOProcess] 
(stumpy:_tanker_ovirt_host__storage) Starting ioprocess (__init__:465)
2021-01-18 08:43:22,208-0700 INFO  (jsonrpc/5) [vdsm.api] FINISH 
getStorageDomainsList return={'domlist': []} from=:::192.168.222.53,39612, 
flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:54)
2021-01-18 08:43:22,999-0700 INFO  (jsonrpc/7) [vdsm.api] START 
connectStorageServer(domType=1, spUUID='----', 
conList=[{'password': '', 'protocol_version': 'auto', 'port': '', 
'iqn': '', 'connection': 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 
'false', 'id': 'bc87e1a4-004e-41b4-b569-9e9413e9c027', 'user': '', 'tpgt': 
'1'}], options=None) from=:::192.168.222.53,39612, flow_id=5618fb28, 
task_id=51daa36a-e1cf-479d-a93c-1c87f21ce934 (api:48)
2021-01-18 08:43:23,007-0700 INFO  (jsonrpc/7) [storage.StorageDomainCache] 
Invalidating storage domain cache (sdc:74)
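
For reference, the mount and write that vdsm attempts in the log above can be
reproduced by hand. This is only a rough sketch: the mount options are copied from
the mount output shown earlier on this page, and /mnt/nfstest is an arbitrary
scratch directory:

mkdir -p /mnt/nfstest
mount -t nfs -o soft,nosharecache,timeo=100,retrans=3 \
    stumpy:/tanker/ovirt/host_storage /mnt/nfstest
# vdsm runs as uid/gid 36; a file it creates should come back owned 36:36
sudo -u vdsm touch /mnt/nfstest/probe && ls -ln /mnt/nfstest/probe
sudo -u vdsm rm -f /mnt/nfstest/probe
umount /mnt/nfstest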

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-17 Thread Yedidyah Bar David
On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
>
> I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the same 
> problem with both versions. The issue occurs in both cockpit UI and tmux'd 
> CLI of ovirt-hosted-engine-setup. I get past the point where the VM is 
> created and running.
> I tried to do some debugging on my own before reaching out to this list. Any 
> help is much appreciated!
>
> ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4 
> cores/4 threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I 
> believe it meets the minimum requirements.
>
> NFS server:
> * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
> * NFS share settings are just 'rw=@172.16.1.0/24' but have also tried 
> 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
> * The target directory is always empty and chown'd 36:36 with 0755 
> permissions.
> * I have tried using both IP and DNS names. forward and reverse DNS works 
> from ovirt host and other systems on the network.
> * The NFS share always gets mounted successfully on the ovirt node system.
> * I have tried auto and v3 NFS versions in other various combinations.
> * I have also tried setting up an NFS server on a non-ZFS backed storage 
> system that is open to any host and get the same errors as shown below.
> * I ran nfs-check.py script without issue against both NFS servers and 
> followed other verification steps listed on 
> https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
>
> ***Snip from ovirt-hosted-engine-setup***

Removing snippets of other logs, as I think vdsm should be enough.
Thanks for the investigation so far!

> The relevant section from /var/log/vdsm/vdsm.log:
> ***begin snip***
> 2021-01-16 19:53:58,439-0700 INFO  (vmrecovery) [vdsm.api] START 
> getConnectedStoragePoolsList(options=None) from=internal, 
> task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:48)
> 2021-01-16 19:53:58,439-0700 INFO  (vmrecovery) [vdsm.api] FINISH 
> getConnectedStoragePoolsList return={'poollist': []} from=internal, 
> task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:54)
> 2021-01-16 19:53:58,440-0700 INFO  (vmrecovery) [vds] recovery: waiting for 
> storage pool to go up (clientIF:726)
> 2021-01-16 19:53:58,885-0700 INFO  (jsonrpc/3) [vdsm.api] START 
> connectStorageServer(domType=1, 
> spUUID='----', conList=[{'password': 
> '', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 
> 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 'false', 'id': 
> '3ffd1e3b-168e-4248-a2af-b28fbdf49eef', 'user': '', 'tpgt': '1'}], 
> options=None) from=:::192.168.222.53,41192, flow_id=592e278f, 
> task_id=5bd52fa3-f790-4ed3-826d-c1f51e5f2291 (api:48)

Are you sure this is the first place that deals with this issue?
Perhaps search the log earlier, e.g. for 'host_storage' (part of your
path).
Also, please check supervdsm.log (in the same log directory).

Does it manage to mount it? Write anything there?

Best regards,
-- 
Didi
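
A minimal way to follow that suggestion, using the paths already mentioned in this
thread, could be (a sketch only):

grep -n host_storage /var/log/vdsm/supervdsm.log /var/log/vdsm/vdsm.log
ls -la /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/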