[ovirt-users] Re: Ubuntu NFS
I stand corrected: in /etc/default/nfs-kernel-server, RPCMOUNTDOPTS="--manage-gids" needs to be changed to RPCMOUNTDOPTS="". In my earlier testing I had restarted the wrong service.

___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOHKDC425AJKICKB3HJ3DZOSPAEFJOF3/
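For anyone scripting this, a minimal sketch of the change, assuming the stock Debian/Ubuntu /etc/default/nfs-kernel-server layout. The demonstration below edits a sample copy; on a real host point it at the actual file and restart nfs-kernel-server afterwards.

```shell
# Drop --manage-gids from mountd's options in /etc/default/nfs-kernel-server.
drop_manage_gids() {
  sed -i 's/^RPCMOUNTDOPTS="--manage-gids"$/RPCMOUNTDOPTS=""/' "$1"
}

# Demonstration on a sample copy; on a real host use the actual file, then:
#   systemctl restart nfs-kernel-server
printf 'RPCMOUNTDOPTS="--manage-gids"\n' > /tmp/nfs-kernel-server.sample
drop_manage_gids /tmp/nfs-kernel-server.sample
grep '^RPCMOUNTDOPTS' /tmp/nfs-kernel-server.sample
```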
[ovirt-users] Re: Ubuntu NFS
I submitted my own for 22.04. Maybe re-open your pull request? I see it was closed with a note that no one is working on it.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/2HQFWUCFUW3AFYQQLRV3JFXI5SENWWSF/
[ovirt-users] Re: Ubuntu NFS
That's strange.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZU2V5S4AXSAJMB6FAOSJQBWSKA543DMS/
[ovirt-users] Re: Ubuntu NFS
> Host
> supervdsm.log
> -open error -13 EACCES: no permission to open /ThePath/ids
> -check that daemon user sanlock *** group sanlock *** has access to disk or file.

I named the wrong log on the Host: it was sanlock.log, not supervdsm.log.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KENZBEDOL3F2JT37M7JZNX3A7BRDRSQX/
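A quick way to confirm you are looking at the right log on the host (a hedged sketch: /var/log/sanlock.log is the usual location, and the demonstration greps a sample copy rather than the live file):

```shell
# Pull the permission-denial lines out of sanlock's own log,
# not supervdsm.log.
scan_sanlock_log() {
  grep -E 'open error|check that daemon' "$1"
}

# Demonstration against a sample copy of the lines quoted above;
# on a host, run it against /var/log/sanlock.log.
printf 'open error -13 EACCES: no permission to open /ThePath/ids\n' > /tmp/sanlock.log.sample
scan_sanlock_log /tmp/sanlock.log.sample
```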
[ovirt-users] Ubuntu NFS
Hello, I was having trouble getting an Ubuntu 22.04 NFS share working, and after searching for hours I was able to figure out what was needed. Below is what I found, in case anyone else runs into this.

My error in engine.log:
"...Unexpected return value: Status [code=701, message=Could not initialize cluster lock: ()]"

Host supervdsm.log:
-open error -13 EACCES: no permission to open /ThePath/ids
-check that daemon user sanlock *** group sanlock *** has access to disk or file.

The fix was changing /etc/nfs.conf from
manage-gids=y (which is the shipped default)
to
# manage-gids=y (commenting it out falls back to the built-in default, which is no)

It looks like in the past the fix was to change the RPCMOUNTDOPTS="--manage-gids" line in /etc/default/nfs-kernel-server, which I didn't need to change.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PT6SKJ4ZTX3EKMEYPRDFW2PE5U3UGVK5/
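The nfs.conf change can be scripted; a minimal sketch, assuming the stock Ubuntu 22.04 /etc/nfs.conf layout. The demonstration edits a sample copy; on the real host run it against /etc/nfs.conf and restart nfs-kernel-server.

```shell
# Comment out manage-gids in the [mountd] section so mountd falls back
# to its built-in default (no).
comment_manage_gids() {
  sed -i 's/^manage-gids=y$/# manage-gids=y/' "$1"
}

# Demonstration on a sample copy; on the real host use /etc/nfs.conf, then:
#   systemctl restart nfs-kernel-server
printf '[mountd]\nmanage-gids=y\n' > /tmp/nfs.conf.sample
comment_manage_gids /tmp/nfs.conf.sample
grep 'manage-gids' /tmp/nfs.conf.sample
```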
[ovirt-users] oVirt and the future
Hi all, I'm looking for some information about the future of oVirt. With CentOS going away, has there been any discussion of what the project will be doing or moving to? I'd like to see Ubuntu support.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYZYII5XULUTDYCXGAEPRON6QGE4CTL4/
[ovirt-users] Re: CentOS 8 is dead
I too would like to see Ubuntu become a bit more mainstream with oVirt now that CentOS is gone. I'm sure we won't hear anything until 2021; the oVirt staff need to figure out what to do now.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HHQH2XIHK2VPV4TTERO2NH7DGEYUWV4/
[ovirt-users] Re: Failed upgrade from SHE 4.3.10 to 4.4.3 - Host set to Non-Operational - missing networks
I had issues with 4.4 and the network setup crashing. I got around this by setting IPV6_DISABLED=yes and IPV6INIT=no and removing all other IPv6 entries in the ifcfg file. After that, the deploy worked just fine. I ran dnf autoremove vdsm -y before I re-ran the push from my engine.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/X62F46I3Z3YWGQYBZBTFMMTI4JXIAIOM/
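For reference, a sketch of the ifcfg lines described above. The file name is an example; on EL-based hosts these files live under /etc/sysconfig/network-scripts.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example interface name)
# Disable IPv6 for this interface; all other IPV6* lines were removed.
IPV6_DISABLED=yes
IPV6INIT=no
```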
[ovirt-users] Re: Adding iscsi issue 4.3
Disregard. The array was setting LBA support to 32-bit, which pushed the block size way up, and it was also auto-partitioning the volume, which caused the LV create to error out. After I ran dd on the volume, everything worked correctly.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/T4MISVGF7WOWHJWZEZRI3KRS3RG42GCB/
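The dd step can be sketched as below. Hedged: the multipath device name is taken from the vdsm log in the original post, the 100 MiB count is an arbitrary choice, and zeroing destroys whatever the array auto-partitioned onto the LUN, so verify the device first. The demonstration runs on a scratch file, not a real device.

```shell
# Zero the start of a block device to wipe the array's auto-created
# partition table so LVM/vdsm can initialize it as a PV.
zero_start() {
  dd if=/dev/zero of="$1" bs=1M count="$2" conv=notrunc 2>/dev/null
}

# On the real host (destructive! verify the device first):
#   zero_start /dev/mapper/32021001378a6ddad 100
# Demonstration on a scratch file:
printf 'fake partition table' > /tmp/fake-lun.img
zero_start /tmp/fake-lun.img 1
```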
[ovirt-users] Adding iscsi issue 4.3
When adding an iSCSI connection from an array I get the following popup: "A database error occurred. Please contact your system administrator." I have checked everything, and the only device connected right now is the single host in the datacenter. If I check the "Approve operation" box and click OK, it errors out with the same "A database error occurred. Please contact your system administrator." message. This is a new volume and no other machine is connected to this LUN or array.

Below is from the vdsm log:

2020-11-11 10:42:26,087-0800 INFO (jsonrpc/0) [vdsm.api] START getDeviceList(storageType=3, guids=[u'32021001378a6ddad'], checkStatus=True, options={}) from=:::XX,46208, flow_id=d09dbfd6-2256-48ad-8597-b6c581def1fd, task_id=67c17255-3ffb-4e8b-b6d3-a4e40836d12c (api:48)
2020-11-11 10:42:26,650-0800 INFO (jsonrpc/0) [storage.LVM] Overriding read_only mode current=True override=False (lvm:398)
2020-11-11 10:42:26,822-0800 INFO (jsonrpc/0) [vdsm.api] FINISH getDeviceList return={'devList': [{'status': 'used', 'vendorID': 'ETIUSA', 'capacity': '3999688294400', 'fwrev': '10E', 'discard_zeroes_data': 0, 'vgUUID': '', 'pvsize': '', 'pathlist': [{'connection': u'10.87.172.100', 'iqn': u'XX.XX.XX:storage3', 'portal': '0', 'port': '3260', 'initiatorname': u'default'}], 'logicalblocksize': '512', 'discard_max_bytes': 0, 'pathstatus': [{'type': 'iSCSI', 'physdev': 'sdc', 'capacity': '3999688294400', 'state': 'active', 'lun': '0'}], 'devtype': 'iSCSI', 'physicalblocksize': '512', 'pvUUID': '', 'serial': 'SETIUSA_UltraStorRS8IP4_2021001378A6DDAD', 'GUID': '32021001378a6ddad', 'productID': 'UltraStorRS8IP4'}]} from=:::X,46208, flow_id=d09dbfd6-2256-48ad-8597-b6c581def1fd, task_id=67c17255-3ffb-4e8b-b6d3-a4e40836d12c (api:54)
2020-11-11 10:42:26,823-0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getDeviceList succeeded in 0.74 seconds (__init__:312)
2020-11-11 10:42:27,158-0800 INFO (jsonrpc/2) [api.host] START getAllVmStats() from=::1,48480 (api:48)
2020-11-11 10:42:27,159-0800 INFO (jsonrpc/2) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,48480 (api:54)
2020-11-11 10:42:27,159-0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2020-11-11 10:42:27,393-0800 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=:::X,46208 (api:48)
2020-11-11 10:42:27,394-0800 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=:::X,46208 (api:54)
2020-11-11 10:42:27,394-0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2020-11-11 10:43:12,323-0800 INFO (jsonrpc/6) [vdsm.api] START createVG(vgname=u'35e95355-75db-4640-bf99-e576b9ce39aa', devlist=[u'32021001378a6ddad'], force=True, options=None) from=:::X,46208, flow_id=2c973734, task_id=91f2ab5b-017a-4c88-aa45-ece3bd920ce0 (api:48)
2020-11-11 10:43:12,348-0800 INFO (jsonrpc/6) [storage.LVM] Overriding read_only mode current=True override=False (lvm:398)
2020-11-11 10:43:12,409-0800 INFO (jsonrpc/4) [api.host] START getAllVmStats() from=:::XX,46208 (api:48)
2020-11-11 10:43:12,410-0800 INFO (jsonrpc/4) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=:::XXX,46208 (api:54)
2020-11-11 10:43:12,410-0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2020-11-11 10:43:12,520-0800 ERROR (jsonrpc/6) [storage.LVM] pvcreate failed with rc=5 (lvm:988)
2020-11-11 10:43:12,520-0800 ERROR (jsonrpc/6) [storage.LVM] [], [' Device /dev/mapper/32021001378a6ddad excluded by a filter.'] (lvm:989)
2020-11-11 10:43:12,520-0800 INFO (jsonrpc/6) [vdsm.api] FINISH createVG error=Failed to initialize physical device: ("[u'/dev/mapper/32021001378a6ddad']",) from=:::XX,46208, flow_id=2c973734, task_id=91f2ab5b-017a-4c88-aa45-ece3bd920ce0 (api:52)
2020-11-11 10:43:12,520-0800 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='91f2ab5b-017a-4c88-aa45-ece3bd920ce0') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "", line 2, in createVG
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2146, in createVG
    force=force)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1256, in createVG
    _initpvs(pvs, metadataSize, force)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 990, in _initpvs
    raise se.PhysDevInitializationError(str(devices))
PhysDevInitializationError:
[ovirt-users] Affinity Group Graceful Migration
Hello, I'm trying to set up an Affinity Group that will gracefully migrate VMs off a host. Below is what I've tried, but the VMs will not migrate off.

oVirt 4.3
Priority: 1
VM Affinity Rule: Disabled
Host Affinity Rule: Negative (Soft)
Virtual Machines: Select a VM (my understanding is this would mean all VMs?)
Hosts: Host:

Also, is this the right place to ask for help on this?

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NPKSHDG3ULD4AP54PUNITCHTQKOAHLW7/