[ovirt-users] Re: upgrade python3

2022-05-16 Thread klaasdemter
Hi,
the support should be determined by Red Hat support; the python3 packages 
have the same lifespan as RHEL 8, so until 2029. I also have a couple of python3.8 
packages on my ovirt-engine from the 3.8 module; that one is supported until 
May 2023. So I don't think this is something that needs to be addressed right 
now.
https://access.redhat.com/support/policy/updates/rhel-app-streams-life-cycle

Greetings
Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IOQNU63JHU5N45VVN2S6L2YZ5JRZTZTZ/


[ovirt-users] Re: vm_service.stop in python sdk

2020-06-24 Thread klaasdemter
https://ovirt.github.io/ovirt-engine-sdk/master/services.m.html#ovirtsdk4.services.VmService.stop 
That is a poweroff, which is what the example does.


https://ovirt.github.io/ovirt-engine-sdk/master/services.m.html#ovirtsdk4.services.VmService.shutdown 
shutdown is what you are looking for, I'd guess (I think it goes through the 
guest agent or an ACPI event).
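
For reference, a minimal sketch with the Python SDK; the engine URL, credentials, 
CA file and the VM name 'myvm' below are placeholders, not values from this thread:

import ovirtsdk4 as sdk

# Engine URL, credentials, CA file and VM name are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Graceful shutdown (guest agent / ACPI) instead of the hard poweroff done by stop():
vm_service.shutdown()

connection.close()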


On 24.06.20 20:30, Louis Bohm wrote:
One of the example scripts at 
https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples is 
stop_vm.py.  I am trying to use that to stop a VM; however, I am 
finding that what it's doing is similar to pulling the power cord 
rather than running a shutdown -h now on the system.


Is that correct?  Is there a nicer way to shut down the VM in the 
Python SDK?


Thanks,
Louis
-<<—->>-
Louis Bohm
louisb...@gmail.com 







___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5YX3ALPH2FNE2AYPCRUZG6IWGJACU2DI/


[ovirt-users] Re: command line vm start/stop

2020-04-20 Thread klaasdemter
I'm not sure why the direct API requests are not working, but in general 
I would suggest using one of the SDKs or Ansible to interact with oVirt.


https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/stop_vm.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/start_vm.py

https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html

Then you have a nice abstraction layer; it makes working with the oVirt API 
easier for me :)
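
As a rough sketch of the SDK route (the FQDN, credentials, CA file path and 
VM_NAME below are placeholders, not values taken from this thread):

import ovirtsdk4 as sdk

# FQDN, credentials, CA file and VM_NAME are placeholders.
connection = sdk.Connection(
    url='https://FQDN/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=VM_NAME')[0]
vm_service = vms_service.vm_service(vm.id)

vm_service.start()   # or vm_service.stop() to power it off

connection.close()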


Greetings
Klaas


On 19.04.20 16:21, Ali Gusainov wrote:

According to
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/rest_api_guide/index#services-vm

trying to use
curl --insecure -v -u admin@internal:password -H Content-type: application/xml 
-X POST https://FQDN/ovirt-engine/api/vms/VM_NAME/ACTION

where ACTION is start or stop

Got the following:


* About to connect() to FQDN port 443 (#0)
*   Trying SERVER_IP...
* Connected to FQDN (SERVER_IP) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/ovirt-engine/ca.pem
   CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*   subject: CN=FQDN
*   start date: Jan 22 06:12:15 2020 GMT
*   expire date: Dec 27 06:12:15 2024 GMT
*   common name: FQDN
*   issuer: CN=FQDN.
* Server auth using Basic with user 'admin@internal'

POST /ovirt-engine/api/vms/VM_NAME/ACTION HTTP/1.1
Authorization: Basic YWRtaW5AaW50ZXJuYWw6U3lzVGVhbTEzYw==
User-Agent: curl/7.29.0
Host: FQDN
Accept: application/xml


< HTTP/1.1 404 Not Found
< Date: Sun, 19 Apr 2020 00:20:04 GMT
< Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips
< Content-Length: 0
< Correlation-Id: 15e7a226-8982-47ff-9f68-7a2519705856
<
* Connection #0 to host FQDN left intact


--


tail -1000f /var/log/ovirt-engine/server.log:

2020-04-18 20:20:04,354-04 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-385) RESTEASY002010: Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found
    at org.ovirt.engine.api.restapi.resource.AbstractBackendResource.asGuidOr404(AbstractBackendResource.java:355) [restapi-jaxrs.jar:]
    at org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.<init>(AbstractBackendSubResource.java:26) [restapi-jaxrs.jar:]
    at org.ovirt.engine.api.restapi.resource.AbstractBackendActionableResource.<init>(AbstractBackendActionableResource.java:39) [restapi-jaxrs.jar:]
    at org.ovirt.engine.api.restapi.resource.BackendVmResource.<init>(BackendVmResource.java:114) [restapi-jaxrs.jar:]
    at org.ovirt.engine.api.restapi.resource.BackendVmsResource.getVmResource(BackendVmsResource.java:164) [restapi-jaxrs.jar:]
    at sun.reflect.GeneratedMethodAccessor1357.invoke(Unknown Source) [:1.8.0_232]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_232]
    at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_232]
    at org.jboss.resteasy.core.ResourceLocatorInvoker.createResource(ResourceLocatorInvoker.java:69) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:105) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:132) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:100) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:440) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:229) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:135) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.interception.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:355) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:138) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:215) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:227) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) [resteasy-jaxrs-3.7.0.Final.jar:3.7.0.Final]
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)

[ovirt-users] Re: NUMA Pinning bug with Hugepages?

2020-04-15 Thread klaasdemter

Hi,
I asked about a backport to 4.3, but according to support this won't 
happen. You should be aware that there are more issues with hugepages 
in 4.3:


https://access.redhat.com/solutions/4904441
https://bugzilla.redhat.com/show_bug.cgi?id=1804037
https://bugzilla.redhat.com/show_bug.cgi?id=1804046
https://bugzilla.redhat.com/show_bug.cgi?id=1812316
https://bugzilla.redhat.com/show_bug.cgi?id=1806339
https://bugzilla.redhat.com/show_bug.cgi?id=1785507

Greetings
Klaas

On 15.04.20 13:42, Alan G wrote:

Will this be fixed in 4.3 or only 4.4?

Thanks,

Alan


On Wed, 15 Apr 2020 12:37:38 +0100, Lucia Jelinkova wrote:


Hi,


I think you've hit a similar issue to the one reported here:

https://bugzilla.redhat.com/show_bug.cgi?id=1812316

You're right, the hugepages are considered allocated memory, and
it is a bug. I am working on a fix right now.

Regards,

Lucia

On Wed, Apr 15, 2020 at 1:21 PM Alan G <alan%2bov...@griff.me.uk> wrote:

Hi,

I seem to have found an issue when trying to set up a high
performance VM utilising hugepages and NUMA pinning.

The VM is configured for 32GB RAM and uses hugepages of size 1G.

The host has two NUMA nodes each having 64GB RAM (for 128GB
total system RAM).

No other VMs are running or otherwise pinned to the host.

The VM is pinned to node 0. And the required hugepages are
allocated with

echo 32 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

I then attempt to start the VM and get a pop-up error saying
"cannot accommodate memory of VM's pinned virtual NUMA nodes
within host's physical NUMA nodes".

However, if I remove the hugepages from node 0

echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

start the VM and then immediately re-create the hugepages,
then everything works as expected.

It seems to me that oVirt is considering the hugepages as
allocated memory even if they are not in use.

Is this correct?

Thanks,

Alan







___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKUIWKTZJI4BQ7NZ4Q42TLXCJPRC/


[ovirt-users] Re: paused vm's will not resume

2020-02-23 Thread klaasdemter
I've had problems with resuming VMs after they paused if you changed the 
compatibility level without rebooting the VMs to the new level: 
https://bugzilla.redhat.com/show_bug.cgi?id=1693813 so depending on whether 
you're already on a recent version, this could be the issue.


See https://bugzilla.redhat.com/show_bug.cgi?id=1693813#c6 for a 
possible way to resume the VMs in this state.


Greetings
Klaas

On 18.02.20 05:52, eev...@digitaldatatechs.com wrote:

I have 2 VMs, which are the most important in my world, that paused and will 
not resume. I have googled this to death but found no solution. It stated a lack of 
space, but none of the drives on my hosts are using more than 30% of their space, 
and these 2 have run on KVM hosts for several years and always had at least 50% 
free space.
I like oVirt and want to use it, but I cannot tolerate the downtime. If I 
cannot get this resolved, I'm going back to KVM hosts. I am pulling my hair out 
here.
If anyone can help with this issue, please let me know.
If anyone can help with this issue, please let me know.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6K7PHEPE5H2EBQJHRTTEFIMULJ7ZVHSQ/


[ovirt-users] Hugepages and running out of memory

2020-02-21 Thread klaasdemter

Hi,

this e-mail is meant to caution people who use hugepages and mix those 
VMs with non-hugepages VMs on the same hypervisors. I ran into major 
trouble: the hypervisors ran out of memory because the VM scheduling 
disregards the hugepages in its calculations. So if you have both hugepages 
and non-hugepages VMs, better check the memory committed on a hypervisor 
manually :)
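
If it helps, here is a rough Python SDK sketch of the kind of manual check I mean. 
The engine URL, credentials, CA file and host name are placeholders; it assumes the 
engine's VM search supports a host= filter and that hugepages VMs carry the 
"hugepages" custom property, and the hugepage pool itself still has to be checked 
on the host (e.g. in /proc/meminfo):

import ovirtsdk4 as sdk

# Engine URL, credentials, CA file and host name are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
total_bytes = 0
for vm in vms_service.list(search='host=hypervisor01'):
    # VMs set up for hugepages carry a "hugepages" custom property (page size in KiB);
    # their memory comes out of the preallocated hugepage pool, not regular memory.
    names = {p.name for p in (vm.custom_properties or [])}
    kind = 'hugepages' if 'hugepages' in names else 'regular'
    print('%s: %d MiB (%s)' % (vm.name, vm.memory // 2**20, kind))
    total_bytes += vm.memory

print('memory defined by VMs on this host: %d GiB' % (total_bytes // 2**30))
# Compare that against MemTotal/HugePages_Total in /proc/meminfo on the hypervisor.
connection.close()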



https://bugzilla.redhat.com/show_bug.cgi?id=1804037

https://bugzilla.redhat.com/show_bug.cgi?id=1804046


As for workarounds: so far it seems the only viable solution is 
splitting hugepages/non-hugepages VMs with affinity groups, but at least 
for me that means wasting a lot of resources.



Greetings

Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PIYCVH33QR6SK5AS3QWF3DRBLJK3HQFK/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-05-10 Thread klaasdemter
> So for all lease-holding vms, push the lease to a different storage domain 
> (on different
> storage hardware), apply the upgrade and then push the leases back? And that 
> can be done
> whilst the VMs are running? Leases must be frequently renewed, so I guess no 
> particular
> reason why not.

If you have the storage leases on different hardware, then you do not have the 
same problem I have; unless you are talking about upgrading the hardware of the 
storage domain that holds the leases.
In general: you can change the lease on a running VM, but (speaking for 4.2) you 
have to set it to no lease first.
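
For illustration, a minimal sketch of pointing a running VM's lease at another 
storage domain with the Python SDK, along the lines of the 
set_vm_lease_storage_domain.py example; the engine URL, credentials, VM name and 
storage domain name are placeholders, and the intermediate "no lease" step 
described above is only indicated in a comment, since its exact payload isn't 
covered here:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Engine URL, credentials, CA file, VM name and storage domain name are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()

sd = system_service.storage_domains_service().list(search='name=other_nfs_domain')[0]
vms_service = system_service.vms_service()
vm = vms_service.list(search='name=myvm')[0]

# On 4.2 the lease reportedly has to be removed ("no lease") before setting a new
# one; that intermediate step is not shown here.
vms_service.vm_service(vm.id).update(
    vm=types.Vm(
        lease=types.StorageDomainLease(
            storage_domain=types.StorageDomain(id=sd.id),
        ),
    ),
)
connection.close()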

> 
> Does that work for the SPM and hosted engine as well?

I think the hosted engine has its own kind of storage lease, but I can't say 
anything about that. The SPM does not have a storage lease; it should be hardware 
(i.e. one of the hypervisors), not a VM.

> 
> I'm guessing this doesn't help with an unmanaged contoller failover. 
> Anecdotally,
> that seems to happen a bit faster for me than a managed one, which is also 
> odd.

Yeah, all my ideas are just for planned maintenance.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JMYPVO45FWLO7OEPTSIJDUN7SG4VUKAZ/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-05-03 Thread klaasdemter
My current idea of a workaround is to disable storage leases before NetApp 
upgrades, and re-enable them after the upgrades are done. The Python SDK should make 
this fairly easy 
(https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/set_vm_lease_storage_domain.py
 and https://gerrit.ovirt.org/c/99712/)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3EZAJDRFUCCY5T62E2CKDYVQZVWWABAR/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-05-02 Thread klaasdemter
So after a somewhat wild discussion with support, it boiled down to this BZ: 
https://bugzilla.redhat.com/show_bug.cgi?id=1705289, which suggests it may be a 
good idea to make the sanlock io_timeout configurable :)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7OQG3TRTS7YFC26QUOLVQ4DYUO5ZEMED/


[ovirt-users] Re: Update storage lease via python sdk

2019-04-30 Thread klaasdemter
Hi,
with a little help from Red Hat, my current solution looks like this:
https://github.com/Klaas-/ovirt-engine-sdk/blob/663cc06516f9ace45ba046a3b2ba14a6724cfb8a/sdk/examples/change_vm_lease_storage_domain.py#L51-L83

It uses a correlation_id to find the running task with jobs_service and then 
watches that until it's finished.
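
Roughly, the pattern looks like this; it's a sketch under the assumption that 
service update calls accept a correlation_id query parameter and that the jobs 
collection can be searched by it (as in the linked change_vm_lease_storage_domain.py), 
and the engine URL, credentials, VM and storage domain names are placeholders:

import time
import uuid

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Engine URL, credentials, CA file, VM name and storage domain name are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()
vms_service = system_service.vms_service()
vm = vms_service.list(search='name=myvm')[0]
sd = system_service.storage_domains_service().list(search='name=lease_domain')[0]

# Tag the update with a correlation id so the engine job it creates can be found again.
correlation_id = str(uuid.uuid4())
vms_service.vm_service(vm.id).update(
    types.Vm(lease=types.StorageDomainLease(
        storage_domain=types.StorageDomain(id=sd.id))),
    query={'correlation_id': correlation_id},
)

# Watch the job(s) for that correlation id until none of them is still running.
jobs_service = system_service.jobs_service()
while True:
    jobs = jobs_service.list(search='correlation_id=%s' % correlation_id)
    if jobs and all(job.status != types.JobStatus.STARTED for job in jobs):
        break
    time.sleep(1)

connection.close()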

I'll try to put a PR in through Gerrit in the next few days.

Greetings
Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6JIN6DVVSQORMF6KJSE2LZD432P32AQ/


[ovirt-users] Update storage lease via python sdk

2019-04-27 Thread klaasdemter

Hi,

I'm trying to update a storage lease via the Python SDK, but I can't seem to 
figure out how to wait for the storage lease configuration to finish. 
The VM status does not show "configuration in progress", just "up", and 
the event is created as soon as I issue the update, not when it's done.


Can someone give me a hint or point me to an example?

https://github.com/Klaas-/ovirt-engine-sdk/blob/4094d3a2d4b011d4926e58374ddc6f589869b70b/sdk/examples/change_vm_lease_storage_domain.py#L64


Greetings

Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FPZQUPFSI3CD7P6JJC3XTSZUHB3IMKXK/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-04-18 Thread klaasdemter

Hi,
are you using oVirt storage leases? You'll need them if you want to 
handle a completely unresponsive hypervisor (including fencing actions) 
in an HA setting. Those storage leases use sanlock. If you use sanlock, a 
VM gets killed if the lease cannot be renewed within a very short 
timeframe (60 seconds). That is what is killing the VMs during takeover. 
Before storage leases it seems to have worked because it would simply 
wait long enough for NFS to finish.


Greetings
Klaas

On 18.04.19 12:47, Ladislav Humenik wrote:
Hi, we have NetApp NFS with oVirt in production and have never experienced 
an outage during takeover/giveback.
- the default oVirt mount options should also handle a short NFS 
timeout 
(rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys)
- but to tune it a little further, you should set the disk timeout inside your 
guest VMs to at least 180 seconds, and then you are safe


example:

cat << EOF >> /etc/rc.d/rc.local
# Increasing the timeout value
for i in /sys/class/scsi_generic/*/device/timeout; do echo 180 > "\$i"; done
EOF



KR

On 18.04.19 10:45, klaasdem...@gmail.com wrote:

Hi,

I got a question regarding oVirt and its support for NetApp NFS 
storage. We have a MetroCluster for our virtual machine disks, but an 
HA failover of that (the active IP gets assigned to another node) seems 
to produce outages too long for sanlock to handle; that affects all 
VMs that have storage leases. NetApp says a "worst case" takeover 
time is 120 seconds. That would mean sanlock has already killed all 
VMs. Is anyone familiar with how we could set up oVirt to allow such 
storage outages? Do I need to use another type of storage for my 
oVirt VMs because that NFS implementation is unsuitable for oVirt?



Greetings

Klaas

--
Ladislav Humenik

System administrator / VI


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4PMK33ZQLLH6DCXLC3NNDUQDQJM3XTX/


[ovirt-users] oVirt and NetApp NFS storage

2019-04-18 Thread klaasdemter

Hi,

I got a question regarding oVirt and its support for NetApp NFS storage. 
We have a MetroCluster for our virtual machine disks, but an HA failover 
of that (the active IP gets assigned to another node) seems to produce 
outages too long for sanlock to handle; that affects all VMs that have 
storage leases. NetApp says a "worst case" takeover time is 120 seconds. 
That would mean sanlock has already killed all VMs. Is anyone familiar 
with how we could set up oVirt to allow such storage outages? Do I need 
to use another type of storage for my oVirt VMs because that NFS 
implementation is unsuitable for oVirt?



Greetings

Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSJJKK5UG57CCFYUUCXXM3LYQJW2ODWZ/