Hi oVirt Mailing List
Hope you are well. We had an outage across several oVirt clusters. It looks
like we had the same ISO NFS domain shared to all of them, and many of the
VMs had a CD attached from it. The NFS server went down for an hour, and all
hell broke loose when it did. Some of the
Hi
Thanks for the response. It seems they got paused; all came right after
the NFS server came back up.
Regards
Nardus
On Wed, 11 Mar 2020 at 11:34, Strahil Nikolov wrote:
> On March 11, 2020 9:45:30 AM GMT+02:00, Nardus Geldenhuys <
> nard...@gmail.com> wrote:
> >Hi
Hi oVirt Land
Hope you are all well and that you can help me.
I built a CentOS 8.2 VM on my local Fedora 32 qemu/kvm laptop. Then I ssh
to my oVirt host in the following manner: ssh ovirthost -R
1:localhost:16509. I can then view the VMs on my local laptop with the
following configuration st
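A sketch of the reverse-tunnel pattern described above. The remote port in the quoted command looks truncated, so the value used here is an assumption (16509 is libvirtd's default TCP listen port); the command is printed rather than executed so the full invocation is visible:

```shell
# Reverse tunnel sketch: expose the laptop's libvirtd on the oVirt host.
# ASSUMPTIONS: the remote port is a guess -- the original command appears
# truncated -- and 16509 is libvirtd's default TCP listen port.
REMOTE_PORT=16509   # port opened on the oVirt host (assumption)
LOCAL_PORT=16509    # libvirtd TCP port on the laptop
echo "ssh ovirthost -R ${REMOTE_PORT}:localhost:${LOCAL_PORT}"
```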
Hi oVirt land
Hope you are well. Don't even know what to call this, but let me describe
what I want to achieve.
We have a cluster with, say, 100 VMs, but we want to use two hosts in the
cluster to run only certain VMs. I think you can do that with affinity
rules. But how can I restrict those tw
disabled
> HOST affinity rule negative set enforcing mode. Then add the rest of the
> VMs and the 2 Hosts to force them not to run on these hosts.
>
>
> https://www.ovirt.org/documentation/vmm-guide/chap-Administrative_Tasks.html
>
>
> Regards,
>
> Paul
Hi oVirt land
Hope you are well. Got a bit of an issue, actually a big issue. We had some
sort of dip. All the VMs are still running, but some of the hosts are
showing "Unassigned" or "NonResponsive". All the hosts were showing Up and
were fine before our dip. So I did increase vdsHeart
't put it in maintenance; the only options are "restart" or "stop".
Regards
Nar
On Thu, 6 Aug 2020 at 06:16, Strahil Nikolov wrote:
> After rebooting the node, have you "marked" it that it was rebooted ?
>
> Best Regards,
> Strahil Nikolov
>
> On 5 August
INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=:::127.0.0.1,41540 (api:54)
2020-08-06 07:23:00,337+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getA
the
> vdsm logs.
>
> Best Regards,
> Strahil Nikolov
>
> On 6 August 2020 at 7:40:23 GMT+03:00, Nardus Geldenhuys <
> nard...@gmail.com> wrote:
> >Hi Strahil
> >
> >Hope you are well. I get the following error when I tried to confirm
> >reboot:
Hi
Hope you are well. Did you find a solution for this? Think we have the same
type of issue.
Regards
Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-p
w_bug.cgi?id=1845152
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1846338
>
> regards,
> Artur
>
>
>
> On Thu, Aug 6, 2020 at 8:01 AM Nardus Geldenhuys
> wrote:
>
>> Also see this in engine:
>>
>> Aug 6, 2020, 7:37:17 AM
>> VDSM someserver command
It is generally not necessary to increase the number of threads
>> in this thread pool. To change the value
>> # permanently create a conf file 99-engine-scheduled-thread-pool.conf in
>> /etc/ovirt-engine/engine.conf.d/
>> ENGINE_SCHEDULED_THREAD_POOL_SIZE=100
>>
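The quoted advice above can be sketched as a couple of shell commands. This is a minimal sketch, assuming the stock conf.d path quoted above; it writes to a scratch directory by default so it can be tried without root (point CONF_DIR at the real path on an engine host), and the engine restart step afterwards is an assumption:

```shell
# Minimal sketch: persist the scheduled-thread-pool setting via a conf.d drop-in.
# ASSUMPTIONS: the real path is /etc/ovirt-engine/engine.conf.d (as quoted above);
# CONF_DIR defaults to a local scratch dir so this runs without root.
CONF_DIR="${CONF_DIR:-./engine.conf.d}"
mkdir -p "$CONF_DIR"
printf 'ENGINE_SCHEDULED_THREAD_POOL_SIZE=100\n' \
  > "$CONF_DIR/99-engine-scheduled-thread-pool.conf"
# On a real engine host, a restart would be needed afterwards (assumption):
#   systemctl restart ovirt-engine
cat "$CONF_DIR/99-engine-scheduled-thread-pool.conf"
```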
Hi oVirt land
Hope you are well. Running into this issue; I hope you can help.
CentOS 7, and it is updated.
oVirt 4.3, latest packages.
My network config:
[root@mob-r1-d-ovirt-aa-1-01 ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00
attaching ovirt engine.log
2019-04-10 10:09:46,786+02 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[81a82c39-4786-46db-b719-2808f736e359=VM]', sharedLocks=''}'
2019-04-10 10:09:46,831+02
attached is the engine.log
On Wed, 10 Apr 2019 at 10:39, Milan Zamazal wrote:
> nard...@gmail.com writes:
>
> > Wonder if this issue is related to our problem and if there is a way
> > around it. We upgraded from 4.2.8 to 4.3.2. Now when we start, some of
> > the VMs fail to start. You need to
Can a moderator delete this post, please? I can't find the option to delete it.
___
Can't find any logs containing the VM name on the host it was supposed to
start on. It seems that it does not even get to the host and that it fails in
the oVirt engine.
___
It seems that ovirt-engine thinks that the storage is attached to a running VM,
but it is not. Is there a way to refresh these stats?
___
Hi Milan
Nothing special. We did the upgrade on two clusters. One is fine and this one
is broken. Is there a way to rescan the cluster with all its VMs to pull
information
I also noticed that there is no NIC showing under the VMs' network tab. When
trying to add one, it complains that it ex
This is fixed. It was a table in the DB that was truncated; we fixed it by
restoring a backup.
___
Also get this after installing a new oVirt node. It stops after about 20 minutes.
___
Please find attached
On Mon, 15 Apr 2019 at 10:10, Dominik Holler wrote:
> Would you please share the last lines of all files
> in /var/log/openvswitch, the relevant lines of /var/log/message, and
> the output of
> ss -lap
> lsof
> ?
> Thanks
>
>
> On Sun, 14 Apr
>
>
>
>
> On Mon, 15 Apr 2019 12:01:54 +0200
> Nardus Geldenhuys wrote:
>
> > ss output attached
> >
> > On Mon, 15 Apr 2019 at 10:10, Dominik Holler wrote:
> >
> > > Would you please share the last lines of all files
> > > in
Hey
Do you use openvswitch? We don't, and I did notice that the messages disappear
after a while.
Regards
Nardus
___
Hi There
Hope you are well. We have two clusters with two ovirt-engines. On one
cluster's ovirt-engine the vm_dynamic table is almost 3.5 GB; is that
normal? We are on the latest engine software.
Regards
Nar
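One way to check where that space goes, sketched under assumptions (the engine database is the stock PostgreSQL one named "engine", and psql is reachable as the postgres user on the engine host); the command is printed rather than executed so it can be inspected first:

```shell
# Hypothetical size check for the vm_dynamic table, printed rather than run.
# ASSUMPTIONS: stock PostgreSQL engine database named "engine"; psql available
# as the postgres user on the engine host. pg_total_relation_size counts the
# table plus its indexes and TOAST data.
QUERY="SELECT pg_size_pretty(pg_total_relation_size('vm_dynamic'));"
echo "sudo -u postgres psql engine -c \"${QUERY}\""
```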
___
Hi
Stab in the dark: are you using DHCP for the engine and it is not getting an
address?
Ciao
Nar
___
Hi
Hope you are well. Quick question.
Can I disable SELinux on the oVirt nodes? Will there be any issues? I know
that you can't migrate from SELinux-enabled to SELinux-disabled nodes.
Regards
Nar
___
Hey Michal
Hope you are well. Thank you so much for this write-up and all the work you put
into it.
Do you have an easy way of using different data sources, more than one oVirt
engine DWH? Or will I have to adapt all the dashboards to the different data
sources?
Thanks again
Nardus
_
Hi
Thank you. I did figure it out after a while...
Another question: how can I see LUN utilization? We had an issue where one LUN
was over-utilized, meaning it was hammered by the VMs. We only discovered this
after the SAN team told us; we moved the VMs' storage around and resolved the i
This worked for us:
Edit /etc/httpd/conf.d/ovirt-engine-grafana-proxy.conf and add
"ProxyPreserveHost On". It should look like this now:
LoadModule proxy_module modules/mod_proxy.so
ProxyPreserveHost On
ProxyPass http://127.0.0.1:3000 retry=0 disablereuse=On
ProxyPassReverse http://12