oVirt Version: 4.5.0.8-1.el8
I ran into the same problem, and to answer your question: no, no manual
intervention or changes to repositories.
Regards,
Brett
On Tue, 14 Jun 2022 at 08:44, Sandro Bonazzola wrote:
> Hi, did you manually enable ovirt-45-upstream-testing repo?
>
> Il giorno mar 14 g
Thanks Nardus,
ProxyPreserveHost did the trick; all seems to be working now.
On Mon, 13 Jun 2022 at 12:43, Nardus Geldenhuys wrote:
> This worked for us:
>
> edit /etc/httpd/conf.d/ovirt-engine-grafana-proxy.conf
> add "ProxyPreserveHost On"
> should look like this now:
>
>
> LoadModule
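The quoted config is cut off above. As a sketch only (the stock file's directives, paths, and Grafana port are assumptions here; the one change the thread calls for is the ProxyPreserveHost line), the edited /etc/httpd/conf.d/ovirt-engine-grafana-proxy.conf might look roughly like:

```apache
# Sketch, not the stock file: only the ProxyPreserveHost line is the fix
# from this thread; the other directives are illustrative assumptions.
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

ProxyPreserveHost On

# Assumed path/port; Grafana usually listens on localhost:3000
ProxyPass "/ovirt-engine-grafana/" "http://localhost:3000/"
ProxyPassReverse "/ovirt-engine-grafana/" "http://localhost:3000/"
```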
s UI. (You'll need admin privileges for grafana to
> do so.)
> The password for the engine database's grafana user should be located in
> /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-grafana-database.conf
> on the engine host.
>
> -Patrick Hibbs
>
> On T
oVirt 4.5.0.8-1.el8
I tried to connect to Grafana via the monitoring portal link from the
dashboard, and all panels fail to display any data, with varying error
messages that all include 'Origin Not Allowed'.
I navigated to Data Sources and ran a test on the PostgreSQL connection
(localhost) which
ption to upgrade.
Cheers,
Brett
On Mon, 6 Jun 2022 at 14:05, Gianluca Cecchi
wrote:
> On Mon, Jun 6, 2022 at 2:54 PM Maton, Brett
> wrote:
>
>> Opened a bug report: 2093954 – Engine certificate alert, no option to
>> update offered by engine-setup (redhat.com)
>>
nt the weekend
> reinstalling the engine host. Due to the engine no longer being able to
> install new / reinstall existing hosts or enroll host certificates after
> doing so.
>
> Might just be better to wait until engine-setup does it automatically.
>
> -Patrick Hibbs
>
> On Mon
oVirt: 4.5.0.8-1.el8
Hi,
I got a warning yesterday that the engine certificate is 'about' to
expire, in 6 months
> Engine's certification is about to expire at 2022-12-10. Please renew
> the engine's certification.
I tried 'engine-setup --offline' but wasn't prompted to update the eng
Hi List,
I'm using the power_saving schedule to minimise the number of physical
hosts in use, which works as expected when scaling in; however, it doesn't
appear to ever scale out.
I've tried setting the minimum and maximum RAM and lowering the HighUsage
(Utilization in American) percentage but it nev
Probably worth pointing out that if you (as I did) update to 4.5.0.8 and
exclude the postgresql-jdbc update, you'll wind up with
500 - Internal Server Error
when you try to log in to the admin console again.
On Wed, 11 May 2022 at 13:43, Martin Perina wrote:
> Hi,
>
> oVirt 4.5.0.8 async release
I ran into this problem yesterday; if you've already upgraded 'everything'
you can roll back the postgresql driver with
dnf downgrade postgresql-jdbc
and then restart the engine
systemctl restart ovirt-engine
On Sat, 30 Apr 2022 at 08:10, Latchezar Filtchev wrote:
> Dear Jan,
>
> Please ch
Hosts are still not shutting down; VMs get moved from host to host by
Migration initiated by system (reason load balancing)
events every few hours, unfortunately there is nothing in the event logs
about power management though.
Any thoughts or suggestions on logs to look into?
Regards,
Brett
Hi,
I'm having trouble with the power_saving Scheduling Policy not shutting
down idle hosts
Policy is more or less default; I added 'HostsInReserve 0' to see if
that would help, and 24 hrs later I bumped
CpuOverCommitDurationMinutes to 15, which didn't make a difference either.
(not unexp
Thanks for the replies; as it turns out, it was nothing to do with
/etc/exports or regular file system permissions.
Synology have applied their own brand of Access Control Lists (ACLs) to
shared folders.
Basically I had to run the following commands to allow vdsm:kvm (36:36) to
read and write to t
Hi List,
I can't get oVirt 4.4.8.5-1.el8 (running on oVirt Node hosts) to connect
to an NFS share on a Synology NAS.
I gave up trying to get the hosted engine deployed and put that on an
iscsi volume instead...
The directory being exported from NAS is owned by vdsm / kvm (36:36)
perms I've
I use a Dell SCv2000, which is plenty for 3 hosts without needing a fibre
channel switch.
They do the SCv3000 series now, might suit your needs.
Regards,
Brett
On Thu, 18 Feb 2021 at 00:44, Chris Adams wrote:
> Once upon a time, matthew.st...@fujitsu.com
> said:
> > Disks + Linux + iSCSI targe
Last time I had to forcibly remove a node because it was impossible to do
so otherwise, the node had never had anything to do with gluster, so I
STRONGLY dispute your claim that fixing an issue (that was not stated) will
fix anything.
On Tue, 21 Apr 2020 at 22:39, Maton, Brett wrote:
> I'
Strahil Nikolov
>
> On Tuesday, 21 April 2020 at 19:46:47 GMT+3, Maton, Brett <
> mat...@ltresources.co.uk> wrote:
>
> Last time I had to do this I removed from the database.
>
> (at your own risk)
> On ovirt engine
Last time I had to do this I removed from the database.
(at your own risk)
On ovirt engine switch to the postgres user from root:
su - postgres
Enable postgres 10 and connect to the engine database:
. scl_source enable rh-postgresql10
psql -d engine
Change to the name (Name column of the host
Daft question, but are the switch ports configured to allow the VLAN
traffic through?
On Sat, 11 Apr 2020 at 00:46, Brian Dumont wrote:
> Evan,
>
> Thanks for your help. I think I've got the Networks setup correctly on the
> Setup Hosts Network section, but obviously I've got something fundamen
a logging generated by this flag. Default value of this
> flag is very large and therefore you should see no extra logging unless the
> flag is overridden.
>
>
>
> The default in Windows is 1 GB. I’m not sure about Linux.
>
>
>
> I hope this is helpful.
>
>
>
The hosts are identical, and yes, I'm sure about the 563 terabytes, which
is obviously wrong, which is why I mentioned it. Possibly an overflow?
On Fri, 10 Apr 2020, 21:31 , wrote:
> I have a Windows 10 guest and a Server 2016 guest that migrate without an
> issue.
> Are your CPU architectures compar
: 1737283
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1737283 (Wed Apr 8 16:16:17 2020)
> host-id=2
> score=0
> vm_conf_refresh_time=1737037 (Wed Apr 8 16:12:11 2020)
> conf_on_shared_stora
First steps, on one of your hosts as root:
To get information:
hosted-engine --vm-status
To start the engine:
hosted-engine --vm-start
On Wed, 8 Apr 2020 at 17:00, Shareef Jalloq wrote:
> So my engine has gone down and I can't ssh into it either. If I try to
> log into the web-ui of the node
at 11:56, Maton, Brett wrote:
> I haven't got an active RHEL subscription so I can't view that solution
> unfortunately.
>
>
> Thanks for the log pointers though, looking in the qemu log I'm not
> surprised it's crashing...
>
> tcmalloc: large alloc
ackages related need to be upgraded).
>
> You may also find some more information about that error on the vdsm log
> (/var/log/vdsm/vdsm.log)
> and the qemu log (/var/log/libvirt/qemu/vm_name.log)
>
> [1] https://access.redhat.com/solutions/3423481
>
>
> *Regards,*
>
I recently added a Windows 10 Pro 64 bit (release 1909) VM, and I'm seeing
a lot of failures when oVirt tries to move the VM to another host
(triggered by load balancing).
These errors are showing up in the UI event log:
Migration failed (VM: , Source: , Destination: ).
Followed by:
VM is down
pr 2020 at 14:58, Liran Rotenberg wrote:
>
>
> On Sun, Apr 5, 2020 at 3:38 PM Maton, Brett
> wrote:
>
>> I've got a cluster made up of five physical hosts, (Dells with idrac 7
>> management)
>> Power management / fencing enabled on all hosts.
>>
I've got a cluster made up of five physical hosts, (Dells with idrac 7
management)
Power management / fencing enabled on all hosts.
I've enabled the power_saving scheduling policy on my cluster; it's
migrated all the VMs to a couple of physical hosts, so three are sitting
idle with no VMs.
Shoul
irtualization/4.2/html/self-hosted_engine_guide/troubleshooting
> > >
> > >
> > >
> > >It should tell you the steps to take to troubleshoot your deployment.
> > >
> > >
> > >
> > >Eric Evans
> > >
> > >Digital Data Servic
ing
>
>
>
> It should tell you the steps to take to troubleshoot your deployment.
>
>
>
> Eric Evans
>
> Digital Data Services LLC.
>
> 304.660.9080
>
>
>
> *From:* Maton, Brett
> *Sent:* Tuesday, March 31, 2020 11:52 PM
> *To:* eev...@dig
18:01, Strahil Nikolov wrote:
> On April 1, 2020 7:44:09 AM GMT+03:00, "Maton, Brett" <
> mat...@ltresources.co.uk> wrote:
> >I currently don't have a hosted engine...
> >
> >I tried the usual command as you suggested, but that just says 'You
> >
ed, 1 Apr 2020 at 05:37, Strahil Nikolov wrote:
> On April 1, 2020 6:51:49 AM GMT+03:00, "Maton, Brett" <
> mat...@ltresources.co.uk> wrote:
> >So, how would I go about disabling global maintenance when hosted
> >engine
> >isn't running?
> >
Have you tried
hosted-engine --vm-start
on any of the HVs?
On Wed, 1 Apr 2020 at 02:57, Mark Steele wrote:
> Hello,
>
> We are on an older version (3.x - cannot be specific as I cannot get my
> ovirt hosted engine up).
>
> We experienced a storage failure earlier this evening - the hosted engin
>> Eric Evans
>>
>> Digital Data Services LLC.
>>
>> 304.660.9080
>>
>>
>>
>> *From:* Maton, Brett
>> *Sent:* Tuesday, March 31, 2020 2:35 PM
>> *To:* Ovirt Users
>> *Subject:* [ovirt-users] Failing to redeploy self hosted en
Oooh probably...
I'll give that a try in the morning, cheers for the tip!
On Tue, 31 Mar 2020, 21:23 , wrote:
> Did you put the ovirt host into global maintenance mode? That may be the
> issue.
>
>
>
> Eric Evans
>
> Digital Data Services LLC.
>
> 304.66
I keep running into this error when I try to (re)deploy the self-hosted engine.
# ovirt-hosted-engine-cleanup
# hosted-engine --deploy
...
...
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail with error description]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
host has been
e physical one, can anybody confirm this?
> Le 25/03/2020 à 13:02, Maton, Brett a écrit :
>
> Can't say that I've tried it, but it looks like it's a setting within iDRAC
>
> This link is for iDRAC 8, but it's probably similar for 7 and 9...
>
> https://www.d
Can't say that I've tried it, but it looks like it's a setting within iDRAC
This link is for iDRAC 8, but it's probably similar for 7 and 9...
https://www.dell.com/community/PowerEdge-Hardware-General/iDRAC-8-NIC-Port-Sharing/td-p/5078061
On Wed, 25 Mar 2020 at 09:33, Nathanaël Blanchet wrote:
I think all you need on the inaccessible host is
/root/.ssh/authorized_keys
copied from a working host (with the same ownership, permissions and
SELinux context)
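The ownership/permission part of that checklist can be sketched on a scratch directory (the real target is /root/.ssh on the unreachable host, where you would also run restorecon to fix the SELinux context):

```shell
# Scratch-directory sketch: the modes the copied files should end up with.
# On the real host the path would be /root/.ssh, owned root:root, and you
# would finish with: restorecon -Rv /root/.ssh
home=$(mktemp -d)
mkdir -m 700 "$home/.ssh"                              # ~/.ssh must be 0700
install -m 600 /dev/null "$home/.ssh/authorized_keys"  # key file must be 0600
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```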
On Sat, 14 Mar 2020 at 11:15, wrote:
> It worked with the password.
> I recopied the authorized keys and ssh keys from engine host to
Have you checked the file permissions and SELinux context of the SSH keys
you copied to kvm01 ?
On Fri, 13 Mar 2020 at 23:39, wrote:
> This is from the secure log, /var/log/secure
>
> Mar 13 19:23:17 kvm01 sshd[46045]: Accepted publickey for root from
> 192.168.254.240 port 39668 ssh2: RSA
> SH
=python2-sanlock
On Thu, 8 Aug 2019 at 10:27, Sandro Bonazzola wrote:
>
>
> Il giorno gio 8 ago 2019 alle ore 11:20 Maton, Brett <
> mat...@ltresources.co.uk> ha scritto:
>
>> Sure, it seems to be running now.
>>
>> For anyone else with this issue, I
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.3
exclude=python2-sanlock
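For context, that exclude line sits inside a repo stanza in /etc/yum.repos.d/; a sketch of the whole stanza (the repo id, name, and baseurl below are placeholders — only the gpgkey and exclude lines come from this thread):

```ini
[ovirt-4.3-pre]
name=oVirt 4.3 pre-release (placeholder stanza)
baseurl=https://resources.ovirt.org/pub/ovirt-4.3-pre/rpm/el7/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.3
exclude=python2-sanlock
```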
On Thu, 8 Aug 2019 at 09:55, Sandro Bonazzola wrote:
>
>
> Il giorno gio 8 ago 2019 alle ore 10:37 Maton, Brett <
> mat...@ltresources.co.uk> ha scritto:
>
>> Thanks Sandro,
>>
>>R
sanlock-lib = 3.7.1-2.el7
Available: sanlock-lib-3.7.1-2.1.el7.x86_64 (ovirt-4.3-fix)
sanlock-lib = 3.7.1-2.1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
On Thu, 8 Aug 2019 at 08:59, Sandro Bonazzola
I just tried to update my 4.3.6 test lab and got the following RPM
dependency issue:
rpm -qa ovirt-release*
ovirt-release43-pre-4.3.6-0.1.rc1.el7.noarch
Error encountered:
yum upgrade
...
Error: Package: vdsm-4.30.26-1.el7.x86_64 (ovirt-4.3-pre)
Requires: sanlock-python >= 3.7.3
Hi,
I just ran yum update on my test cluster and ran into the following issue:
I did notice that the python2-ioprocess is currently installed from the
ovirt-4.2 repo...
Any suggestions?
Thanks,
Brett
Repo RPM: ovirt-release43-pre-4.3.6-0.1.rc1.el7.noarch
yum -y upgrade
...
Error: Packag
I ssh'd to my Synology NAS and created the user and group vdsm/kvm with id
36 and chown'd the share.
The permissions on shared folders for oVirt are 0750, nowhere near as
open as 0777...
Not had any problems with it over the years, just need to change the
ownership of the new shares before usi
28, 2019, 13:15 Strahil wrote:
>
>> Hi Sandro,
>> Should I open a bug or you can do it based on this thread?
>>
>> Best Regards,
>> Strahil Nikolov
>> On May 28, 2019 10:47, "Maton, Brett" wrote:
>>
>> I've just upgraded to 4.3.4 RC2 an
Where is the configuration that VDSM uses to generate ifcfg files?
My nameservers have moved, but VDSM seems to regenerate the ifcfg
(ovirtmgmt) file when the server is rebooted, overwriting my changes (the
correct nameservers) and putting the wrong (old) nameserver addresses back
in.
Where
not seen that one before either ;)
On Fri, 8 Feb 2019 at 15:06, Greg Sheremeta wrote:
> So, while we wait for word on hyperkitty, do y'all use the ovirt subreddit?
> https://www.reddit.com/r/ovirt
>
>
> On Fri, Feb 8, 2019 at 8:44 AM Maton, Brett
> wrote:
>
>> H
I just tried the suggested patch/mod, applying it with a simple sed call:
sed -i "75i\'--verbose=db_ctl_base:syslog:off'"
/usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet
So far it appears to have squashed the no key "odl_os_hostconfig_hostid"
error message
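The '75i\' form inserts the quoted text before line 75 of the hook. The same sed idiom, demonstrated on a scratch file (the real target is /usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet):

```shell
# Demonstrate GNU sed's "Ni\" insert-before-line-N form on a scratch file.
tmp=$(mktemp)
printf 'arg_one\narg_two\n' > "$tmp"
# Insert the quoted option before line 2 of the scratch file
sed -i "2i\\'--verbose=db_ctl_base:syslog:off'" "$tmp"
cat "$tmp"
```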
On Tue, 29 Jan 2019 at 09
p...@ovirt.org>
> >
> > Thanks!!
> >
> > On Fri, Feb 8, 2019 at 6:38 AM Maton, Brett > <mailto:mat...@ltresources.co.uk>> wrote:
> >
> > Hadn't noticed the link before!
> >
> > On Fri, 8 Feb 2019 at 11:31, Greg Sh
I'm using openbacchus at the moment and have it working with oVirt 4.2.8
and 4.3.
As you know it's UI based so doesn't tick your command line box (yet ;) )
On Mon, 4 Feb 2019 at 18:18, Torsten Stolpmann
wrote:
> On 04.02.2019 16:03, Mike wrote:
> > 04.02.2019 17:45, Torsten Stolpmann пишет:
> >
Hadn't noticed the link before!
On Fri, 8 Feb 2019 at 11:31, Greg Sheremeta wrote:
> Yep. It's supposed to be instant. It's broken on this thread -- seems to
> work on others. I reported it and hopefully we'll get that fixed ASAP.
>
> Greg
>
> On Fri, Feb 8, 2019 at 6:23 AM Josep Manel Andrés Mo
+1 for me, a forum would be much easier to search
On Fri, 8 Feb 2019 at 08:06, Josep Manel Andrés Moscardó <
josep.mosca...@embl.de> wrote:
> Hi all,
> I am just wondering if anyone like me would like to have everything that
> is bump here in a forum, with all the benefits it brings (and people
>
In the updated UI, it doesn't seem possible to migrate the hosted engine
from Compute -> Hosts -> Virtual Machines
anymore, although there does appear to be a 'new' Cancel Migration button.
It's handy to be able to migrate the hosted engine from this view, I
normally manually migrate the hosted engine
Importing usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle
from vdsm-api-4.30.5-2.gitf824ec2.el7.noarch.rpm
also fixes the issue for gluster storage
On Mon, 14 Jan 2019 at 11:34, Gianluca Cecchi
wrote:
> On Mon, Jan 14, 2019 at 11:09 AM Marcin Sobczyk
> wrote:
>
>>
>> Hi,
>>
>> There ar
Would this default issue also affect gluster storage?
On Sun, 13 Jan 2019 at 18:16, Nir Soffer wrote:
> On Sun, Jan 13, 2019 at 4:35 PM Gianluca Cecchi
> wrote:
>
>>
>>
>> On Sun, Jan 13, 2019 at 12:38 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Sun, Jan 13, 2019 at 10:
Raised bug report https://bugzilla.redhat.com/show_bug.cgi?id=1644550
5 days with no hosted engine to manage production servers
On Tue, 30 Oct 2018 at 06:45, Maton, Brett wrote:
> Any suggestions on the FATAL: Failed checking DbJustRestored error ?
>
> On Sat, 27 Oct 2018 at 16:
Any suggestions on the FATAL: Failed checking DbJustRestored error ?
On Sat, 27 Oct 2018 at 16:01, Maton, Brett wrote:
> Not sure what you mean by ovirt-engine-appliance, I just deploy with
> 'hosted-engine --deploy...' and keep it up to date.
>
> On Fri, 26 Oct 2018 at 13
, Oct 26, 2018 at 1:31 PM Maton, Brett
> wrote:
>
>> Hi Simone,
>>
>> I'm seeing the same error with the new hosted-engine-setup RPM...
>> ...
>> [ ERROR ] fatal: [ovirt.gh.ltresources.co.uk]: FAILED! => {"changed":
>> true, "cmd":
", "stdout_lines": ["Preparing to restore:", "- Unpacking
file '/root/engine_backup'", "Restoring:", "- Files", "Provisioning
PostgreSQL users/databases:", "- user 'engine', database 'engine'", "
, Oct 26, 2018 at 9:23 AM Maton, Brett
> wrote:
>
>> oVirt: 4.2.6.2-1
>>
>> I'm Moving hosted engine from one storage domain to another by backing up
>> and restoring the engine.
>>
>> New VM provisioned in new storage domain, I get as far
oVirt: 4.2.6.2-1
I'm moving the hosted engine from one storage domain to another by backing
up and restoring the engine.
With the new VM provisioned in the new storage domain, I get as far as
trying to restore the backup, but am getting this DbJustRestored error:
engine-backup --mode=restore --file=engine.backu
58 PM Piotr Kliczewski <
>> piotr.kliczew...@gmail.com> wrote:
>>
>>> This error was raised on vdsm side here [1]. I was unable to find
>>> 'getiterator' in vdsm code base.
>>> Please provide gluster related logs.
>>>
>>> Thi
I'm seeing the following error appear in the event log every 10 minutes
for each participating host in the gluster cluster:
GetGlusterVolumeHealInfoVDS failed: Internal JSON-RPC error: {'reason':
"'bool' object has no attribute 'getiterator'"}
Gluster brick health is good.
Any ideas?
oVirt 4.2.
Having trouble upgrading my test instance (4.2.7.1-1.el7); there appear to
be some dependency issues:
Transaction check error:
file /usr/share/cockpit/networkmanager/manifest.json from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1
As suggested, maybe 'run once' will suit; there's a checkbox at the bottom
of the options panel to roll back options after reboot.
On Sat, 29 Sep 2018 at 22:06, Zach Dzielinski
wrote:
> My current usage of oVirt requires me to switch the boot priorities for
> virtual machines from hard disk to pxe
After a couple of restarts I now have the gluster service and bricks up and
running.
Thanks again for the pointer
On Thu, 27 Sep 2018 at 14:40, Maton, Brett wrote:
> Thanks Paul, I'll give that a go
>
> On Thu, 27 Sep 2018 at 14:38, Staniforth, Paul <
> p.stanifo...@leedsbecke
install
> menu so installs the gluster packages. I tried a short-cut with ours and as
> one host was in maintenance it reinstalled and activated it so there was
> somewhere for the VMs and SPM to migrate to.
>
>
> Regards,
>
> Paul S.
> -----
I just enabled the Gluster service in an existing oVirt 4.2.7-1 cluster via
the Web UI, which put all hosts into non-operational status.
Lots of events being created:
Could not find gluster uuid of server host001.local on Cluster testlab.
Could not find gluster uuid of server host002.local on Clust
>
> *From:* femi adegoke
> *Sent:* 11 September 2018 11:48
> *To:* Maton, Brett
> *Cc:* Ovirt Users
> *Subject:* [ovirt-users] Re: Managing multiple oVirt installs?
>
>
>
> Brett,
>
>
>
> Did you install ManageIQ in a vm?
>
> What instruc
ides did you follow?
>
> On Sep 10 2018, at 11:13 pm, Maton, Brett
> wrote:
>
>
> Installed manageIQ yesterday, looks like it's going to cover my needs
> thanks for suggesting it.
>
> On 4 September 2018 at 13:06, femi adegoke
> wrote:
>
> Just an FYI:
> The
Installed manageIQ yesterday, looks like it's going to cover my needs
thanks for suggesting it.
On 4 September 2018 at 13:06, femi adegoke wrote:
> Just an FYI:
> The Glance registry does not have the latest current stable release which
> is Gaprindashvili-4
Good question, I'm interested in the solution.
On 3 September 2018 at 01:39, femi adegoke wrote:
> Let's say you have multiple oVirt installs.
>
> How can they all be "managed" by using a single engine web UI (so I don't
> have to login 5 different times)?
That should have been covered by the installer really, good to know that
you found the issue though.
On 23 August 2018 at 04:13, Wesley Stewart wrote:
> I'm an idiot, it was selinux
>
> setsebool -P httpd_can_network_connect true
>
> On Wed, Aug 22, 2018, 10:19 PM Wesley Stewart wrote:
>
>> I a
What used to catch me out here is that you need to set 'Choose hosted
engine deployment action' to 'Deploy' when adding a new physical host.
On 22 August 2018 at 08:51, Simone Tiraboschi wrote:
> The hosts that are eligible for running the engine VM should be flagged
> with a silver crown, the ho
AM, Yedidyah Bar David
>> wrote:
>>
>>> On Tue, Aug 14, 2018 at 9:27 AM, Maton, Brett
>>> wrote:
>>> >
>>> > Just tried to update my test cluster to 4.2.6.2 :
>>> >
>>> >
>>> > [ INFO ] Stage: Misc config
Just tried to update my test cluster to 4.2.6.2 :
[ INFO ] Stage: Misc configuration
[ INFO ] Running vacuum full on the engine schema
[ INFO ] Running vacuum full elapsed 0:00:04.523561
[ INFO ] Upgrading CA
[ INFO ] Backing up database localhost:ovirt_engine_history to
'/var/lib/ovirt-engi
Also works with Firefox, thanks.
On 2 August 2018 at 23:34, Jayme wrote:
> I got it working in chrome by setting spice+vnc then selecting the novnc
> option in the console options
>
> On Thu, Aug 2, 2018, 6:50 PM Christophe TREFOIS, <
> christophe.tref...@uni.lu> wrote:
>
>> I guess there are co
I've not had any luck on macOS for a while now, even with the .vv files and
RemoteViewer 0.5.7:
I get a dialog saying 'unable to connect to the graphical server'.
Same servers work just fine from a Windows client though.
On 2 August 2018 at 12:06, Greg Sheremeta wrote:
> On Wed, Aug 1, 2018 at 7:03 PM
; 'hypervisor-interface' config value to 'vdsmjsonrpcbulk' instead of
> 'vdsmjsonrpcclient'.
>
> On Wed, 25 Jul 2018 at 08:56, Maton, Brett
> wrote:
>
>> I upgraded my test cluster to 4.2.5.2-1 last night (hosts rebooted after
>> update) and I've star
18 July 2018 at 16:10, Maton, Brett wrote:
> Thanks,
>
> Cluster is all installed from pre-release, maybe I managed to get an
> iffy rpm
>
> On 18 July 2018 at 15:50, Andrej Krejcir wrote:
>
>> Yes, copying it from another host with mom version 0.5.12 is enough.
Install the 4.2 repo listed here
https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt/
On 22 July 2018 at 22:28, aduckers wrote:
> Does that get me from 4.1 to 4.2 though? What about adding the 4.2
> repositories?
>
> > On Jul 20, 2018, at 9:19 PM, femi adegoke
> wrote:
> >
Hi Lakhwinder,
The current stable version is 4.2.
https://www.ovirt.org/download/
On 22 July 2018 at 12:07, Greg Sheremeta wrote:
> We recommend the Cockpit UI
> https://www.ovirt.org/documentation/how-to/hosted-
> engine/#fresh-install-via-web-ui
>
> Greg
>
>
> On Sat, Jul 21, 2018 at 1
>
> On Wed, 18 Jul 2018 at 16:38, Maton, Brett
> wrote:
>
>> Bingo
>>
>> How could that file not be installed/deployed ?
>>
>>
>> Should I simply copy it from one of the other hosts to make the message
>> go away ?
>>
>> On 18 July
n2.7/site-packages/mom/HypervisorInterfaces/
> vdsmjsonrpcclientInterface.py
>
>
> Andrej
>
> On Wed, 18 Jul 2018 at 16:04, Maton, Brett
> wrote:
>
>> I just checked the mom version, it's already at 0.5.12
>>
>> # rpm -qa mom
>> mom-0.5.12-1.el7.centos.
is configured to use
> 'vdsmjsonrpcclient' module to communicate with vdsm, but it cannot find
> this module, probably because it is an older version.
>
> Updating MOM to version 0.5.12 should fix it.
>
>
> Regards,
> Andrej
>
> On Wed, 18 Jul 2018 at 14
FWIW:
This test cluster is 3x HP MicroServer Gen 8 16GB RAM, Intel(R) Xeon(R) CPU
E3-1220 V2 @ 3.10GHz
Network is bonded fail-over.
Regards,
Brett
On 18 July 2018 at 13:15, Francesco Romani wrote:
> Thanks!
>
>
> On 07/18/2018 02:11 PM, Maton, Brett wrote:
>
>> Sur
Sure no problem, mom log attached.
On 18 July 2018 at 12:36, Francesco Romani wrote:
>
> On 07/18/2018 07:24 AM, Maton, Brett wrote:
>
>> Thanks Francesco,
>>
>> Log attached.
>>
>
> Interestings, it seems the fault comes from MOM:
>
> 2018-07-18
Thanks Francesco,
Log attached.
On 17 July 2018 at 13:12, Francesco Romani wrote:
> On 07/17/2018 07:30 AM, Maton, Brett wrote:
>
> I've got one physical host in a 3 host CentOS7.5 cluster that reports the
> following error several times a day
>
> VDSM node3.examp
I've got one physical host in a 3 host CentOS7.5 cluster that reports the
following error several times a day
VDSM node3.example.com command Get Host Statistics failed: Internal
JSON-RPC error: {'reason': ':\'NoneType\' object has no attribute
\'statistics\'">'}
Any ideas what the problem might b
You could also run engine-setup on the hosted engine again.
On 11 July 2018 at 21:17, Bruckner, Simone
wrote:
> Hi all,
>
>
>
> I have a VM stuck in state „Migrating to“. I restarted ovirt-engine and
> rebooted all hosts, no success. I run ovirt 4.2.4.5-1.el7 on CentOS 7.5
> hosts with vdsm-4.20.32
eta wrote:
> Hi Brett,
>
> This is a bug. It could be https://bugzilla.redhat.
> com/show_bug.cgi?id=1533214
> If you think so, please add any details you think would help. If you think
> it's something else, please open a new bug.
>
> Best wishes,
> Greg
>
> On T
Done: https://bugzilla.redhat.com/show_bug.cgi?id=1598364
On 5 July 2018 at 07:30, Idan Shaby wrote:
> Hi,
>
> Thanks for letting us know!
> Can you please file a bug for it?
>
>
> Regards,
> Idan
>
> On Wed, Jul 4, 2018 at 9:07 AM, Maton, Brett
> wrote:
&
The table which displays disk info is too small when moving disks between
storage domains, probably because the progress bar is added below the
'locked' status but the table doesn't resize to accommodate the taller rows.
Tried in Chrome, Edge, Firefox, Internet Explorer & Safari
Actually the extra nic is assigned to network 'Empty' in the edit VM form,
and is throwing the html null error in the snapshots form/view
On 3 July 2018 at 14:26, Maton, Brett wrote:
> I think the issue is being caused by a missing network.
>
> One of the upgrades of my test
ll as they're not really needed at the moment.
The VMs that are throwing the html null error when trying to view
snapshots have a secondary NIC that isn't assigned to any network.
Regards,
Brett
On 2 July 2018 at 08:04, Maton, Brett wrote:
> Hi,
>
> I'm trying to
Hi,
I'm trying to restore a VM snapshot through the UI but keep running into
this error:
Uncaught exception occurred. Please try reloading the page. Details:
Exception caught: html is null
Please have your administrator check the UI logs
ui log attached.
CentOS 7
oVirt 4.2.5-1.el7
Regards,
B
I've not needed to do it with production data.
But when I trash my testlab hosted engine I regularly re-import the
existing VM (not hosted engine) storage domain.
Not had many issues with the process.
That said, I've only done it with all VMs being down and physical hosts
rebooted to ensure that
Thanks for the tip Simone, all working now.
Best regards,
Brett
On 29 May 2018 at 09:24, Simone Tiraboschi wrote:
>
>
> On Tue, May 29, 2018 at 9:32 AM, Maton, Brett
> wrote:
>
>> Hi Ido,
>>
>> It appears that I have the latest packages installed, any