Hi,
On Wed, May 27, 2020 5:38 pm, Gianluca Cecchi wrote:
[snip]
> But you hated Python, didn't you? ;-)
I do. Can't stand it. Doesn't mean I can't read it and/or write it, but
I have to hold my nose doing it. Syntactic white space? Eww. But Python
is already installed and used and,
Hi, everyone. When I use the oVirt cluster, there are multiple virtual machines
in the cluster. The engine controller shows excessive CPU load during
administration. I would like to ask if there is a way to extend the engine
controller's CPU and memory online. I would be very grateful, thank you.
I wonder if this is something that needs QEMU 4.2, as the release notes
specifically mention handling of TSX for x86, as seen here:
https://wiki.qemu.org/ChangeLog/4.2#x86. It looks like QEMU 4.1.0 is in use in
oVirt 4.4.0.
On Wed, May 27, 2020 at 4:52 PM Randall Wood wrote:
> I have a three node oVirt 4.3.7 cluster that is using GlusterFS as the
> underlying storage (each oVirt node is a GlusterFS node). The nodes are
> named ovirt1, ovirt2, and ovirt3. This has been working wonderfully until
> last week when
On Wed, May 27, 2020 at 4:50 PM Derek Atkins wrote:
> Hi,
>
> (Sorry if you get this twice -- looks like it didn't like the python
> script in there so I'm resending without the code)
>
> Gianluca Cecchi writes:
>
> > Hi Derek,
> > today I played around with Ansible to accomplish, I think,
On Wed, May 27, 2020 at 7:47 PM Carlos C wrote:
> Hello folks,
>
> Is there any documentation about this upgrade, or does someone have the
> steps for it?
>
> Best regards
> Carlos
Hi,
you can read some considerations of mine in a thread from mid-April, covering
both the multi-host and the single-host case:
Greetings,
I have a running oVirt install that's been working for almost 2 years.
I'm building a _completely_ new install. I mention it because it is
useful for me to compare configurations when I run into issues like this
one.
Right now there are three physical hosts:
1x management where I run
Hello folks,
Is there any documentation about this upgrade, or does someone have the steps
for it?
Best regards
Carlos
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement:
I am running Mac OS X, but I was able to import the CA cert and I can see it in
my Keychain. However, when I try to bring up the console I get:
Can't connect to websocket proxy server wss://lfg-kvm.corp.lfg.com:6100. Please
check that:
websocket proxy service is running,
firewalls are properly
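On the engine machine, those two checks boil down to something like this (a sketch; `ovirt-websocket-proxy` is the standard oVirt websocket proxy unit, and port 6100 is taken from the error above):

```shell
# Check that the websocket proxy service is active and that port 6100
# is actually listening (run on the machine named in the wss:// URL).
check_wsproxy() {
    systemctl is-active ovirt-websocket-proxy &&
    ss -tln | grep -q ':6100 '
}
```

Run `check_wsproxy` on the engine host; if either step fails, that narrows down which of the listed conditions is the problem.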
Eh, no point in creating a repo for that, so I just put them on the web:
https://www.ihtfp.org/ovirt/
-derek
On Wed, May 27, 2020 11:05 am, Staniforth, Paul wrote:
>
> Thanks Derek,
> GitHub or GitLab probably.
>
> Regards,
> Paul S.
>
Thanks Derek,
GitHub or GitLab probably.
Regards,
Paul S.
From: Derek Atkins
Sent: 27 May 2020 15:50
To: Gianluca Cecchi
Cc: tho...@hoberg.net ; users
Subject: [ovirt-users] AutoStart VMs (was Re: Re: oVirt 4.4.0 Release
On Wed, May 27, 2020 at 7:42 AM Louis Bohm wrote:
> OS: Oracle Linux 7.8 (unbreakable kernel)
> Using Oracle Linux Virtualization Manager: Software
> Version:4.3.6.6-1.0.9.el7
>
> Since I am running all of it on one physical machine I opted to install
> the ovirt-engine using the accept defaults
Hi,
(Sorry if you get this twice -- looks like it didn't like the python
script in there so I'm resending without the code)
Gianluca Cecchi writes:
> Hi Derek,
> today I played around with Ansible to accomplish, I think, what you currently
> do in oVirt shell.
> It was the occasion to learn,
I have a three node oVirt 4.3.7 cluster that is using GlusterFS as the
underlying storage (each oVirt node is a GlusterFS node). The nodes are named
ovirt1, ovirt2, and ovirt3. This has been working wonderfully until last week
when ovirt2 crashed (it is *old* hardware; this was not entirely
Sorry, by overloaded I meant in terms of I/O. Because this is an
active-layer merge, the active layer
(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
(a78c7505-a949-43f3-b3d0-9d17bdb41af5) before the VM switches to use
the base as the active layer. So if there is constantly
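A rough way to watch such a merge from the host is to poll the block job until libvirt stops reporting one (a sketch; "myvm" and "sda" are placeholders, not names from this thread):

```shell
# Poll the block job until libvirt stops reporting one.
# "myvm" and "sda" are placeholders for the real VM name and disk.
VM=myvm
DISK=sda
poll_blockjob() {
    # --info prints a "[ NN %]" progress line while a job is active;
    # errors are silenced so the loop also ends if virsh/the VM is gone.
    while virsh -r blockjob "$VM" "$DISK" --info 2>/dev/null | grep -q '%'; do
        sleep 5
    done
}
poll_blockjob && echo "no active block job left for $VM/$DISK"
```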
Hi,
I think I know (it's hard to tell without more logs, but anyway):
It's because your PKI had expired and was thus renewed. If you used the
command line to restore/deploy, you were also asked:
'Renew engine CA on restore if needed? Please notice '
Replying to my own post here, I've verified that it's currently not possible to
do a greenfield deploy of oVirt 4.4.0 w/ hosted engine on EPYC CPUs due to the
system setting a requirement of 'virt-ssbd' for the final HE definition after
the move to shared storage. The local HE runs perfectly
Hello,
Yes, no problem. The XML is attached (I omitted the hostname and IP).
The server is quite big (8 CPUs / 32 GB RAM / 1 TB disk) yet not overloaded.
We have multiple servers with the same specs with no issues.
Regards,
On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik wrote:
> Can you share the VM's
Hello
I'm not sure how, but I've got pending changes on the HostedEngine VM on
a 3-node 4.4 cluster, and those changes are never applied. Is there a way
to cancel pending VM changes in general?
Thanks.
Can you share the VM's xml?
It can be obtained with `virsh -r dumpxml <vm-name>`
Is the VM overloaded? I suspect it has trouble converging
taskcleaner only cleans up the database, I don't think it will help here
Hey Didi,
yes this one.
The user has spent some time filling in the template, so we shouldn't expect
them to also dedicate extra time to searching for articles :)
Best Regards,
Strahil Nikolov
On 26 May 2020 at 14:29:57 GMT+03:00, Yedidyah Bar David
wrote:
>On Tue, May 26,
Hi,
I have upgraded one of my nodes to oVirt-node 4.4.0 and I am testing the basic
functionality in preparation for the full cluster migration.
Unfortunately, snapshot deletions are not working at the moment; I get a live
merge failure most of the time due to a libvirt error --
Hello,
Running `virsh blockjob <vm> sda --info` a couple of times shows 99 or 100
%. It looks like it is stuck / flapping for some reason.
Active Block Commit: [ 99 %]
Active Block Commit: [100 %]
What would be the best approach to resolve this?
I see taskcleaner.sh can be used in cases like these?
On Wed, May 27, 2020 at 2:33 PM Oliver Leinfelder
wrote:
>
> Hi,
>
> Yedidyah Bar David wrote:
> >
> > In any case (perhaps not relevant to you right now, if indeed engine-setup
> > succeeded), usually the engine vm is left running at the end of a failed
> > deploy. If it's still the local vm,
Maybe someone from virt team could refer to this.
On Wed, May 27, 2020 at 2:06 PM Gianluca Cecchi
wrote:
> On Wed, May 27, 2020 at 11:26 AM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Wed, May 27, 2020 at 10:21 AM Amit Bawer wrote:
>>
>>> From your vdsm log, it seems as
OS: Oracle Linux 7.8 (unbreakable kernel)
Using Oracle Linux Virtualization Manager: Software Version:4.3.6.6-1.0.9.el7
Since I am running all of it on one physical machine I opted to install the
ovirt-engine using the accept defaults option.
When I try to start a noVNC console I see this in
Hi,
Yedidyah Bar David wrote:
In any case (perhaps not relevant to you right now, if indeed engine-setup
succeeded), usually the engine vm is left running at the end of a failed
deploy. If it's still the local vm, you can find its IP address by searching
the ansible logs for local_vm_ip, then
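For reference, digging the address out of the logs can be as simple as this (a sketch; the log directory is the usual hosted-engine-setup location and may differ on your system):

```shell
# Grep the deploy logs for the local engine VM address.
# The path below is the usual hosted-engine-setup log directory;
# adjust LOGDIR if yours differs.
LOGDIR=${LOGDIR:-/var/log/ovirt-hosted-engine-setup}
find_local_vm_ip() {
    grep -rho 'local_vm_ip[^,}]*' "$LOGDIR" 2>/dev/null | tail -n 1
}
find_local_vm_ip
```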
You can't see it because it is not a task; tasks only run on the SPM. It
is a VM job, and the data about it is stored in the VM's XML; it's also
stored in the vm_jobs table.
You can see the status of the job in libvirt with `virsh blockjob <vm>
sda --info` (if it's still running)
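If you'd rather check the vm_jobs table, something like this should show the row on the engine machine (a sketch; it assumes the stock "engine" database and postgres user created by engine-setup):

```shell
# Show running VM jobs from the engine database.
# Assumes the default "engine" database that engine-setup creates;
# adjust the DB name/user if yours differ.
query_vm_jobs() {
    su - postgres -c 'psql engine -c "SELECT * FROM vm_jobs;"'
}
```

Run `query_vm_jobs` as root on the engine host.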
On Wed, May 27, 2020
On Wed, May 27, 2020 at 11:26 AM Gianluca Cecchi
wrote:
> On Wed, May 27, 2020 at 10:21 AM Amit Bawer wrote:
>
>> From your vdsm log, it seems to be an occurrence of an issue discussed at
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JE76KK2WFDCDJL3DF52OGOLGXWHWPEDA/
>>
>>
Hello,
Thank you for the reply.
Unfortunately I cant see the task on any on the hosts:
vdsm-client Task getInfo taskID=f694590a-1577-4dce-bf0c-3a8d74adf341
vdsm-client: Command Task.getInfo with args {'taskID':
'f694590a-1577-4dce-bf0c-3a8d74adf341'} failed:
(code=401, message=Task id unknown:
On Wed, May 27, 2020 at 12:37 PM Oliver Leinfelder
wrote:
>
> Hi there,
>
> > You should also see one or more ERROR messages, can you check/post them?
>
> There is one error message that immediately follows, if that helps:
>
> 2020-05-27 00:17:12,397+0200 ERROR otopi.context
>
Hi there,
You should also see one or more ERROR messages, can you check/post them?
There is one error message that immediately follows, if that helps:
2020-05-27 00:17:12,397+0200 ERROR otopi.context
context._executeMethod:154 Failed to execute stage 'Closing up': Failed
executing
On Wed, May 27, 2020 at 10:21 AM Amit Bawer wrote:
> From your vdsm log, it seems to be an occurrence of an issue discussed at
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JE76KK2WFDCDJL3DF52OGOLGXWHWPEDA/
>
> 2020-05-26 19:51:53,837-0400 ERROR (vm/90f09a7c) [virt.vm]
>
Hi,
I am still fighting with the oVirt 4.4 installation in a single-host HCI
configuration. It seems to be a hard fight... ;-)
It looks like there is no 4.4 HCI single-host installation guide, so I am
using a compilation of these sources
* https://www.ovirt.org/download/
*
Team,
I've been having issues: all my VMs can't be started; the error is "VM Has been
paused due to storage I/O error".
Any ideas are welcome, as all my VMs are down
Gluster log (gluster version 6.8):
[2020-05-27 09:10:28.132619] E [MSGID: 113040]
[posix-inode-fd-ops.c:1572:posix_readv]
From your vdsm log, it seems to be an occurrence of an issue discussed at
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JE76KK2WFDCDJL3DF52OGOLGXWHWPEDA/
2020-05-26 19:51:53,837-0400 ERROR (vm/90f09a7c) [virt.vm]
(vmId='90f09a7c-3af1-45a9-a210-78a9b0cd4c3d') The vm start process failed
Live merge (snapshot removal) runs on the host where the VM is
running; you can look for the job id
(f694590a-1577-4dce-bf0c-3a8d74adf341) on the relevant host
On Wed, May 27, 2020 at 9:02 AM David Sekne wrote:
>
> Hello,
>
> I'm running oVirt version 4.3.9.4-1.el7.
>
> After a failed live
On Wed, May 27, 2020 at 9:09 AM Oliver Leinfelder
wrote:
>
> Hi there,
>
> I'm a bit puzzled about the possible upgrade paths from a 4.3 cluster to
> version 4.4 in a self-hosted engine environment.
>
> My idea was:
>
> Set up a new host with a clean ovirt node 4.4 installation, then deploy the
I think the problem is with the storage connection. Verify the IP addresses on
the storage adapter port. Are they connected? After install, my storage ports
were deactivated by default. Try to connect your iSCSI manually with the
iscsiadm command and then check your storage connection in the storage tab
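Spelled out, the manual steps would look roughly like this (a sketch; 192.0.2.10 is a placeholder for your storage server's portal address):

```shell
# Manually discover the iSCSI targets on the portal, log in to them,
# then list the active sessions to confirm the connection.
# 192.0.2.10 is a placeholder; replace with your storage server.
PORTAL=${PORTAL:-192.0.2.10}
iscsi_reconnect() {
    iscsiadm -m discovery -t sendtargets -p "$PORTAL" &&
    iscsiadm -m node -p "$PORTAL" --login &&
    iscsiadm -m session
}
```

Run `iscsi_reconnect` on the affected host, then re-check the storage domain status in the storage tab.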
Hi there,
I'm a bit puzzled about the possible upgrade paths from a 4.3 cluster to
version 4.4 in a self-hosted engine environment.
My idea was:
Set up a new host with a clean ovirt node 4.4 installation, then deploy the
hosted engine on this with a restored backup from the production cluster
Hello,
I'm running oVirt version 4.3.9.4-1.el7.
After a failed live storage migration a VM got stuck with snapshot.
Checking the engine logs I can see that the snapshot removal task is
waiting for Merge to complete and vice versa.
2020-05-26 18:34:04,826+02 INFO