On 2020/9/17 12:58, Adam Xu wrote:
On 2020/9/16 15:53, Yedidyah Bar David wrote:
On Wed, Sep 16, 2020 at 10:46 AM Adam Xu wrote:
On 2020/9/16 15:12, Yedidyah Bar David wrote:
On Wed, Sep 16, 2020 at 6:10 AM Adam Xu wrote:
Hi ovirt
I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4.
It would be easier if you posted the whole xml.
What about the sections (in HE xml) starting with:
feature policy=
Also the hosts have a section which contains:
wrote:
HostedEngine:
..
Haswell-noTSX
..
both of the hosts:
..
Westmere
..
other VMs which can be
On 2020/9/16 15:53, Yedidyah Bar David wrote:
On Wed, Sep 16, 2020 at 10:46 AM Adam Xu wrote:
On 2020/9/16 15:12, Yedidyah Bar David wrote:
On Wed, Sep 16, 2020 at 6:10 AM Adam Xu wrote:
Hi ovirt
I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I followed
the steps in the
It would be easier if you posted the whole xml.
What about the sections (in HE xml) starting with:
feature policy=
Also the hosts have a section which contains:
On Thursday, September 17, 2020, 05:54:12 GMT+3, ddqlo wrote:
HostedEngine:
..
Haswell-noTSX
..
both of
It seems that this one fails:
- name: Parse server CPU list
  set_fact:
    server_cpu_dict: "{{ server_cpu_dict | combine({item.split(':')[1]: item.split(':')[3]}) }}"
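For reference, the Jinja expression in that task just builds a {CPU name: libvirt model} dict by splitting each server CPU entry on ':'. A minimal plain-Python sketch of the same logic; the sample entries below are made up for illustration, not taken from this thread:

```python
# Plain-Python equivalent of the set_fact above. Hypothetical sample records:
# ':'-separated, with the CPU name in field 1 and the libvirt model in field 3.
sample_cpu_list = [
    "3:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64",
    "4:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64",
]

server_cpu_dict = {}
for item in sample_cpu_list:
    fields = item.split(':')
    # The Ansible task fails much like this loop would (IndexError) if an
    # entry has fewer than four ':'-separated fields.
    server_cpu_dict[fields[1]] = fields[3]

# server_cpu_dict -> {'Intel Nehalem Family': 'Nehalem',
#                     'Intel Westmere Family': 'Westmere'}
```

Printing the raw list in a debug task right before the set_fact, as suggested above, shows which entry is malformed.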
In cases like that I usually define a new variable.
Can you put another task before that like:
- name: Debug
HostedEngine:
..
Haswell-noTSX
..
both of the hosts:
..
Westmere
..
other VMs which can be migrated:
..
Haswell-noTSX
..
On 2020-09-17 03:03:24, "Strahil Nikolov" wrote:
>Can you verify the HostedEngine's CPU ?
>
>1. ssh to the host hosting the HE
>2. alias
In my previous reply:
>> Ansible task reports them as Xeon 5130.
>> According to Intel Ark these fall in the Woodcrest family, which is
>> older than Nehalem.
Xeon 5130 "Woodcrest"
Do you need something more specific or different?
I also found a reply from you on an older thread and added
You didn't mention your CPU type.
Best Regards,
Strahil Nikolov
On Wednesday, September 16, 2020, 20:44:23 GMT+3, Michael Blanton wrote:
Wondering if there are any suggestions here before I wipe these nodes
and go back to another Hypervisor.
On 9/14/2020 12:59 PM, Michael
In the VM 'edit' settings you can pick the 'Host' tab on the left, specify
'Specific Host(s)' , define the migration mode (I'm using both Auto and Manual
as my cluster has the same CPU type) and last enable 'Pass-Through Host CPU'
and save the VM.
Then you can power it up and it should be good
Hello,
I have an Exchange Server VM that keeps going into suspend without possibility of
recovery. I need to click on shutdown and then power on. I can’t find anything
useful in the logs, except in “dmesg” on the host:
[47807.747606] *** Guest State ***
[47807.747633] CR0:
What is your VM's OS type?
There are some differences per OS version ->
https://www.redhat.com/sysadmin/dissecting-free-command
Best Regards,
Strahil Nikolov
On Wednesday, September 16, 2020, 11:13:51 GMT+3, KISHOR K wrote:
Hi,
Memory field/column for few of VMs in our ovirt
Can you verify the HostedEngine's CPU ?
1. ssh to the host hosting the HE
2. alias virsh='virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
3. virsh dumpxml HostedEngine
Then set the alias for virsh on all hosts and 'virsh capabilities' should show
the hosts' .
Best
Wondering if there are any suggestions here before I wipe these nodes
and go back to another Hypervisor.
On 9/14/2020 12:59 PM, Michael Blanton wrote:
Thanks for the quick response.
Ansible task reports them as Xeon 5130.
According to Intel Ark these fall in the Woodcrest family, which is
The VM has one snapshot which I can't delete because it shows a similar
error. That doesn't allow me to attach the disks to another VM. This VM
will boot ok if the disks are deactivated.
Find the engine.log attached.
The steps associated with the engine log:
- The VM is booted from CD
Hi,
I suggest using the REST API to do what you described, or the Python SDK.
have a nice day
On Tue, Sep 15, 2020 at 10:53 PM Green, Jacob Allen /C <
jacob.a.gr...@exxonmobil.com> wrote:
>I am looking for an automated way, via Ansible to move a VM
> disk from one storage domain
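Since the reply above suggests the Python SDK, here is a minimal sketch of that approach with ovirtsdk4. The engine URL, credentials, VM name, and target storage-domain name are all placeholders, not values from this thread:

```python
def move_vm_disks(conn, vm_name, target_sd_name):
    """Ask the engine to move every disk attached to vm_name to another
    storage domain; for a running VM this starts a live storage migration."""
    # ovirtsdk4 is imported lazily so this module stays importable without it.
    import ovirtsdk4.types as types
    vms_service = conn.system_service().vms_service()
    vm = vms_service.list(search='name=%s' % vm_name)[0]
    attachments = vms_service.vm_service(vm.id).disk_attachments_service().list()
    disks_service = conn.system_service().disks_service()
    for att in attachments:
        disks_service.disk_service(att.disk.id).move(
            storage_domain=types.StorageDomain(name=target_sd_name))

# Example usage (requires a reachable engine; all values are placeholders):
#   import ovirtsdk4 as sdk
#   conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
#                         username='admin@internal', password='...',
#                         ca_file='ca.pem')
#   move_vm_disks(conn, 'myvm', 'target-sd')
#   conn.close()
```

The ovirt.ovirt Ansible collection exposes disk management as well, if a playbook is preferred over a script.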
OK, will try on our env with passthrough. Could you please send how you pass
through the CPU? Simply via the oVirt GUI?
Rav Ya wrote on Wed., Sep 16, 2020, 00:56:
>
> Hi Arman,
>
> Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
>
> *The VM is configured for host CPU pass through and pinned to 6
Hi,
can you please attach the engine log? What steps did you take before
this error was shown? Did you try to create a snapshot that failed before
On Wed, Sep 16, 2020 at 7:49 AM Strahil Nikolov via Users
wrote:
> What happens if you create another VM and attach the disks to it ?
> Does it
Hello,
We have many cases of failed migrations, and reducing the load on the
respective VMs made migration possible. Using "Suspend workload when needed"
did not help either with the migrations, which only worked when stopping
services on the VMs, thus reducing load.
I was therefore trying to
On Tue, Sep 15, 2020 at 5:02 PM Ravin Ya wrote:
> Hello Everyone,
>
> Please advise. Any help will be highly appreciated. Thank you in advance.
>
> Test Setup:
> oVirt CentOS 7.8 Virtualization Host
> Guest VM CentOS 7.8 (Multiqueue enabled, 6 vCPUs with 6 Rx/Tx Queues)
> The vCPUs are configured
On Tue, Sep 15, 2020 at 10:01 AM wrote:
> Hello,
>
> I set cpu QoS as 10 and applied it to VM on oVirt 4.2, but it doesn't seem
> to work.
> compared to VM without QoS, there wasn't any difference in cpu usage.
> Also, there wasn't any ... or ... field
> related to QoS in libvirt file.
>
> is it
On Tue, Sep 15, 2020 at 6:53 PM Konstantinos Betsis
wrote:
> So a new test-net was created under DC01 and was depicted in the networks
> tab under both DC01 and DC02.
> I believe for some reason networks are duplicated in DCs, maybe for future
> use??? Don't know.
> If one tries to delete the
Hi,
Memory field/column for few of VMs in our ovirt (Compute -> Virtual Machines ->
Memory column) shows more than 90%.
But, when I checked (from free and also other commands) the actual "used"
memory by those VMs, it is less than 60%. What I see (from free -h) is that
ovirt seems to be
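In my experience, a gap like that between two "memory used" figures usually comes down to whether reclaimable page cache counts as used: one definition counts everything that is not free, the other discounts cache the way `free`'s "available" column does. A small Python sketch with made-up /proc/meminfo values (not numbers from this thread):

```python
# Hypothetical /proc/meminfo excerpt, values in kB, chosen for illustration.
SAMPLE_MEMINFO = """\
MemTotal:        8167848 kB
MemFree:          412340 kB
MemAvailable:    3511124 kB
Buffers:          210032 kB
Cached:          2988512 kB
"""

def parse_meminfo(text):
    """Return a dict of meminfo field name -> value in kB."""
    info = {}
    for line in text.splitlines():
        key, rest = line.split(':', 1)
        info[key] = int(rest.strip().split()[0])
    return info

def used_percentages(info):
    total = info['MemTotal']
    # Counting everything that is not free as used (cache included) --
    # the kind of figure a monitoring column may show:
    gross = 100.0 * (total - info['MemFree']) / total
    # Discounting reclaimable memory, closer to free's "available" view:
    net = 100.0 * (total - info['MemAvailable']) / total
    return gross, net

gross, net = used_percentages(parse_meminfo(SAMPLE_MEMINFO))
# gross -> roughly 95%, net -> roughly 57% for this sample
```

The same guest can thus legitimately read "over 90% used" by one definition and "under 60%" by the other.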
On Wed, Sep 16, 2020 at 10:46 AM Adam Xu wrote:
>
> On 2020/9/16 15:12, Yedidyah Bar David wrote:
> > On Wed, Sep 16, 2020 at 6:10 AM Adam Xu wrote:
> >> Hi ovirt
> >>
> >> I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I
> >> followed the steps in the document:
> >>
> >>
Thank you Didi,
We'll look for an alternative, maybe even migrate to the more recent,
supported version.
That would be the best option IMO.
Best regards,
-rodri
On 9/16/20 9:39 AM, Yedidyah Bar David wrote:
Hi!
On Wed, Sep 16, 2020 at 10:25 AM Rodrigo G. López wrote:
Hello,
Any
My gateway was not pingable. I have fixed this problem and now both nodes have
a score (3400).
Yet, the hosted engine could not be migrated. Same log in engine.log:
host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'
On 2020-09-16 02:11:09, "Strahil Nikolov" wrote:
>Both nodes have a
On Wed, Sep 16, 2020 at 09:24 Rodrigo G. López <
r.gonza...@telfy.com> wrote:
> Hello,
>
> Any idea about this problem? I don't know if the email got through to the
> list.
>
> Should I join the #vdsm channel and discuss it there? Is there any other
> place specific to vdsm where
On 2020/9/16 15:12, Yedidyah Bar David wrote:
On Wed, Sep 16, 2020 at 6:10 AM Adam Xu wrote:
Hi ovirt
I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I followed
the steps in the document:
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
the old 4.3
Hi!
On Wed, Sep 16, 2020 at 10:25 AM Rodrigo G. López wrote:
>
> Hello,
>
> Any idea about this problem? I don't know if the email got through to the
> list.
It did
>
> Should I join the #vdsm channel and discuss it there? Is there any other
> place specific to vdsm where I could report
Hello,
Any idea about this problem? I don't know if the email got through to
the list.
Should I join the #vdsm channel and discuss it there? Is there any other
place specific to vdsm where I could report this?
Cheers,
-rodri
On 9/15/20 9:55 AM, Rodrigo G. López wrote:
Hi there,
We
Hello,
network-scripts for host networking has been deprecated since oVirt 4.4 and
will be removed completely in the 4.4.3 release. There is no action required
for setups that did not change the configuration to use network-scripts
backend (net_nmstate_enabled = false).
Users that did disable
On Wed, Sep 16, 2020 at 6:10 AM Adam Xu wrote:
>
> Hi ovirt
>
> I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I
> followed the steps in the document:
>
> https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
>
> the old 4.3 env has a FC storage as engine