The address is assigned by the engine; the hosted-engine tools only
consume it via the OVF mechanism, so an engine database dump might be
needed to investigate this. I have no idea how the address is
generated.
--
Martin Sivak
SLA / oVirt
On Tue, Oct 31, 2017 at 11:25 AM, Martin Sivak wrote:
Hi,
adding a second NIC is supported, but we should definitely investigate
why it has the same PCI slot address.
Martin
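[Editor's note] To see whether two NICs really collide on the same PCI slot, the device address strings from vm.conf or an engine DB dump can be compared directly. A minimal sketch, assuming the usual oVirt/libvirt "{key=value, ...}" address format (check the exact format in your own dump):

```python
import re

def parse_pci_address(addr):
    """Parse an oVirt/libvirt-style device address string such as
    '{type=pci, slot=0x03, bus=0x00, domain=0x0000, function=0x0}'
    into a dict. The exact format here is an assumption; verify it
    against your own vm.conf / engine database dump."""
    return dict(re.findall(r"(\w+)[=:]\s*(\w+)", addr))

# Two NIC addresses as they might appear in a dump (sample values):
a = parse_pci_address("{type=pci, slot=0x03, bus=0x00, domain=0x0000, function=0x0}")
b = parse_pci_address("{type=pci, slot=0x03, bus=0x00, domain=0x0000, function=0x0}")
print(a["slot"] == b["slot"])  # True -> the slots clash
```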
On Tue, Oct 31, 2017 at 10:56 AM, Hristo Pavlov wrote:
Simone, thanks a lot for the support!
Yep, after removing the additional network line and starting with a custom
vm.conf file, the hosted engine virtual machine runs successfully.
For now, the cluster will stay in Global Maintenance, because the other nodes
don't know about it.
At the network
On Mon, Oct 30, 2017 at 5:20 PM, Hristo Pavlov wrote:
This smells like a bug, we have to investigate
I had added the second ethernet card through the Administration Panel: edit the
HostedEngine virtual machine and add a new ethernet adapter. It didn't give me
any error.
>Monday, October 30, 2017, 18:13 +02:00, from Simone Tiraboschi:
On Mon, Oct 30, 2017 at 5:03 PM, Hristo Pavlov wrote:
Francesco,
I noticed something in file /var/run/ovirt-hosted-engine-ha/vm.conf
cpuType=Broadwell
emulatedMachine=pc-i440fx-rhel7.3.0
vmId=da98112d-b9fb-4098-93fa-1f1374b41e46
smp=2
memSize=6144
maxVCpus=16
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
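[Editor's note] The vm.conf snippet above is plain "key=value" text, so it is easy to diff across nodes when hunting for a bad device line. A minimal parser sketch (it ignores the multi-part device lines a real vm.conf also carries):

```python
def parse_vm_conf(text):
    """Parse hosted-engine vm.conf-style 'key=value' lines into a dict.
    Blank lines, comments, and lines without '=' are skipped."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")  # split at the first '='
        conf[key] = value
    return conf

sample = """\
cpuType=Broadwell
emulatedMachine=pc-i440fx-rhel7.3.0
smp=2
memSize=6144
"""
print(parse_vm_conf(sample)["memSize"])  # 6144
```

Parsing both nodes' copies and diffing the resulting dicts quickly shows which keys differ.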
On 10/30/2017 03:56 PM, Hristo Pavlov wrote:
Thank you all!
Francesco,
[root@alpha ~]# journalctl -u libvirtd
-- Logs begin at Sun 2017-10-29 09:39:58 EET, end at Mon 2017-10-30 16:35:36
EET. --
Oct 29 09:41:24 alpha.datamax.bg systemd[1]: Starting Virtualization daemon...
Oct 29 09:41:26 alpha.datamax.bg systemd[1]: Started
Hi,
anything in the journal about libvirt? (journalctl -u libvirtd)
could you share a bigger chunk of the vdsm log, demonstrating the failed
VM start?
Bests,
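[Editor's note] When the journal is large, it helps to export the libvirtd unit log once and then filter it for error lines. A sketch, with sample lines standing in for a real dump (the export command in the comment is the usual journalctl invocation):

```shell
# Assumed workflow: export the unit log first, e.g.
#   journalctl -u libvirtd --since "2017-10-30 15:00" > libvirtd.log
# Sample lines stand in for a real dump here:
cat > libvirtd.log <<'EOF'
Oct 30 16:35:00 alpha systemd[1]: Started Virtualization daemon.
Oct 30 16:35:10 alpha libvirtd[123]: internal error: process exited while connecting to monitor
EOF
# Keep only lines that look like failures:
grep -iE "error|fail" libvirtd.log
```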
On 10/30/2017 03:28 PM, Hristo Pavlov wrote:
I tried it already; it doesn't start on any of the nodes.
The log /var/log/libvirt/qemu/HostedEngine.log on all three nodes shows nothing
about the start, as if it had never been attempted.
>Monday, October 30, 2017, 16:11 +02:00, from Nathanaël Blanchet:
>
>Hi,
>It happened to
Hi,
you will probably need to check the libvirt and qemu logs to see why the
domain crashed.
My colleagues from the virt team will probably be able to point you to
more exact places.
Best regards
Martin Sivak
On Mon, Oct 30, 2017 at 12:28 PM, Hristo Pavlov wrote:
Hi All,
Our oVirt cluster has 3 nodes with shared Fibre Channel storage; the engine
virtual machine is self-hosted.
Hypervisor OS: CentOS Linux release 7.3 / x86_64; the oVirt version is 4.1.2.2.
The environment had been working for about a year without any problems.
After shutdown of