Hi Denis,
>
> That sounds really strange. I would suspect some storage problems or
> something. As I told you earlier, the output of --vm-status may shed
> light on that issue.
Unfortunately, I can't replicate it at the moment due to the need to
keep the VMs up.
>
> Did you try to migrate from
Hello!
On Fri, Jun 30, 2017 at 5:46 PM, cmc wrote:
> I ran 'hosted-engine --vm-start' after trying to ping the engine and
> running 'hosted-engine --vm-status' (which said it wasn't running) and
> it reported that it was 'destroying storage' and starting the engine,
> though it did not start it.
I ran 'hosted-engine --vm-start' after trying to ping the engine and
running 'hosted-engine --vm-status' (which said it wasn't running) and
it reported that it was 'destroying storage' and starting the engine,
though it did not start it. I could not see any evidence from
'hosted-engine --vm-status'
Hello!
On Fri, Jun 30, 2017 at 4:19 PM, cmc wrote:
> Help! I put the cluster into global maintenance, then powered off and
> powered back on all of the nodes. I have taken it out of global
> maintenance. No VM has started, including the hosted engine. This is
> very bad.
I've had no other choice but to power up the old bare metal engine to
be able to start the VMs. This is probably really bad but I had to get
the VMs running.
I am guessing now that if a host is shut down rather than simply
rebooted, the VMs will not restart on power-up of the host. This
would
Help! I put the cluster into global maintenance, then powered off and
powered back on all of the nodes. I have taken it out of global
maintenance. No VM has started, including the hosted engine. This is
very bad. I am going to look through logs to see why not
So I can run from any node: hosted-engine --set-maintenance
--mode=global. By 'agents', you mean the ovirt-ha-agent, right? This
shouldn't affect the running of any VMs, correct? Sorry for the
questions, just want to do it correctly and not make assumptions :)
Cheers,
C
On Fri, Jun 30, 2017 at 1
Hi,
> Just to clarify: you mean the host_id in
> /etc/ovirt-hosted-engine/hosted-engine.conf should match the spm_id,
> correct?
Exactly.
Put the cluster into global maintenance first, or kill all the agents
(which has the same effect).
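For reference, a rough sketch of both options (assuming the systemd
service names used elsewhere in this thread):

# Option 1: enter global maintenance once, from any hosted engine host
hosted-engine --set-maintenance --mode=global

# Option 2: stop the HA agent on every hosted engine host instead
systemctl stop ovirt-ha-agent

# when done, leave global maintenance again
hosted-engine --set-maintenance --mode=none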
Martin
On Fri, Jun 30, 2017 at 12:47 PM, cmc wrote:
> Just to clarify: you mean the host_id in
> /etc/ovirt-hosted-engine/hosted-engine.conf should match the spm_id,
> correct?
Just to clarify: you mean the host_id in
/etc/ovirt-hosted-engine/hosted-engine.conf should match the spm_id,
correct?
On Fri, Jun 30, 2017 at 9:47 AM, Martin Sivak wrote:
> Hi,
>
> cleaning metadata won't help in this case. Try transferring the
> spm_ids you got from the engine to the proper hosted engine hosts so
> the hosted engine ids match the spm_ids.
Ok, Thanks Martin. It should be feasible to get all VMs onto one host,
so I can do that (unless you recommend just shutting the entire
cluster down at once?). For the engine, I'll shut it down since it
won't migrate to another host, before shutting that host down.
Will let you know how it goes.
T
Hi,
cleaning metadata won't help in this case. Try transferring the
spm_ids you got from the engine to the proper hosted engine hosts so
the hosted engine ids match the spm_ids. Then restart all hosted
engine services. I would actually recommend restarting all hosts after
this change, but I have n
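A rough sketch of that per-host change (using the conf file path
discussed elsewhere in this thread; the host_id to set is that host's
spm_id as reported by the engine):

# on each hosted engine host
grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf
# edit host_id=<this host's spm_id>, then restart the HA services
systemctl restart ovirt-ha-agent ovirt-ha-broker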
Tried running 'hosted-engine --clean-metadata' as per
https://bugzilla.redhat.com/show_bug.cgi?id=1350539, since
ovirt-ha-agent was not running anyway, but it fails with the following
error:
ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Failed
to start monitoring domain
(sd_uuid=
Hi Denis,
I ran the query as you suggested, just by starting at spm_id=1 and on
up to 3 (the number of hosts I have), and it identified a different
host for each spm_id, indicating that they are indeed unique, so this
looks good.
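Denis's query itself is not quoted in this thread. One way to list the
host/spm_id mapping, assuming direct access to the engine database and
the vds_spm_id column oVirt keeps per host (names from memory, verify
against your schema), would be something like:

# run on the engine VM
sudo -u postgres psql engine -c 'SELECT vds_name, vds_spm_id FROM vds_static ORDER BY vds_spm_id;'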
Regards,
Cam
On Thu, Jun 29, 2017 at 2:07 PM, Denis Chaplygin wr
Actually, it looks like sanlock problems:
"SanlockInitializationError: Failed to initialize sanlock, the
number of errors has exceeded the limit"
On Thu, Jun 29, 2017 at 5:10 PM, cmc wrote:
> Sorry, I am mistaken, two hosts failed for the agent with the following error:
>
> ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine
Sorry, I am mistaken, two hosts failed for the agent with the following error:
ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine
ERROR Failed to start monitoring domain
(sd_uuid=207221b2-959b-426b-b945-18e1adfed62f, host_id=1): timeout
during domain acquisition
ovirt-ha-agent
Both services are up on all three hosts. The broker logs just report:
Thread-6549::INFO::2017-06-29
17:01:51,481::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-6549::INFO::2017-06-29
17:01:51,483::listener::186::ovirt_hosted_engine_ha
Hi,
please make sure that both ovirt-ha-agent and ovirt-ha-broker services
are restarted and up. The error says the agent can't talk to the
broker. Is there anything in the broker.log?
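A quick way to check both services and watch the broker log (assuming
the default ovirt-hosted-engine-ha log location):

systemctl status ovirt-ha-agent ovirt-ha-broker
tail -f /var/log/ovirt-hosted-engine-ha/broker.log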
Best regards
Martin Sivak
On Thu, Jun 29, 2017 at 4:42 PM, cmc wrote:
> I've restarted those two services across all hosts
I've restarted those two services across all hosts, have taken the
Hosted Engine host out of maintenance, and when I try to migrate the
Hosted Engine over to another host, it reports that all three hosts
'did not satisfy internal filter HA because it is not a Hosted Engine
host'.
On the host that
Hello!
On Thu, Jun 29, 2017 at 1:22 PM, Martin Sivak wrote:
> Change the ids so they are distinct. I need to check if there is a way
> to read the SPM ids from the engine as using the same numbers would be
> the best.
>
Host (SPM) ids are not shown in the UI, but you can search on them by typing
Hi,
yep, you have to restart the ovirt-ha-agent and ovirt-ha-broker services.
The scheduling message just means that the host has score 0 or is not
reporting score at all.
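The score itself is visible per host in the status output:

hosted-engine --vm-status
# check each host's "Score" field; 0 means the host is currently
# ineligible to run the hosted engine VM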
Martin
On Thu, Jun 29, 2017 at 1:33 PM, cmc wrote:
> Thanks Martin, do I have to restart anything? When I try to use the
>
Thanks Martin, do I have to restart anything? When I try to use the
'migrate' operation, it complains that the other two hosts 'did not
satisfy internal filter HA because it is not a Hosted Engine host..'
(even though I reinstalled both these hosts with the 'deploy hosted
engine' option, which sugg
Change the ids so they are distinct. I need to check if there is a way
to read the SPM ids from the engine, as reusing the same numbers would
be best.
Martin
On Thu, Jun 29, 2017 at 12:46 PM, cmc wrote:
> Is there any way of recovering from this situation? I'd prefer to fix
> the issue rather than re-deploy
Is there any way of recovering from this situation? I'd prefer to fix
the issue rather than re-deploy, but if there is no recovery path, I
could perhaps try re-deploying the hosted engine. In which case, would
the best option be to take a backup of the Hosted Engine, and then
shut it down, re-initi
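The backup step mentioned there is normally done on the engine VM with
the engine-backup tool; a minimal sketch:

engine-backup --mode=backup --file=engine-backup.tar --log=engine-backup.log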
Hi Martin,
yes, on two of the machines they have the same host_id. The other has
a different host_id.
To update since yesterday: I reinstalled and deployed Hosted Engine on
the other host (so all three hosts in the cluster now have it
installed). The second one I deployed said it was able to host
Hi,
can you please check the contents of
/etc/ovirt-hosted-engine/hosted-engine.conf or
/etc/ovirt-hosted-engine-ha/agent.conf (I am not sure which one it is
right now) and search for host-id?
Make sure the IDs are different. If they are not, then there is a bug somewhere.
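A quick cross-host check (host names here are placeholders; on this
setup the id lives in hosted-engine.conf):

for h in host1 host2 host3; do ssh "$h" grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf; done
# each host should print a distinct host_id=N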
Martin
On Tue, Jun 27
On the host that has the Hosted Engine VM, the sanlock.log reports:
2017-06-27 17:30:20+0100 1043742 [7307]: add_lockspace
207221b2-959b-426b-b945-18e1adfed62f:3:/dev/207221b2-959b-426b-b945-18e1adfed62f/ids:0
conflicts with name of list1 s5
207221b2-959b-426b-b945-18e1adfed62f:1:/dev/207221b2-959
I see this on the host it is trying to migrate in /var/log/sanlock:
2017-06-27 17:10:40+0100 527703 [2407]: s3528 lockspace
207221b2-959b-426b-b945-18e1adfed62f:1:/dev/207221b2-959b-426b-b945-18e1adfed62f/ids:0
2017-06-27 17:13:00+0100 527843 [27446]: s3528 delta_acquire host_id 1
busy1 1 2 104269
Hi Martin,
Thanks for the reply. I have done this, and the deployment completed
without error. However, it still will not allow the Hosted Engine
migrate to another host. The
/etc/ovirt-hosted-engine/hosted-engine.conf got created ok on the host
I re-installed, but the ovirt-ha-broker.service, tho
> Should it be? It was not in the instructions for the migration from
> bare-metal to Hosted VM
The hosted engine will only migrate to hosts that have the services
running. Please put one other host to maintenance and select Hosted
engine action: DEPLOY in the reinstall dialog.
Best regards
Mart
I changed the 'os.other.devices.display.protocols.value.3.6 =
spice/qxl,vnc/cirrus,vnc/qxl' line to have the same display protocols
as the 4.x value, and the hosted engine now appears in the list of
VMs. I am guessing the compatibility version was causing it to use the
3.6 value. However, I am still unable to
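For the archive: instead of editing osinfo-defaults.properties in
place, oVirt also picks up override files from
/etc/ovirt-engine/osinfo.conf.d/, which survive upgrades. A sketch
(the file name is arbitrary apart from the numeric prefix):

# /etc/ovirt-engine/osinfo.conf.d/99-display-protocols.properties
os.other.devices.display.protocols.value.3.6 = spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus

# then restart the engine to apply it
systemctl restart ovirt-engine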
Hi Tomas,
So in my /usr/share/ovirt-engine/conf/osinfo-defaults.properties on my
engine VM, I have:
os.other.devices.display.protocols.value = spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
os.other.devices.display.protocols.value.3.6 = spice/qxl,vnc/cirrus,vnc/qxl
That seems to match - I assume since thi
On Thu, Jun 22, 2017 at 12:38 PM, Michal Skrivanek
<michal.skriva...@redhat.com> wrote:
>
> > On 22 Jun 2017, at 12:31, Martin Sivak wrote:
> >
> > Tomas, what fields are needed in a VM to pass the check that causes
> > the following error?
> >
> > WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> On 22 Jun 2017, at 12:31, Martin Sivak wrote:
>
> Tomas, what fields are needed in a VM to pass the check that causes
> the following error?
>
> WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> (org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm'
Tomas, what fields are needed in a VM to pass the check that causes
the following error?
WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm'
failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
,V
Hi Martin,
>
> just as a random comment, do you still have the database backup from
> the bare metal -> VM attempt? It might be possible to just try again
> using it. Or in the worst case.. update the offending value there
> before restoring it to the new engine instance.
I still have the backup.
Hi,
just as a random comment, do you still have the database backup from
the bare metal -> VM attempt? It might be possible to just try again
using it. Or in the worst case.. update the offending value there
before restoring it to the new engine instance.
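The restore side, for reference (a sketch with engine-backup; check
its help for the options your version supports):

engine-backup --mode=restore --file=engine-backup.tar --log=engine-restore.log --provision-db --restore-permissions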
Regards
Martin Sivak
On Thu, Jun 22, 20
Hi Yanir,
Thanks for the reply.
> First of all, maybe a chain reaction of :
> WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> (org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm'
> failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
> ,VAR__TYPE__VM,ACTION_TYPE
Hi,
First of all, maybe a chain reaction of :
WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm'
failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_S
Hi Jenny/Martin,
Any idea what I can do here? The hosted engine VM has no log on any
host in /var/log/libvirt/qemu, and I fear that if I need to put the
host I created it on (which I think is hosting it) into maintenance,
e.g., to upgrade it, or if it fails for any reason, it won't get
migrat
Thanks Martin. The hosts are all part of the same cluster.
I get these errors in the engine.log on the engine:
2017-06-19 03:28:05,030Z WARN
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm'
failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
Hi,
you do not have to install it on all hosts. But you should have more
than one and ideally all hosted engine enabled nodes should belong to
the same engine cluster.
Best regards
Martin Sivak
On Wed, Jun 21, 2017 at 11:29 AM, cmc wrote:
> Hi Jenny,
>
> Does ovirt-hosted-engine-ha need to be installed across all hosts?
Hi Jenny,
Does ovirt-hosted-engine-ha need to be installed across all hosts?
Could that be the reason it is failing to see it properly?
Thanks,
Cam
On Mon, Jun 19, 2017 at 1:27 PM, cmc wrote:
> Hi Jenny,
>
> Logs are attached. I can see errors in there, but am unsure how they arose.
>
> Thanks
From the output it looks like the agent is down; try starting it by
running: systemctl start ovirt-ha-agent.
The engine is supposed to see the hosted engine storage domain and import
it to the system, then it should import the hosted engine vm.
Can you attach the agent log from the host
(/var/lo
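The agent log normally lives under /var/log/ovirt-hosted-engine-ha/;
to start the agent and watch it come up:

systemctl start ovirt-ha-agent
tail -f /var/log/ovirt-hosted-engine-ha/agent.log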
Hi Jenny,
> What version are you running?
4.1.2.2-1.el7.centos
> For the hosted engine vm to be imported and displayed in the engine, you
> must first create a master storage domain.
To provide a bit more detail: this was a migration of a bare-metal
engine in an existing cluster to a hosted en
Hi,
What version are you running?
For the hosted engine vm to be imported and displayed in the engine, you
must first create a master storage domain.
What do you mean the hosted engine commands are failing? What happens when
you run hosted-engine --vm-status now?
Jenny Tokar
On Thu, Jun 15, 2
Hi,
I've migrated from a bare-metal engine to a hosted engine. There were
no errors during the install; however, the hosted engine did not get
started. I tried running:
hosted-engine --status
on the host I deployed it on, and it returns nothing (exit code is 1
however). I could not ping it eithe
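For readers hitting the same symptom, the triage that emerges over the
rest of this thread boils down to roughly:

hosted-engine --vm-status        # per-host state and score
systemctl status ovirt-ha-agent ovirt-ha-broker
ping <engine-fqdn>               # is the engine VM reachable at all?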