Hi,
since bug 1093366 is evidently blocking the Hosted Engine feature, it
should be added as a blocker for the oVirt 3.4.3 tracker (bug 1107968).
All the more so now that the proposed patches seem to have fixed the
problem (I've run at least 30 Hosted Engine migrations without errors).
Just to update everyone, I have the same problem with a 3-host setup and
have uploaded logs to BZ 1093366.
Joop
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
[…] I had this error: "Failed to acquire lock error -243", so I added
it in the reproduce steps.
If you know other steps to reproduce this error without blocking the
connection to storage, it would also be wonderful if you could provide
them.
Thanks

----- Original Message -----
From: "Andrew Lau"
To: "combuster"
Cc: "users"
Sent: Monday, June 9, 2014 3:47:00 AM
Subject: Re: [ovirt-users] VM HostedEngie is down. Exist message: internal
error Failed to acquire lock error -243
I just ran a few extra tests. I had a 2-host hosted-engine setup running
for a day; they both had a score of 2400. I migrated the VM through the
UI multiple times and it all worked fine. I then added the third host,
and that's when it all fell to pieces.
The other two hosts have a score of 0 now.
I'm also curious […]
Ignore that, the issue came back after 10 minutes.
I've even tried a gluster mount + nfs server on top of that, and the
same issue has come back.
On Fri, Jun 6, 2014 at 6:26 PM, Andrew Lau wrote:
> Interesting, I put it all into global maintenance. Shut it all down
> for ~10 minutes, and it's regained its sanlock control and doesn't
> seem to have that issue coming up in the log.
On Fri, Jun 6, 2014 at 4:21 PM, combuster wrote:
> It was pure NFS on a NAS device. They all had different ids […]
Is this related to the NFS server which gluster provides, or is it
because of the way gluster does replication?
There are a few posts, e.g.
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/, which
report success with gluster + hosted engine. So it'd be good to know,
so we could possib[…]
It was pure NFS on a NAS device. They all had different ids (no
redeployments of nodes before the problem occurred).
Thanks Jirka.
On 06/06/2014 08:19 AM, Jiri Moskovcak wrote:
> I've seen that problem in other threads, the common denominator was
> "nfs on top of gluster". […]
On 06/06/2014 08:03 AM, Andrew Lau wrote:
> Hi Ivan,
> Thanks for the in-depth reply.
> I've only seen this happen twice, and only after I added a third host
> to the HA cluster. I wonder if that's the root problem.
It shouldn't be, if the shared storage the VM resides on is accessible
by the third node […]
I've seen that problem in other threads; the common denominator was "nfs
on top of gluster". So if you have this setup, then it's a known
problem. Or you should double-check that your hosts have different ids;
otherwise they would be trying to acquire the same lock.
--Jirka
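Jirka's point about duplicate ids can be checked mechanically. A minimal
sketch (the host names and id values below are hypothetical; on a real
setup each host_id comes from that host's
/etc/ovirt-hosted-engine/hosted-engine.conf):

```python
# Flag hosted-engine hosts that share a host_id: two hosts with the
# same id end up contending for the same sanlock lease on the shared
# storage.
from collections import Counter

# host_id lines as collected from each host, e.g. via
#   grep ^host_id= /etc/ovirt-hosted-engine/hosted-engine.conf
collected = {
    "hv1": "host_id=1",
    "hv2": "host_id=2",
    "hv3": "host_id=2",  # clash: would fight hv2 for the same lease
}

ids = {host: line.split("=", 1)[1] for host, line in collected.items()}
duplicated = [i for i, n in Counter(ids.values()).items() if n > 1]
for i in duplicated:
    owners = sorted(h for h, v in ids.items() if v == i)
    print("host_id %s shared by %s" % (i, ", ".join(owners)))
```

With the sample values above this reports the hv2/hv3 clash; on a
healthy cluster it prints nothing.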
Hi Ivan,
Thanks for the in-depth reply.
I've only seen this happen twice, and only after I added a third host
to the HA cluster. I wonder if that's the root problem.
Have you seen this happen on all your installs, or only just after your
manual migration? It's a little frustrating that this is happening […]
Hi Andrew,
this is something that I saw in my logs too, first on one node and then
on the other three. When that happened on all four of them, the engine
was corrupted beyond repair.
First of all, I think that message is saying that sanlock can't get a
lock on the shared storage that you defined […]
Hi,
I'm seeing this weird message in my engine log:
2014-06-06 03:06:09,380 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-79) RefreshVmList vm id
85d4cfb9-f063-4c7c-a9f8-2b74f5f7afa5 status = WaitForLaunch on vds
ov-hv2-2a-08-23 ignoring it in the refresh […]
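Lines like the one above are easy to pull apart when scanning a large
engine.log for a particular VM. A small sketch (the regex is an
assumption based on the quoted line's layout, not the engine's own log
format specification):

```python
import re

# Extract the VM id, status, and host from a RefreshVmList engine.log
# line such as the one quoted above.
LINE = (
    "2014-06-06 03:06:09,380 INFO "
    "[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] "
    "(DefaultQuartzScheduler_Worker-79) RefreshVmList vm id "
    "85d4cfb9-f063-4c7c-a9f8-2b74f5f7afa5 status = WaitForLaunch "
    "on vds ov-hv2-2a-08-23"
)

PATTERN = re.compile(r"RefreshVmList vm id (\S+) status = (\S+) on vds (\S+)")

match = PATTERN.search(LINE)
if match:
    vm_id, status, host = match.groups()
    print(vm_id, status, host)
```

The same pattern can be applied per line over the whole log file to
track a VM's status transitions across hosts.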