Hi,
Recently I have had 2 oVirt hosts (oVirt 4.2, CentOS 7.5) crash
unexpectedly (not at the same time). Both seem hardware related.
In both cases oVirt detected the host as non-responsive, fenced it, and
set the VMs that were running on it at the time to "Down".
On Fri, Aug 24, 2018 at 8:16 AM Sandro Bonazzola
wrote:
> Hi,
> just to let you know that Simone Tiraboschi will present "oVirt: powerful
> opensource virtualization" in Bergamo (Italy) on September 8th 2018 within
> Download Innovation IT & Conference event (https://download-event.io/en/)
Hi,
just to let you know that Simone Tiraboschi will present "oVirt: powerful
opensource virtualization" in Bergamo (Italy) on September 8th 2018 within
Download Innovation IT & Conference event (https://download-event.io/en/)
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat
Hi,
On Fri, Aug 24, 2018, 10:25 Eduardo Mayoral wrote:
>
> Hi,
>
> Recently I have had 2 oVirt hosts (oVirt 4.2, CentOS 7.5) crash
> unexpectedly (not at the same time). Both seem hardware related.
>
> In both cases oVirt did detect the host as non responsive, did a
> fence on the hosts
Hello group!
I need to upgrade librbd1 and librados2 for an oVirt 4.2.x cluster.
The cluster was installed via node ng.
Using the repository for Ceph Mimic or Luminous ends in a dependency
problem, because liburcu-cds.so.1 is already installed in a more recent version
provided by
Thanks for your fast answer, Alex.
Will flag the VMs as HA.
Indeed, I was not interpreting the "Resiliency policy" setting correctly.
For the record, I found it very well explained here:
https://lists.ovirt.org/pipermail/users/2015-March/031896.html
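(For anyone scripting this: the HA flag can also be set through the oVirt
REST API. A sketch, assuming API access; the VM id in the URL and the
priority value are placeholders:)

```xml
<!-- PUT /ovirt-engine/api/vms/{vm-id} -->
<vm>
  <high_availability>
    <enabled>true</enabled>
    <priority>1</priority>
  </high_availability>
</vm>
```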
Best regards,
On 24/08/18 10:06, Alex K
Hi,
I was not aware of this event; I'll try to be there, then!
Simon
> On Aug 24, 2018, at 8:14 AM, Sandro Bonazzola wrote:
>
> Hi,
> just to let you know that Simone Tiraboschi will present "oVirt: powerful
> opensource virtualization" in Bergamo (Italy) on September 8th 2018 within
>
Hi,
Thanks for the answer.
I've isolated the issue to the first SP (storage processor) on my VNXe; on the
other SP, I don't have the issue.
The difference between the two SPs is load average / CPU usage. I think my NAS
is too heavily used and the issue is caused by that load.
There is no network bottleneck.
Sorry, I mean "migration network" for moving live migration traffic.
FDR InfiniBand is much faster than the 1Gb network, which currently acts as
the migration network, VM network, display network, management network, etc.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Hi Simone,
it worked - I can access the server via SSH again to solve the original
problem (which is an httpd problem). Concerning the add-console-password
problem: It sounds weird to me, too.
For others: In my case the console device was there, it just did not
have any ID or address:
Hello Simone,
thanks for your reply.
hosted-engine --vm-shutdown --vm-conf=/root/my_vm.conf
I came across that before but the syntax of this file is nebulous to me
as it looks like some kind of JSON?! How do I add the serial console
there? What's the syntax?
As an alternative, VNC console
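(Side note, hedged: vm.conf is not JSON; it is a flat key=value file in which
each devices= line holds one brace-delimited device description. A console
device line looks roughly like the sketch below — the deviceId is a
placeholder UUID you would generate yourself, e.g. with uuidgen:)

```
devices={device:console,specParams:{},type:console,deviceId:00000000-0000-0000-0000-000000000000,alias:console0}
```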
Hi
How do I turn off hosted engine alerts? We are in a testing phase so these
are not needed. I have disabled postfix on all hosts as well as stopped
the ovirt notification daemon on the hosted engine. I kept it running
while putting /dev/null in
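(A hedged sketch of one way to silence the mails: the hosted-engine notifier
reads its mail settings from broker.conf, and on 4.2 the effective copy lives
in the shared configuration. The key names below are assumptions to verify
against your version; emptying destination-emails should stop the alerts:)

```
[email]
smtp-server = localhost
smtp-port = 25
source-email = root@localhost
destination-emails =
```

On 4.2 this is typically changed with
hosted-engine --set-shared-config destination-emails "" --type=broker
followed by a restart of the ovirt-ha-broker service; again, treat the exact
key name and type as assumptions.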
On Fri, Aug 24, 2018 at 3:06 AM Douglas Duckworth
wrote:
> Rebooted the hosted engine VM
>
> Migration now works
>
> Guess that's needed after changing cluster CPU type
>
Yes, exactly: we already have an open bug about that:
https://bugzilla.redhat.com/show_bug.cgi?id=1585986
>
> Thanks,
>
>
Thanks!
No big deal as it's now working.
On Fri, Aug 24, 2018, 9:04 AM Simone Tiraboschi wrote:
>
>
> On Fri, Aug 24, 2018 at 3:06 AM Douglas Duckworth
> wrote:
>
>> Rebooted the hosted engine VM
>>
>> Migration now works
>>
>> Guess that's needed after changing cluster CPU type
>>
>
> Yes,
On Fri, Aug 24, 2018 at 3:00 PM Daniel Menzel <
daniel.men...@hhi.fraunhofer.de> wrote:
> Hello Simone,
> thanks for your reply.
>
> > hosted-engine --vm-shutdown --vm-conf=/root/my_vm.conf
>
> I came across that before but the syntax of this file is nebulous to me
> as it looks like some kind of
On Thu, 23 Aug 2018 13:51:39 -0400
Douglas Duckworth wrote:
> THANKS!
>
> ib0 now up with NFS storage back on this hypervisor
>
Thanks for letting us know.
> Though how do I make it a transfer network? I don't see an option.
>
I do not understand the meaning of "transfer network".
The
On Fri, 24 Aug 2018 09:46:25 -0400
Douglas Duckworth wrote:
> Sorry, I mean "migration network" for moving live migration traffic.
>
You have to create a new logical network in
"Network > Networks > New"
and assign this to ib0 in
"Compute > Hosts > hostname > Network Interfaces > Setup Host Networks".
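(The same network can be created via the REST API; a sketch, with placeholder
names:)

```xml
<!-- POST /ovirt-engine/api/networks -->
<network>
  <name>migration</name>
  <data_center>
    <name>Default</name>
  </data_center>
</network>
```

After creating it, mark it as the migration network for the cluster
(cluster > Logical Networks > Manage Networks) and then attach it to ib0 in
Setup Host Networks; the data-center name "Default" is a placeholder.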
> On 24 Aug 2018, at 14:16, Brendan Holmes wrote:
>
> Hi,
>
> Sorry if I overlooked, but was a bug raised for this that I can track?
AFAIK no. Please file one, and try to describe the steps you took in as
much detail as possible.
And include the install logs.
Thanks,
michal
> I should have
I was playing around with a 3 node setup.
This was on 4.3 based on FC28.
In the Cockpit UI, after completing the 1st stage of the hyperconverged setup,
I saw this (see pic)
On Aug 23 2018, at 9:56 pm, Gianluca Cecchi wrote:
>
> On Fri, Aug 24, 2018 at 6:50 AM Sahina Bose
Bug created (including hosted engine setup logs):
https://bugzilla.redhat.com/show_bug.cgi?id=1622240
Many thanks Michal.
-----Original Message-----
From: Michal Skrivanek
> On 24 Aug 2018, at 14:16, Brendan Holmes wrote:
>
> Hi,
>
> Sorry if I overlooked, but was a bug raised for this that I
Hi all,
we cannot access our hosted engine anymore. It started with an overfull
/var due to a growing database. We accessed the engine via SSH and tried
to fix that, but somehow we seem to have created another problem on
the SSH server itself, so unfortunately we cannot log in anymore.
On Fri, Aug 24, 2018 at 2:04 PM Daniel Menzel <
daniel.men...@hhi.fraunhofer.de> wrote:
> We then tried to access it via its host and a "hosted-engine --console"
> but ran into an
>
> internal error: cannot find character device
>
> which I know from KVM. With other VMs I could follow the
Hi,
Sorry if I overlooked, but was a bug raised for this that I can track? I
should have included that this problem is occurring on v4.2.6 second release
candidate. I use oVirt Node.
Many thanks,
Brendan
-----Original Message-----
From: Michal Skrivanek
> On 19 Aug 2018, at 20:47,
I found a way.
[root@node02 yum.repos.d]# cat luminious.repo
[ovirt-4.2-centos-ceph-luminous]
name = CentOS-7 - ceph luminous
baseurl = http://mirror.centos.org/centos/7/storage/$basearch/ceph-luminous/
enabled = 1
gpgcheck = 1
gpgkey =