I am currently in the process of researching converting an existing SMB
infra to virtual. Ovirt/RHEV is a strong contender and checks off a lot
of boxes on our list. GlusterFS is appealing but I am finding it very
difficult to find any answers or stats/numbers regarding how well it can
perform a
That's right.
I am going to collect the data and report back.
Regards,
--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
On Mon, Oct 16, 2017, at 10:00 AM, Yaniv Kaul wrote:
>
>
> On Mon, Oct 16, 2017 at 5:21 PM, Fernando Fuentes
> wrote:
>> Any ideas team?
>> :(
>
>
On Mon, Oct 16, 2017 at 3:54 PM, Piotr Kliczewski
wrote:
> On behalf of oVirt and the Xen Project, we are excited to announce that the
> call for proposals is now open for the Virtualization & IaaS devroom at the
> upcoming FOSDEM 2018, to be hosted on February 3 and 4, 2018.
>
> This year will m
On Mon, Oct 16, 2017 at 4:51 PM, Erekle Magradze
wrote:
> That's the problem, at that time nobody has restarted the server.
Please provide the engine log from this time so we can see whether it
was triggered by it.
>
> Is there any scenario when the hypervisor is restarted by engine?
>
> Cheers
>
>
On Mon, Oct 16, 2017 at 5:21 PM, Fernando Fuentes
wrote:
> Any ideas team?
> :(
>
I suspect if you've applied the workaround for libvirt authentication
change and things still don't work, we'll need to see the relevant logs to
further understand the issue.
Y.
>
> --
> Fernando Fuentes
> ffuen.
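For reference, the logs usually requested in threads like this live in standard locations on a default oVirt 4.x install; a hedged sketch (the excerpt sizes and time window are arbitrary examples, adjust to your setup):

```shell
# On the engine machine:
tail -n 500 /var/log/ovirt-engine/engine.log > engine-excerpt.log

# On the affected hypervisor:
tail -n 500 /var/log/vdsm/vdsm.log > vdsm-excerpt.log
journalctl -u libvirtd --since "2017-10-16 09:00" > libvirtd-excerpt.log
```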
That's the problem, at that time nobody has restarted the server.
Is there any scenario when the hypervisor is restarted by engine?
Cheers
Erekle
On 10/16/2017 04:45 PM, Piotr Kliczewski wrote:
Erekle,
For the time period you mentioned I do not see anything wrong on the vdsm
side except for a re
Erekle,
For the time period you mentioned I do not see anything wrong on the vdsm
side except for a restart at 2017-10-15 16:28:50,993+0200. It looks
like a manual restart.
The engine log starts at 2017-10-16 03:49:04,092+02, so I am not able to say
whether there was anything else besides the heartbeat issue cau
On Sun, Oct 15, 2017 at 7:25 AM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:
> Hi,
>
> I tried to add a new interface to the hosted engine via the GUI.
> It got added, but the network was not accessible from the hosted engine
> (ping was failing).
>
> So I found an old mail thread about adding "Multiple Nic on h
Hi Piotr,
Several times I've restarted the vdsm daemon on certain nodes; that could be
the reason.
The failure I mentioned happened yesterday from 15:00 to 17:00.
Cheers
Erekle
On 10/16/2017 04:13 PM, Piotr Kliczewski wrote:
Erekle,
In the logs you provided I see:
IOError: [Errno 5]
Any ideas team?
:(
--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
On Fri, Oct 13, 2017, at 11:47 AM, Fernando Fuentes wrote:
> Team,
>
> I think I am hitting this bug:
>
> https://gerrit.ovirt.org/#/c/76934/
>
> With that fix libvirtd starts but oVirt still won't bring i
Erekle,
In the logs you provided I see:
IOError: [Errno 5] _handleRequests._checkForMail - Could not read
mailbox:
/rhev/data-center/6d52512e-1c02-4509-880a-bf57cbad4bdf/mastersd/dom_md/inbox
and
StorageDomainMasterError: Error validating master storage domain: ('MD
read error',)
which seems
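An EIO like the one above can sometimes be narrowed down by checking whether the mailbox file is readable at all; a hypothetical first check (the datacenter UUID is taken from the error message, substitute your own):

```shell
INBOX=/rhev/data-center/6d52512e-1c02-4509-880a-bf57cbad4bdf/mastersd/dom_md/inbox
ls -l "$INBOX"

# If this read also fails with EIO, the problem most likely lies in the
# underlying master storage domain rather than in vdsm itself:
dd if="$INBOX" of=/dev/null bs=4096 count=1
```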
On behalf of oVirt and the Xen Project, we are excited to announce that the
call for proposals is now open for the Virtualization & IaaS devroom at the
upcoming FOSDEM 2018, to be hosted on February 3 and 4, 2018.
This year will mark FOSDEM’s 18th anniversary as one of the longest-running
free and
Hi,
Can you please tell us what issue you are actually facing? :) It would be
easier to debug an issue rather than an error message that can be caused
by several things.
Also, can you provide the engine and the vdsm logs?
thank you,
Dafna
On 10/16/2017 02:30 PM, Erekle Magradze wrote:
>
It was a typo in the failure message;
this is what I was getting:
VDSM hostname command GetStatsVDS failed: Connection reset by peer
On 10/16/2017 03:21 PM, Erekle Magradze wrote:
Hi,
It's getting clearer now; indeed the momd service is disabled
● momd.service - Memory Overcommitment Manager
Hi,
It's getting clearer now; indeed the momd service is disabled
● momd.service - Memory Overcommitment Manager Daemon
Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
Active: inactive (dead)
mom-vdsm is enabled and running.
● mom-vdsm.service - MOM inst
Hello,
I'm in 4.1.6.
I have a VM with 2 virtio-scsi disks (on FC).
If I try to do a resize of the boot disk (preallocated) from this menu:
1)
- select VM
- disks sub tab
- select disk
- edit
- edit virtual disk window
- extend size by X Gb
I get an error saying "not all VMs are powered down"
Instead
Hi,
how do you start MOM? MOM is supposed to talk to vdsm; we do not talk
to libvirt directly. The line you posted comes from vdsm, and vdsm is
telling you it can't talk to MOM.
Which MOM service is enabled? There are two, momd and mom-vdsm;
the second one is the one that should be enabled.
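The advice above can be sketched as a couple of commands, assuming a standard vdsm install (momd is a static unit and may legitimately show as inactive; mom-vdsm is the instance vdsm talks to):

```shell
# Check both units side by side:
systemctl status momd mom-vdsm

# If mom-vdsm is not running, enable and start it, then restart vdsmd so it
# reconnects; the "MOM not available" warning should clear shortly after:
systemctl enable --now mom-vdsm
systemctl restart vdsmd
```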
Hi Martin,
Thanks for the answer. Unfortunately this warning message persists; does
it mean that MOM cannot communicate with libvirt? How critical is it?
Best
Erekle
On 10/16/2017 03:03 PM, Martin Sivak wrote:
Hi,
it is just a warning, there is nothing you have to solve unless it
does not
Hi,
it is just a warning; there is nothing you have to fix unless it fails
to resolve itself within a minute or so. If it happens only once
or twice after a vdsm or MOM restart then you are fine.
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Oct 16, 2017 at 2:44 PM, Erekle Magradze
wrote:
>
Hi,
after running
systemctl status vdsm I see that it's running, with this message at
the end:
Oct 16 14:26:52 hostname vdsmd[2392]: vdsm throttled WARN MOM not available.
Oct 16 14:26:52 hostname vdsmd[2392]: vdsm throttled WARN MOM not
available, KSM stats will be missing.
Oct 16 14:2
On 10/16/2017 11:21 AM, Sahina Bose wrote:
On Mon, Oct 16, 2017 at 2:33 PM, Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:
Hi,
My setup uses a separate physical network for gluster storage,
this network is available on all hosts and defined as gluster
network in
Hi,
You can use setup/dbutils/taskcleaner.sh [1]
Run with -h to see all the options.
Regards,
Fred
[1]
https://github.com/oVirt/ovirt-engine/blob/master/packaging/setup/dbutils/taskcleaner.sh
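As a sketch, on an installed engine the dbutils scripts typically live under /usr/share/ovirt-engine/setup/dbutils/ (the exact path may vary by version), so the suggestion above would look roughly like:

```shell
cd /usr/share/ovirt-engine/setup/dbutils
./taskcleaner.sh -h    # review all options before removing any tasks
```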
On Wed, Oct 4, 2017 at 7:15 AM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:
> Hi,
>
> We
Hello guys,
I have a 3-node oVirt setup with GlusterFS volumes.
Kernel version: 3.10.0-693.2.2.el7.x86_64
Gluster version: 3.8.15-2
The oVirt engine is running on a separate bare-metal host.
I am getting the following failure message:
VDSM command GetStatsVDS
Hi all,
I'm using oVirt 4.1 with Ceph-backed VMs.
I'm trying to understand how to configure the rbd cache on the nodes.
During VM boot, libvirt looks for a file at /etc/ceph/ceph.conf. After
placing a file there with the correct cache parameters the warning
disappears, but the cache isn't
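For illustration, a minimal /etc/ceph/ceph.conf sketch on the hypervisor side; the values are examples, not tuned recommendations:

```ini
[client]
rbd cache = true
rbd cache size = 33554432                 ; 32 MiB
rbd cache max dirty = 25165824            ; bytes dirty before writeback starts
rbd cache writethrough until flush = true
```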
On Mon, Oct 16, 2017 at 2:33 PM, Arsène Gschwind
wrote:
> Hi,
>
> My setup uses a separate physical network for gluster storage. This
> network is available on all hosts and is defined as the gluster network in
> the engine, but the engine itself has no connection to that network.
> Does the engine need
Hi,
My setup uses a separate physical network for gluster storage. This
network is available on all hosts and is defined as the gluster network in
the engine, but the engine itself has no connection to that network.
Does the engine need to have a connection to the gluster network?
engine.log reports