Can you post the gluster mount logs from the node where the paused VM was
running (under
/var/log/glusterfs/rhev-data-center-mnt-glusterSD.log)
?
Which version of glusterfs are you running?
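In case it helps with collecting this, a minimal sketch. The commands are the usual ones on a Gluster node; the log-name convention encoded in the helper is an assumption based on how the GlusterFS client typically names mount logs (the mount point with slashes replaced by dashes):

```shell
# On the affected node (assumes standard oVirt/Gluster packaging):
#   glusterfs --version          # report the installed client version
#   ls /var/log/glusterfs/       # list candidate mount logs
# Helper: derive the expected client log name from a mount point.
mount_log_name() {
    # e.g. /rhev/data-center/mnt/glusterSD/srv:vol
    #   -> rhev-data-center-mnt-glusterSD-srv:vol.log
    printf '%s.log\n' "$(printf '%s' "${1#/}" | tr '/' '-')"
}
```

So `mount_log_name /rhev/data-center/mnt/glusterSD/srv:vol` prints `rhev-data-center-mnt-glusterSD-srv:vol.log`.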
On 06/24/2016 07:49 AM, Bill Bill wrote:
Hello,
Have 3 nodes running both oVirt and Gluster on
Hi Didi,
Unfortunately the box got reinstalled before I could grab the logs.
I think I have an idea as to what caused the issue, though. When I was
originally registering the oVirt node, one of the Fibre Channel volumes
wouldn't detect, so it just kept waiting to register with the hosted
Hello,
Have 3 nodes running both oVirt and Gluster on 4 SSDs each. At the moment,
there are two physical nics, one has public internet access and the other is a
non-routable network used for ovirtmgmt & gluster.
In the logical networks, I have selected gluster for the non-routable network
Hello list,
I've successfully upgraded oVirt to 4.0 from 3.6 on my engine and three
hosts. However, it doesn't look like I can change the Cluster
Compatibility Version to 4.0. It tells me I need to shut down all the VMs
in the cluster. Except I use Hosted Engine. How do I change the cluster
Hi!
After cleaning the metadata, yum does update vdsm:
[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
Hi Roman,
Thanks for the detailed steps. I followed the idea you outlined and I
think it's easier than what I had thought of (moving my self-hosted engine
back to physical hardware, upgrading, and moving it back to self-hosted). I
will give it a spin in my build RHEV cluster tomorrow and let you
Hi Scott,
On Thu, Jun 23, 2016 at 8:54 PM, Scott wrote:
> Hello list,
>
> I'm trying to upgrade a self-hosted engine RHEV environment running 3.5/el6
> to 3.6/el7. I'm following the process outlined in these two documents:
>
>
On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi wrote:
>
> Hi!
> I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self
> hosted engine.
>
Hi Stefano, can you please try "yum clean metadata" and "yum update" again?
You should get vdsm 4.18.4.1, please let us
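For reference, a quick way to confirm the result. This is a sketch under the assumption that GNU `sort -V` ordering matches RPM version ordering for these version strings:

```shell
# After "yum clean metadata && yum update", confirm the installed build:
#   rpm -q vdsm
# Helper: true when version $1 is at least version $2 (sort -V ordering).
version_at_least() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}
# version_at_least "$(rpm -q --qf '%{VERSION}' vdsm)" 4.18.4.1 \
#     && echo "vdsm is new enough"
```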
Hello list,
I'm trying to upgrade a self-hosted engine RHEV environment running 3.5/el6
to 3.6/el7. I'm following the process outlined in these two documents:
Please share the engine log.
On Thu, Jun 23, 2016 at 8:07 PM, Claude Durocher wrote:
> I did a complete reinstall of ovirt 4.0 (with hosted engine appliance) and
> the error is there with a single host after minimum configuration (add a
> single nfs storage
> On 23 Jun 2016, at 18:36, Stefano Danzi wrote:
>
>
> Hi!
> I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self
> hosted engine.
>
> The first thing is that the host network loses the default gateway
> configuration. But this is not the problem.
>
>
I did a complete reinstall of oVirt 4.0 (with the hosted engine appliance) and
the error is there with a single host after minimal configuration (adding a
single NFS storage domain).
The engine.log file doesn't contain any irregularities.
On Wednesday, June 22, 2016 17:18 EDT, "Claude Durocher"
Hi!
I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self
hosted engine.
The first thing is that the host network loses the default gateway
configuration. But this is not the problem.
Logs:
==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-23
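For anyone hitting the same symptom, a small sketch of what I'd check first; the systemd unit names are assumptions based on a standard hosted-engine setup, and the `::LEVEL::` record format matches the agent.log excerpt above:

```shell
# On the host (assumes standard hosted-engine tooling):
#   hosted-engine --vm-status                       # HA state as the agent sees it
#   systemctl status ovirt-ha-agent ovirt-ha-broker # are the HA daemons running?
# Helper: surface only the WARNING/ERROR records from an HA agent log file.
agent_problems() {
    grep -E '::(WARNING|ERROR)::' "$1"
}
# agent_problems /var/log/ovirt-hosted-engine-ha/agent.log
```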
Hi,
The new oVirt-Live 4.0.0 is available for download.
You can download it from:
http://plain.resources.ovirt.org/pub/ovirt-4.0/iso/ovirt-live/ovirt-live-4.0.0.iso
Thanks in advance,
Lev Veyde.
___
Users mailing list
Users@ovirt.org
The oVirt Project is pleased to announce today the general availability of
oVirt 4.0.0.
This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
* Red Hat
Hi List,
I have two nodes (running CentOS 7), and the network interface order
changes for some interfaces after every reboot.
The configurations are done through the oVirt GUI. So the ifcfg-ethX
scripts are configured automatically by VDSM.
Is there any option to get this configured to be
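One approach that may help, sketched here without having tested it against VDSM: pin names to MAC addresses with a persistent-net udev rule (the rule file path and match keys assume a stock CentOS 7 setup). The helper just formats the rule line:

```shell
# Helper: format a persistent-net udev rule pinning a name to a MAC address.
persistent_net_rule() {
    # $1 = MAC address, $2 = desired interface name
    printf 'SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="%s", NAME="%s"\n' "$1" "$2"
}
# persistent_net_rule aa:bb:cc:dd:ee:ff eth0 >> /etc/udev/rules.d/70-persistent-net.rules
# (VDSM-written ifcfg files can also carry HWADDR= to the same effect.)
```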
Hi all,
Before 4.0, when we performed a live storage migration on a VM disk, we
expected the disk format to become qcow2 because of the auto-generated
snapshot created as part of the live storage migration process.
From 4.0 this process changed, and after the disk migration finishes,
the auto
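An easy way to see what actually happened to a disk is to inspect the image with qemu-img (the path below is illustrative, not a real one from this setup):

```shell
# Inspect the volume after migration (path is illustrative):
#   qemu-img info /path/to/volume
# Helper: extract the "file format" field from captured `qemu-img info` output.
disk_format() {
    awk -F': ' '/^file format/ {print $2}' "$1"
}
```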
On 22/06/16 18:19, Yaniv Kaul wrote:
> > Hi Rafael,
> >
> > do you have an ETA for the version?
> >
> > The USB problem is a bit of a problem for me.
> > Or is it possible to go to the release candidate now and later to the
> > final version?
>
> Yes.
> Y.
>
>
Well I had some problems
On 22.06.16 09:57, JC Clark wrote:
> I am trying to establish a vCenter as an External Provider. When I try
> to "Test" the connection, the oVirt error message says that I have "failed
> to communicate".
There is a fixed upstream bug for that:
https://bugzilla.redhat.com/show_bug.cgi?id=1293591
> On 23 Jun 2016, at 04:56, qinglong.d...@horebdata.cn wrote:
>
> Hi, all
> I have found that the latest version of qemu has been updated to 2.6.
> In the latest version it supports a new virtio-gpu device which supports
> accelerated 2D and 3D. Qemu 2.3 is used in oVirt 3.6 for now. So
Please check your engine's engine.log to see why it is attempting to connect
every monitoring cycle.
Is the host in 'NonResponsive' state?
On Wed, Jun 22, 2016 at 11:18 PM, Claude Durocher wrote:
> Here's a more complete log of vdsm with the error :
>
>
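To quantify what the engine is doing, a rough sketch. The log path is the standard one on the engine VM, but the 'connect' pattern is an assumption; adjust it to whatever the actual message text is:

```shell
# The engine log lives on the engine VM:
#   less /var/log/ovirt-engine/engine.log
# Helper: count lines mentioning connection attempts (case-insensitive).
connect_attempts() {
    grep -ci 'connect' "$1"
}
```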