Are there any known issues with cloud-init not setting the network gateway?
I'm trying to create a host with the Ansible roles; everything is OK apart from
the network settings.
Ansible code:
- block:
    - name: Authenticate
      ovirt_auth:
        password: ***
        url:
Could you provide the output of "gluster volume status" and the gluster
mount logs to check further?
Are all the hosts shown as active in the engine (that is, is the monitoring
working)?
On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan wrote:
> Hi,
>
> After upgrade to 4.2
On 01/30/2018 03:43 PM, Christopher Cox wrote:
So, you're saying you export to an Export Domain (NFS), detach, and
then rsync that somewhere else (a different NFS system) and try to
attach that as an Export (import) Domain to a different datacenter and
import? Sounds like it should work to me.
On 01/30/2018 05:10 PM, Matt Simonsen wrote:
Hello all,
We have several oVirt data centers, mostly using oVirt 4.1.9 and NFS-backed
storage.
I'm planning a move for what will eventually be an exported VM, from one
physical location to another.
Is there any reason it would be problematic to export the image and then
use rsync to
I have Windows VMs, both client and server.
If you provide the engine.log file, we might have a look at it.
--
Respectfully
Mahdi A. Mahdi
From: Alex K
Sent: Monday, January 29, 2018 5:40 PM
To: Mahdi Adnan
Cc: users
Subject: Re:
Hi,
After upgrading to 4.2 I'm getting "VM paused due to unknown storage
error". While upgrading I had a Gluster problem with one of the
hosts, which I fixed by re-adding it to the Gluster peers. Now I see
something weird in the brick configuration, see attachment: one of the
bricks uses 0% of space.
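A few standard Gluster checks for a brick reporting 0% usage (the volume name "data" and the brick path are placeholders for the real values):

```shell
# Brick processes, ports, and online state for the volume.
gluster volume status data
# Entries still pending self-heal after the peer was re-added.
gluster volume heal data info
# Actual filesystem usage on the brick mount itself.
df -h /gluster/brick1
```

If the brick process is down or the heal info shows a large backlog, that would explain the 0% figure in the engine.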
Hi, I am trying to set up a cluster of two nodes with a self-hosted Engine.
Things went fine for the first machine, but it was rather messy with the second
one. I would like to have load balancing and failover for both the management
network and storage (NFS repository). So what exactly should I do
Hi all,
I released ioprocess 1.0.0 for Fedora 27 and 28.
If you are using Fedora, please install the new version from the
updates-testing repository
and test it.
Please share your feedback here:
https://bodhi.fedoraproject.org/updates/FEDORA-2018-fbe8141dd2
This version will be available soon from oVirt
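On Fedora, the announced build can be pulled straight from updates-testing; a sketch (requires root, package name as announced above):

```shell
# Enable the updates-testing repo only for this transaction
# and pull in the new ioprocess build.
sudo dnf --enablerepo=updates-testing upgrade ioprocess
```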
On Tue, Jan 30, 2018 at 4:51 PM, Elad Ben Aharon
wrote:
> Please try:
>
> vdsClient -s 0 teardownImage
>
How do I map spUUID, sdUUID and imgUUID?
___
Users mailing list
Users@ovirt.org
On Tue, Jan 30, 2018 at 4:36 PM, Gianluca Cecchi
wrote:
> On Tue, Jan 30, 2018 at 4:29 PM, Elad Ben Aharon
> wrote:
>
>> Try to deactivate this LV:
>>
>> lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0
>>
Please try:
vdsClient -s 0 teardownImage
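One way to recover the IDs teardownImage expects is from the directory layout under /rhev/data-center, which encodes pool, domain, and image UUIDs (the same layout mentioned later in this thread); a sketch with placeholder UUIDs in angle brackets:

```shell
# Each level of the tree is one of the UUIDs:
ls /rhev/data-center/                            # storage pool UUIDs (spUUID)
ls /rhev/data-center/<spUUID>/                   # storage domain UUIDs (sdUUID)
ls /rhev/data-center/<spUUID>/<sdUUID>/images/   # image UUIDs (imgUUID)
# With those in hand (argument order is an assumption; check vdsClient help):
vdsClient -s 0 teardownImage <sdUUID> <spUUID> <imgUUID>
```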
On Tue, Jan 30, 2018 at 5:36 PM, Gianluca Cecchi
wrote:
> On Tue, Jan 30, 2018 at 4:29 PM, Elad Ben Aharon
> wrote:
>
>> Try to deactivate this LV:
>>
>> lvchange -an
On Tue, Jan 30, 2018 at 4:29 PM, Elad Ben Aharon
wrote:
> Try to deactivate this LV:
>
> lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
>
# lvchange -an
Try to deactivate this LV:
lvchange -an
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
On Tue, Jan 30, 2018 at 5:25 PM, Gianluca Cecchi
wrote:
> On Tue, Jan 30, 2018 at 3:14 PM, Elad Ben Aharon
> wrote:
>
>> In a
On Tue, Jan 30, 2018 at 3:14 PM, Elad Ben Aharon
wrote:
> In a case of disk migration failure with a leftover LV on the destination
> domain, lvremove is what is needed. Also, make sure to remove the image
> directory on the destination domain (located under
>
In a case of disk migration failure with a leftover LV on the destination
domain, lvremove is what is needed. Also, make sure to remove the image
directory on the destination domain (located under
/rhev/data-center/%spuuid%/%sduuid%/images/)
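Put together, the cleanup described above would look roughly like this (the LV path is the one quoted earlier in this thread; the image path placeholders follow the layout just given):

```shell
# 1. Deactivate the leftover LV on the destination domain.
lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
# 2. Remove it.
lvremove /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
# 3. Remove the image directory on the destination domain.
rm -rf /rhev/data-center/<spUUID>/<sdUUID>/images/<imgUUID>
```

Double-check the UUIDs before the rm -rf step; there is no undo.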
On Mon, Jan 29, 2018 at 5:25 PM, Gianluca Cecchi
I wrote a small Ansible role to fix it, e.g. install ovirt-guest-agent and
fix the configuration; check it out if interested:
https://galaxy.ansible.com/hudecof/ovirt-guest-agent/
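Assuming the Galaxy namespace shown in the URL, installing the role is a one-liner:

```shell
ansible-galaxy install hudecof.ovirt-guest-agent
```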
But I'm not happy with the Debian package version. It should be split into
server and desktop versions. Right now it installs
The oVirt Project is pleased to announce the availability of the oVirt
4.2.1 Fourth
Release Candidate, as of January 30th, 2018.
This update is a release candidate of the first in a series of
stabilization updates to the 4.2
series.
This is pre-release software. This pre-release should not be
Hi Community,
Recently we discovered that our VMs became unstable after upgrading
from Fedora 26 to Fedora 27. The journalctl log shows the following:
Jan 29 20:03:28 host1.project.local libvirtd[2741]: 2018-01-29
19:03:28.789+: 2741: error : qemuMonitorIO:705 : internal error: End
of file
On 30/01/18 11:15 +0800, Ravyu Sivakumaran wrote:
Hi,
I am running an oVirt host on a SuperMicro SuperServer 2028GR-TRH equipped
with 2x2 Tesla M60s (they are dual-GPU cards). I've followed this guide:
https://www.ovirt.org/develop/release-management/features/virt/hostdev-passthrough/
and