[ovirt-users] Attach Storage via CLI

2017-01-20 Thread Steffen Nolden

Hi,

I have created a storage domain with ovirt-shell:

add storagedomain --host-name  --type data --storage-type 
glusterfs --storage-path :/data --storage-vfs_type glusterfs --name 
Data-Domain



Now I am trying to attach it to the data center. I found this command:

add storagedomain --datacenter-identifier Default --name Data-Domain


But I got this error message:

===
ERROR
===
cannot construct collection/member view, checked variants are:
[('--datacenter-identifier',)].
===


What is wrong, or how can I attach a storage domain to a data center 
via the CLI?
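
A hedged sketch of the same attach operation done through the REST API 
(which ovirt-shell wraps); the engine FQDN, admin password and data-center 
UUID below are placeholders, and the exact API path should be checked 
against the installed version:

curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -d '<storage_domain><name>Data-Domain</name></storage_domain>' \
     'https://ENGINE_FQDN/ovirt-engine/api/datacenters/DATACENTER_UUID/storagedomains'

If the request succeeds, the domain should then show up under the Default 
data center and can be activated from there.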


Thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot move pointer to top of console view in noVNC

2017-01-20 Thread Joop
On 19-1-2017 23:31, George Chlipala wrote:
> The subject line says it all.  When using noVNC and I move the mouse
> to the top of the console view, the pointer will stop short.  So, if I
> have a Windows VM and a window that is maximized, I cannot click on
> the close button.
>
I think what you're seeing is the cursor being locked to the client window.
There should be a helpful message in the window title to press SHIFT+F12
to release the cursor :-)

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host Device Mapping

2017-01-20 Thread Bryan Sockel

Hi,

I am trying to map a host device to a VM. I am able to get the host device 
mapped to the VM, but I am unable to start the VM. I receive the following 
error:

Error while executing action: 

VM:
Cannot run VM. There is no host that satisfies current scheduling 
constraints. See below for details:
The host vm-host-1 did not satisfy internal filter HostDevice because it 
does not support host device passthrough..
The host vm-host-2 did not satisfy internal filter HostDevice because it 
does not support host device passthrough..
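
A hedged checklist for the HostDevice filter failure above, assuming Intel 
hardware (AMD uses amd_iommu=on instead); run on vm-host-1 / vm-host-2:

# CPU virtualization extensions present?
grep -E -c 'vmx|svm' /proc/cpuinfo
# IOMMU announced by the firmware and enabled in the kernel?
dmesg | grep -i -e DMAR -e IOMMU
# intel_iommu=on on the kernel command line?
cat /proc/cmdline

If the IOMMU is off, enable VT-d/AMD-Vi in the firmware, add intel_iommu=on 
to the kernel command line and reboot; the hosts should then report device 
passthrough support and pass the scheduling filter.
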
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Community Newsletter: December 2016

2017-01-20 Thread Sandro Bonazzola
On 20 Jan 2017 at 18:57, "Gianluca Cecchi"  wrote:



On Fri, Jan 20, 2017 at 4:12 PM, Sandro Bonazzola 
wrote:

>
>
>
>
> On Thu, Jan 19, 2017 at 8:27 PM, Brian Proffitt 
> wrote:
>
>> It's a new year with new opportunities for oVirt to show off its
>> virtualization features! We're getting ready for DevConf.CZ in Brno next
>> week, and FOSDEM in Brussels the week after that! We look forward to
>> meeting European developers and sysadmins to share your experiences!
>>
>> Here's what happened in December of 2016:
>>
>
Brian,
your newsletter is very attractive, as always.
(my compliments also to Mikey Ariel)



> I would add oVirt Italian community events being planned:
>
> OpenStack & oVirt beer ; January 27th 2017
> https://www.meetup.com/it-IT/Meetup-OpenStack-Milano/events/236414454/
>
> oVirt Workshop and 4.1 Release party ;  February 13th 2017
> https://www.eventbrite.it/e/biglietti-ovirt-41-workshop-rele
> ase-party-31347251473
> https://www.facebook.com/events/391387704529823/
>
>
Nice!
I would try to attend (and in the meantime complete the update of the
Italian oVirt translation for the upcoming 4.1 release... ;-)


Great! I hope to find some time for helping as well.




Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] master storage domain stuck in locked state

2017-01-20 Thread Bill Bill
No clue how to get this out. I can mount all the storage manually on the 
hypervisors. It seems that after a reboot oVirt is now having some issue and 
the storage domain is stuck in a locked state. Because of this, I can't 
activate any other storage either, so the other domains are in maintenance 
and the master has been sitting in the locked state for hours.

This sticks out on a hypervisor:

StoragePoolWrongMaster: Wrong Master domain or its version: 
u'SD=d8a0172e-837f-4552-92c7-566dc4e548e4, 
pool=3fd2ad92-e1eb-49c2-906d-00ec233f610a'

Not sure why; nothing changed other than a reboot of the storage.
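
A hedged way to compare what VDSM itself reports for those UUIDs (vdsClient 
is the 4.0-era tool; newer releases ship vdsm-client instead), run on one of 
the hypervisors:

vdsClient -s 0 getStorageDomainInfo d8a0172e-837f-4552-92c7-566dc4e548e4
vdsClient -s 0 getStoragePoolInfo 3fd2ad92-e1eb-49c2-906d-00ec233f610a

The domain info includes its role and pool membership, which should show 
whether the domain and the pool metadata disagree about which domain is the 
master.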

Engine log shows:

[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(DefaultQuartzScheduler8) [5696732b] START, SetVdsStatusVDSCommand(HostName = 
U31U32NodeA, SetVdsStatusVDSCommandParameters:{runAsync='true', 
hostId='70e2b8e4-0752-47a8-884c-837a00013e79', status='NonOperational', 
nonOperationalReason='STORAGE_DOMAIN_UNREACHABLE', 
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 6db9820a

No idea why it says unreachable; it certainly is reachable, because I can 
manually mount ALL the storage on the hypervisor.

Sent from Mail for Windows 10

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] master storage domain stuck in locked state

2017-01-20 Thread Bill Bill

So apparently something didn’t change the metadata to master before the 
connection was lost. I changed the metadata role to master and it came back 
up. It seems emailing in helps, because every time I can’t figure something 
out, I email in and find the answer shortly after.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] master storage domain stuck in locked state

2017-01-20 Thread Bill Bill
Spoke too soon. Some hosts came back up, but the storage domain is still 
locked, so no VMs can be started. What is the proper way to force this to be 
unlocked? Each time we look to move into production after successful testing, 
something like this always seems to pop up at the last minute, rendering 
oVirt questionable in terms of reliability due to some unknown issue.
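
A hedged, read-only way to see how the engine records the domain status 
before forcing anything (table and column names are from the 4.0-era schema 
and should be verified; any manual change to these tables is risky and best 
done with guidance from the list):

su - postgres
psql engine -c "SELECT sd.storage_name, map.status
                FROM storage_domains sd
                JOIN storage_pool_iso_map map ON map.storage_id = sd.id;"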




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] master storage domain stuck in locked state

2017-01-20 Thread Bill Bill

I also cannot reinitialize the data center because the storage domain is locked.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] import graylog VM fails

2017-01-20 Thread Alex R
I am trying to use one of the virtual appliances available from
https://www.graylog.org/download. I have tried to import the VMware OVA
through the oVirt GUI, but it fails with "Failed to load VM configuration
from OVA file: /tmp/graylog-2.1.2-1.ova". /tmp/graylog-2.1.2-1.ova is the
location of the file on the web client.

So I tried the OpenStack image instead. From my oVirt host:

# unzip graylog-2.1.2-1.qcow2.gz

# qemu-img info /tmp/graylog-2.1.2-1.qcow2
image: /tmp/graylog-2.1.2-1.qcow2
file format: qcow2
virtual size: 5.9G (6291456000 bytes)
disk size: 5.8G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

I created a VM in oVirt named graylog with a 25 GB disk.

I located the disk ID of the graylog VM in the oVirt GUI:

1dae7037-73d5-4f11-b7da-c5d1a6aa36b7

# qemu-img convert -O raw /tmp/graylog-2.1.2-1.qcow2
/rhev/data-center/mnt/blockSD/6643dec0-12b2-4764-9473-a86975ef523d/images/1dae7037-73d5-4f11-b7da-c5d1a6aa36b7/17a57535-77d1-4b3f-a0b4-a6f9b7717b4a

I then tried to start the VM and got the following in both vdsm.log and the GUI:

2017-01-21T01:06:37.145854Z qemu-kvm: -drive
file=/rhev/data-center/0001-0001-0001-0001-02ad/6643dec0-12b2-4764-9473-a86975ef523d/images/1dae7037-73d5-4f11-b7da-c5d1a6aa36b7/17a57535-77d1-4b3f-a0b4-a6f9b7717b4a,format=raw,if=none,id=drive-virtio-disk0,serial=1dae7037-73d5-4f11-b7da-c5d1a6aa36b7,cache=none,werror=stop,rerror=stop,aio=threads:
Could not open
'/rhev/data-center/0001-0001-0001-0001-02ad/6643dec0-12b2-4764-9473-a86975ef523d/images/1dae7037-73d5-4f11-b7da-c5d1a6aa36b7/17a57535-77d1-4b3f-a0b4-a6f9b7717b4a':
Permission denied
Thread-2666080::INFO::2017-01-20
17:06:37,321::vm::1308::virt.vm::(setDownStatus)
vmId=`a1257117-e3f8-4896-9ab4-1b010298f282`::Changed state to Down:
internal error: qemu unexpectedly closed the monitor:
(process:23124): GLib-WARNING **: gmem.c:482: custom memory allocation
vtable not supported.
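
A hedged check, on the assumption that the "Permission denied" comes from 
qemu (which runs unprivileged) being unable to open a volume that a root-run 
qemu-img convert left with the wrong ownership or SELinux label; the paths 
below are abbreviated placeholders:

# inspect ownership and SELinux label of the target volume (vdsm:kvm is uid/gid 36:36)
ls -lZ /rhev/data-center/.../images/1dae7037-73d5-4f11-b7da-c5d1a6aa36b7/
# restore ownership if the convert changed it
chown 36:36 <volume path>
# or avoid the issue by doing the convert as the vdsm user in the first place
sudo -u vdsm qemu-img convert -O raw /tmp/graylog-2.1.2-1.qcow2 <volume path>

On a block domain the image path is a symlink to an LV, so also confirm that 
the LV is active and that its device node is accessible to vdsm.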


Thanks in advance.

-Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm problem

2017-01-20 Thread Beckman, Daniel
Can you describe your environment and the fix? Are you running your management 
engine in a Docker container?

Best,
Daniel
From:  on behalf of Стаценко Константин Юрьевич 

Date: Friday, January 20, 2017 at 1:31 AM
To: 'Ilya Fedotov' 
Cc: users 
Subject: Re: [ovirt-users] vdsm problem

Fixed. This was a Docker/SELinux problem, in case anyone is interested…

From: Ilya Fedotov [mailto:kosh...@gmail.com]
Sent: Friday, January 20, 2017 9:14 AM
To: Стаценко Константин Юрьевич 
Cc: users 
Subject: Re: [ovirt-users] vdsm problem

Dear Konstantin,

Please read the installation instructions first. Where did you see CentOS 7.3?

With best regards, Ilya





oVirt 4.0.6 Release Notes

The oVirt Project is pleased to announce the availability of 4.0.6 Release as 
of January 10, 2017.

oVirt is an open source alternative to VMware™ vSphere™, and provides an 
awesome KVM management interface for multi-node virtualization. This release is 
available now for Red Hat Enterprise Linux 7.2, CentOS Linux 7.2 (or similar).

To find out more about features which were added in previous oVirt releases, 
check out the previous versions' release notes. For a general overview of 
oVirt, read the Quick Start Guide and the About oVirt page.

Updated documentation has been provided by our downstream Red Hat 
Virtualization

2017-01-20 9:02 GMT+03:00 Стаценко Константин Юрьевич 
>:
Anyone ?

From: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] On Behalf Of 
Стаценко Константин Юрьевич
Sent: Thursday, January 19, 2017 5:08 PM
To: users >
Subject: [ovirt-users] vdsm problem

Hello!
Today, after installing some of the updates, vdsmd suddenly died. Running 
oVirt 4.0.6 on CentOS 7.3. It cannot start any more:

# journalctl -xe

-- Subject: Unit vdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has begun starting up.
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: Running mkdirs
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: Running configure_coredump
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: Running configure_vdsm_logs
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: Running wait_for_network
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: Running run_init_hooks
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: Running upgraded_version_check
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: Running check_is_configured
Jan 19 18:03:32 msk1-kvm001.interrao.ru sasldblistusers2[20115]: DIGEST-MD5 common mech free
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Error:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: One of the modules is not configured to work with VDSM.
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: To configure the module use the following:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: 'vdsm-tool configure [--module module-name]'.
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: If all modules are not configured try to use:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: 'vdsm-tool configure --force'
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: (The force flag will stop the module's service and start it
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: afterwards automatically to load the new configuration.)
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Current revision of multipath.conf detected, preserving
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: libvirt is already configured for vdsm
Jan 19 18:03:32 
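
Based on the hints in the log above, a minimal recovery sketch (the --force 
flag stops and restarts the affected services, as the message itself says):

vdsm-tool configure --force
systemctl restart vdsmd

That addresses the generic "module is not configured" case; the Docker/SELinux 
fix mentioned earlier in the thread may be needed in addition.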

Re: [ovirt-users] oVirt Community Newsletter: December 2016

2017-01-20 Thread Sandro Bonazzola
On Thu, Jan 19, 2017 at 8:27 PM, Brian Proffitt  wrote:

> It's a new year with new opportunities for oVirt to show off its
> virtualization features! We're getting ready for DevConf.CZ in Brno next
> week, and FOSDEM in Brussels the week after that! We look forward to
> meeting European developers and sysadmins to share your experiences!
>
> Here's what happened in December of 2016:
>
> -
> Software Releases
> -
>
> oVirt 4.0.6 Release is now available
> http://bit.ly/2iOI9cY
>
> 
> In the Community
> 
>
> Happy New Documentation!
> http://bit.ly/2iOLCrW
>
> oVirt System Tests to the Rescue!—How to Run End-to-End oVirt Tests on
> Your Patch
> http://bit.ly/2iONDUR
>
> CI Please Build—How to build your oVirt project on-demand
> http://bit.ly/2iOTAkD
>
> The Need for Speed—Coming Changes in oVirt's CI Standards
> http://bit.ly/2iOPUzf
>
> Еxtension of iptables Rules on oVirt 4.0 Hosts
> http://bit.ly/2iOPARp
>
> New oVirt Project Underway
> http://bit.ly/2iOKeW6
>
>

I would add oVirt Italian community events being planned:

OpenStack & oVirt beer ; January 27th 2017
https://www.meetup.com/it-IT/Meetup-OpenStack-Milano/events/236414454/

oVirt Workshop and 4.1 Release party ;  February 13th 2017
https://www.eventbrite.it/e/biglietti-ovirt-41-workshop-release-party-31347251473
https://www.facebook.com/events/391387704529823/



> 
> Deep Dives and Technical Discussions
> 
>
> KVM/Linux Nested Virtualization Support For ARM
> http://bit.ly/2iOILiD
>
> Virtual Machines in Kubernetes? How and what makes sense?
> http://bit.ly/2iOWDtj
>
> ANNOUNCE: New libvirt project Go XML parser model
> http://bit.ly/2iORd1j
>
> Using OVN with KVM and Libvirt
> http://bit.ly/2iOOEwc
>
> New libvirt project Go language bindings
> http://bit.ly/2iP0ne5
>
> CI tools testing lab: Making it do useful work
> http://bit.ly/2iOVilZ
>
> CI tools testing lab: Integrating Jenkins and adding Zuul UI
> http://bit.ly/2iOOBRa
>
> CI tools testing lab: Adding Zuul Merger
> http://bit.ly/2iOP1a8
>
> CI tools testing lab: Setting up Zuul Server
> http://bit.ly/2iOUZYn
>
> CI tools testing lab: Adding Gerrit
> http://bit.ly/2iP0CG1
>
> CI tools testing lab: Initial setup with Jenkins
> http://bit.ly/2iOSvtc
>
> ---
> Downstream News
> ---
>
> Debugging a kernel in QEMU/libvirt
> http://red.ht/2jvB5mf
>
> Five Reasons to Switch from vSphere to Red Hat Virtualization
> http://red.ht/2iLW5YU
>
> Red Hat scoops Best Virtualization Product at the V3 Technology Awards 2016
> http://red.ht/2hW9N7B
>
>
> --
> Brian Proffitt
> Principal Community Analyst
> Open Source and Standards
> @TheTechScribe
> 574.383.9BKP
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Monitoring disk I/O

2017-01-20 Thread Fernando Frediani
Having the three main metrics available (CPU, memory and disk) is pretty much 
a basic, widely wanted feature.


Fernando


On 19/01/2017 22:22, Michael Watters wrote:

Thanks. I can monitor the VMs using SNMP or collectd, but what I'd like
is to have I/O usage shown in the engine just like memory and CPU usage are.


On 1/19/17 4:21 PM, Markus Stockhausen wrote:

Hi there ...

we are running a simple custom script that collects qemu data on the nodes
via /proc etc. Storing this in RRD databases and doing a little LAMP
scripting, you get the attached result.
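
A hypothetical sketch of such a collector (not Markus's actual script); it 
must run as root, and the way the VM name appears on the qemu command line 
varies between libvirt versions:

for pid in $(pgrep -f qemu-kvm); do
    name=$(tr '\0' ' ' < /proc/$pid/cmdline | grep -o -- '-name [^ ]*' | cut -d' ' -f2)
    echo "== ${name:-pid $pid} =="
    grep -E 'read_bytes|write_bytes' /proc/$pid/io
done

The read_bytes/write_bytes counters are cumulative, so a real collector would 
sample them periodically and store the deltas (e.g. in RRD) to get a rate.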

Best regards.

Markus


From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of 
Michael Watters [watte...@watters.ws]
Sent: Thursday, 19 January 2017 21:42
To: users
Subject: [ovirt-users] Monitoring disk I/O

Does ovirt have any way to monitor disk I/O for each VM or disk in a
storage pool?  I am receiving disk latency warnings and would like to
know which VMs are causing the most disk I/O.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Restoring Hosted-Engine from a stale backup

2017-01-20 Thread Doug Ingham
Hi all,
 One of our engines has had a DB failure* & it seems there was an unnoticed
problem in its backup routine, meaning the last backup I've got is a couple
of weeks old.
Luckily, VDSM has kept the underlying VMs running without any
interruptions, so my objective is to get the HE back online & get the hosts
& VMs back under its control with minimal downtime.

So, my questions are the following...

   1. What problems can I expect to have with VMs added/modified since the
   last backup?
   2. As it's only the DB that's been affected, can I skip redeploying the
   Engine & jump straight to restoring the DB & rerunning engine-setup? (See
   the sketch after this list.)
   3. The original docs I read didn't mention that it's best to leave a
   host in maintenance mode before running the engine backup, so my plan is to
   install a new temporary host on a separate server, re-add the old hosts &
   then once everything's back up, remove the temporary host. Are there any
   faults in this plan?
   4. When it comes to deleting the old HE VM, the docs point to a
   paywalled guide on redhat.com...?
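
Regarding question 2, a hedged sketch of the DB-only restore path (flag names 
are from the 4.0-era engine-backup; check engine-backup --help and the actual 
backup file name before running anything):

engine-backup --mode=restore --file=engine-backup-<date>.tar.bz2 \
              --log=restore.log --provision-db --restore-permissions
engine-setup

This assumes the engine packages themselves are intact and only the database 
needs to be rebuilt from the old backup.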

CentOS 7
oVirt 4.0.4
Gluster 3.8

* Apparently a write somehow cleared fsync, despite not actually having
been written to disk?! No idea how that happened...

Many thanks,
-- 
Doug
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Community Newsletter: December 2016

2017-01-20 Thread Gianluca Cecchi
On Fri, Jan 20, 2017 at 4:12 PM, Sandro Bonazzola 
wrote:

>
>
>
>
> On Thu, Jan 19, 2017 at 8:27 PM, Brian Proffitt 
> wrote:
>
>> It's a new year with new opportunities for oVirt to show off its
>> virtualization features! We're getting ready for DevConf.CZ in Brno next
>> week, and FOSDEM in Brussels the week after that! We look forward to
>> meeting European developers and sysadmins to share your experiences!
>>
>> Here's what happened in December of 2016:
>>
>
Brian,
your newsletter is very attractive, as always.
(my compliments also to Mikey Ariel)



> I would add oVirt Italian community events being planned:
>
> OpenStack & oVirt beer ; January 27th 2017
> https://www.meetup.com/it-IT/Meetup-OpenStack-Milano/events/236414454/
>
> oVirt Workshop and 4.1 Release party ;  February 13th 2017
> https://www.eventbrite.it/e/biglietti-ovirt-41-workshop-rele
> ase-party-31347251473
> https://www.facebook.com/events/391387704529823/
>
>
Nice!
I would try to attend (and in the meantime complete the update of the
Italian oVirt translation for the upcoming 4.1 release... ;-)

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users