[ovirt-users] CEPH rbd support in EL7 libvirt

2015-10-11 Thread Nux!
Hi folks,

I was directed here by Sandro with the question in the $subject. 
As I could not find anything conclusive in either bugzilla or the 7.2 release 
notes, can someone clarify this for me?
At this point it's apparently as easy as rebuilding the libvirt src.rpm with
"with_storage_rbd" set to 1 [1]; a rough sketch of that rebuild follows the link below.

I see users migrating from CentOS to Ubuntu because this support is missing; it's not
even available as a technology preview.
It seems odd for Red Hat to undermine their own projects in this way.

[1] - 
http://blog.widodh.nl/2015/04/rebuilding-libvirt-under-centos-7-1-with-rbd-storage-pool-support/
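
For anyone who wants to try, a minimal sketch of that rebuild on CentOS 7 (package names
and paths are assumptions based on the linked post; whether the stock spec honours the
macro, and which Ceph devel packages are needed, should be verified there):

# needs yum-utils and rpm-build, plus a Ceph repo providing the librbd/librados devel packages
yumdownloader --source libvirt
rpm -ivh libvirt-*.src.rpm
sudo yum-builddep -y ~/rpmbuild/SPECS/libvirt.spec
rpmbuild -ba --define "with_storage_rbd 1" ~/rpmbuild/SPECS/libvirt.spec
# the rebuilt packages land under ~/rpmbuild/RPMS/x86_64/ and can replace the stock ones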

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] engine.log is looping with Volume XXX contains a apparently corrupt brick(s).

2015-10-11 Thread Nico
 

Hi 

Recently I built a small oVirt platform with 2 dedicated servers and
GlusterFS to sync the VM storage.

oVirt setup is simple:

ovirt01 : Host Agent (VDSM) + oVirt Engine
ovirt02 : Host Agent (VDSM)

Versions:

ovirt-release35-005-1.noarch
ovirt-engine-3.5.4.2-1.el7.centos.noarch
vdsm-4.16.26-0.el7.centos.x86_64
vdsm-gluster-4.16.26-0.el7.centos.noarch
glusterfs-server-3.7.4-2.el7.x86_64

The GlusterFS setup is simple: 2 bricks in replicate mode.

It was done in the shell rather than with the oVirt GUI, and the volume was then added
in Storage as a new DATA domain (GlusterFS V3); a sketch of the steps is below.
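
(For reference, a minimal sketch of the shell steps that produce such a volume; the
hostnames and brick paths are the ones from the output below, and the exact order of
the "set" options is an assumption:)

# on ovirt01, with glusterd running on both nodes
gluster peer probe ovirt02
gluster volume create ovirt replica 2 ovirt01:/gluster/ovirt ovirt02:/gluster/ovirt
# vdsm:kvm (36:36) ownership so oVirt can use the volume as a storage domain
gluster volume set ovirt storage.owner-uid 36
gluster volume set ovirt storage.owner-gid 36
gluster volume set ovirt server.allow-insecure on
gluster volume start ovirt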

# gluster volume info

Volume Name: ovirt
Type: Replicate
Volume ID: 043d2d36-dc2c-4f75-9d28-96dbac25d07c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt01:/gluster/ovirt
Brick2: ovirt02:/gluster/ovirt
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: true
auth.allow: IP_A, IP_B
network.ping-timeout: 10
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on

The data is reachable on the 2 nodes through a mount point that oVirt
created when I configured the storage domain with the GUI:

localhost:/ovirt  306G  216G  78G  74%  /rhev/data-center/mnt/glusterSD/localhost:_ovirt

I created 7 VMs on this shared storage and everything works fine; I can do
live migration without issues.

But when I check /var/log/ovirt/engine.log on ovirt01, these errors
loop every 2 seconds:

2015-10-11 17:29:50,971 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-29) [34dbe5cf] START,
GlusterVolumesListVDSCommand(HostName = ovirt02, HostId =
65a5bb5d-721f-4a4b-9e77-c4b9162c0aa6), log id: 41443b77 

2015-10-11 17:29:50,998 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-29) [34dbe5cf] Could not add brick
ovirt02:/gluster/ovirt to volume 043d2d36-dc2c-4f75-9d28-96dbac25d07c -
server uuid 3c340e59-334f-4aa6-ad61-af2acaf3cad6 not found in cluster
fb976d4f-de13-449b-93e8-600fcb59d4e6 

2015-10-11 17:29:50,999 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-29) [34dbe5cf] FINISH,
GlusterVolumesListVDSCommand, return:
{043d2d36-dc2c-4f75-9d28-96dbac25d07c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@200ae0d1},
log id: 41443b77 

2015-10-11 17:29:51,001 WARN
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
(DefaultQuartzScheduler_Worker-29) [34dbe5cf] Volume ovirt contains a
apparently corrupt brick(s). Hence will not add it to engine at this
point. 
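
(For what it's worth, that "server uuid ... not found in cluster" warning usually means
the UUID that glusterd on ovirt02 advertises does not match the gluster UUID the engine
recorded for that host, e.g. after the node was reinstalled or re-added. A minimal
sketch of how to compare the two; the engine-DB table and column names are assumptions
and should be checked against the actual schema:)

# on each gluster node: the UUID glusterd advertises for itself
cat /var/lib/glusterd/glusterd.info
gluster peer status
# on the engine host: what the engine has stored for each host
sudo -u postgres psql engine -c "select v.vds_name, g.gluster_server_uuid from gluster_server g join vds_static v on g.server_id = v.vds_id;"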

I experimented a lot with oVirt: at first it was running on a single node in a local
datacenter, then I added a second node, moved the first host to a new datacenter,
migrated the VM images, and so on, with some pain at certain moments. Now everything
looks fine, but I prefer to double-check.

So I would like to know if there is a real issue with the oVirt/Gluster setup that I am
not seeing. Any information is welcome, because I am a bit worried to see these
messages looping in the log.

Thanks in advance.

Regards 

Nico 


Re: [ovirt-users] oVirt Engine redundant ?

2015-10-11 Thread Julian De Marchi

On 12/10/2015 1:56 AM, Nico wrote:

Hello

I have a question regarding the redundancy of the oVirt Engine, the component through
which we control everything via the GUI (the vCenter equivalent).

Initially my setup was on a single server, installed following the All-in-One
documentation.

Later I figured out that I would need redundancy, so I installed a second node.

My question is about that "vCenter": is it possible to start it on the second node
when the first node is down or in maintenance? How do I ensure it is replicated? I see
there is a PostgreSQL DB on the first node but not on the second node, and I'm a bit
worried.

So, is it possible to have a dormant/standby oVirt Engine (the oVirt web GUI)? If yes,
could you point me to the related documentation, as I didn't find it.

Maybe it is possible to run it in a VM? Is that a good idea, and would it still be
possible to migrate it?


The oVirt hosted-engine will do what you want. Have a read of the page below.

http://www.ovirt.org/Migrate_to_Hosted_Engine
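
For orientation, a very rough sketch of that flow with 3.5-era commands (prompts and
options may differ, so follow the page above rather than this outline):

# take a backup of the existing all-in-one engine before migrating
engine-backup --mode=backup --file=engine.bck --log=engine-backup.log
# on the first host: deploy the self-hosted engine VM and restore the backup into it
yum install ovirt-hosted-engine-setup
hosted-engine --deploy
# on each additional host: run the same setup and answer that it is an extra host;
# the HA agents then decide where the engine VM runs and restart it on failure
hosted-engine --vm-status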

--julian