Re: [ovirt-users] oVirt HA.

2015-04-30 Thread Gianluca Cecchi
On Thu, Apr 30, 2015 at 8:12 AM, Sven Kieske s.kie...@mittwald.de wrote:



 but everything above a minute could become critical for large orgs
 relying on the ability to spawn vms at any given time.

 or imagine critical HA vms running on ovirt:
 you can't migrate them, when the engine is not running.
 you might not even want a downtime of a single second for them, that's
 why you implemented things like live migration in the first place.

 the bottom line is:
 if you manage critical infrastructure, the tools to manage
 this infrastructure have to be as reliable as the infrastructure itself.



If there is any interest I can revamp a testbed similar to the one I already
set up about one year ago with CentOS 6.5 and oVirt 3.3.3.

See here for a summary of my configuration:
http://lists.ovirt.org/pipermail/users/2014-March/022176.html

At that time I configured the cluster with Pacemaker/cman, and the resource
layout on the two-node cluster was something like this:

Last updated: Wed Mar  5 18:07:51 2014
Last change: Wed Mar  5 18:07:51 2014 via crm_resource on
ovirteng01.localdomain.local
Stack: cman
Current DC: ovirteng01.localdomain.local - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured
14 Resources configured


Online: [ ovirteng01.localdomain.local ovirteng02.localdomain.local ]

 Master/Slave Set: ms_OvirtData [OvirtData]
 Masters: [ ovirteng01.localdomain.local ]
 Slaves: [ ovirteng02.localdomain.local ]
 Resource Group: ovirt
 ip_OvirtData   (ocf::heartbeat:IPaddr2):   Started
ovirteng01.localdomain.local
 lvm_ovirt  (ocf::heartbeat:LVM):   Started ovirteng01.localdomain.local
 fs_OvirtData   (ocf::heartbeat:Filesystem):Started
ovirteng01.localdomain.local
 pgsql_OvirtData(lsb:postgresql):   Started
ovirteng01.localdomain.local
 ovirt-engine   (lsb:ovirt-engine): Started
ovirteng01.localdomain.local
 ovirt-websocket-proxy  (lsb:ovirt-websocket-proxy):Started
ovirteng01.localdomain.local
 httpd  (ocf::heartbeat:apache):Started
ovirteng01.localdomain.local
 Clone Set: p_lsb_nfs-clone [p_lsb_nfs]
 Started: [ ovirteng01.localdomain.local ovirteng02.localdomain.local ]
 Clone Set: p_exportfs_root-clone [p_exportfs_root]
 Started: [ ovirteng01.localdomain.local ovirteng02.localdomain.local ]

There were some customizations I had to make to the ovirt-engine service
init script and to set up HA for PostgreSQL.
I already have a task to dig into the cluster changes in CentOS 7.1, so
I can try to see how this adapts to oVirt 3.5 too.
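For reference, on CentOS 7 the same layout could be expressed with pcs instead
of the crm/cman tooling. This is only a minimal sketch under assumptions
(resource and group names, volume group, device and mount point are
placeholders, and the ms_OvirtData master/slave set from my old layout is
omitted):

  pcs cluster setup --name ovirt-engine-ha ovirteng01 ovirteng02
  pcs cluster start --all
  # VIP, storage and engine services grouped so they fail over together
  pcs resource create ip_OvirtData ocf:heartbeat:IPaddr2 ip=192.168.1.210 cidr_netmask=24
  pcs resource create lvm_ovirt ocf:heartbeat:LVM volgrpname=vg_ovirt exclusive=true
  pcs resource create fs_OvirtData ocf:heartbeat:Filesystem \
      device=/dev/vg_ovirt/lv_data directory=/var/lib/ovirt-ha fstype=xfs
  pcs resource create pgsql_OvirtData systemd:postgresql
  pcs resource create ovirt-engine systemd:ovirt-engine
  pcs resource group add ovirt ip_OvirtData lvm_ovirt fs_OvirtData pgsql_OvirtData ovirt-engine

On CentOS 7 the lsb: resource classes from my 3.3 setup would become systemd:
units, which is one of the changes I want to verify.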

Gianluca


[ovirt-users] neutron subnet problems

2015-04-30 Thread Jorick Astrego

Hi,

Still messing about with Neutron a bit (oVirt 3.5.2). When I create an
external network with a subnet, I get this error:

Error while executing action Add Subnet to Provider: Failed to
communicate with the external provider.

The network does get created with a subnet. Doing the same without a
subnet doesn't throw the error.

neutron net-list

+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| 66ef5f7f-b0d1-4ef9-8b5e-d7a7aa315d58 | public   | 37ea25da-68a1-478b-ab30-8cf6ad104ccb 217.114.98.64/26 |
| a30a9558-3a29-437d-b62c-96287f0a4702 | test2    | 1f651e5b-f6fa-4674-b3ee-923a50eb8042 192.168.2.0/24   |
| 5e90abfb-19c1-4b09-b387-8f2b815cad30 | private  | 60f29e92-5461-4eba-a375-4016bc4c6f39 172.17.0.0/24    |
| fa7a48df-a6c2-4cf8-a710-6f099abb81c8 | test     | 009b7e32-5eca-47c8-8b71-92e5ab7ca3d7 10.1.1.0/24      |
| 4dcd5192-07be-4128-95f7-88b1bb19a6f4 | test3    | 0d65bb46-632b-4698-b876-f534b76215e3 192.168.5.0/24   |
| b9144805-0b2c-468a-87e0-73d2fd91fe2d | nosubnet |                                                       |
+--------------------------------------+----------+-------------------------------------------------------+
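For comparison, listing the subnets straight from Neutron shows whether the
provider side answers cleanly and the failure is only in the engine's parsing
of the reply. A rough sketch (the endpoint host, port and token handling are
assumptions on my side):

  neutron subnet-list
  # or raw, to see the exact JSON the engine has to deserialize:
  curl -s -H "X-Auth-Token: $TOKEN" http://neutronhost:9696/v2.0/subnets | python -m json.tool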


The provider tests ok:

Test succeeded, managed to access provider.

I can see the POST but nothing else in the neutron log:


2015-04-30 11:48:49.288 1027 INFO neutron.plugins.ml2.db
[req-ea7a3897-966a-4c7f-b157-3be2da5a6a2b None] Added segment
eeeb23d3-fdc4-4686-bd83-250fcc12493c of type vlan for network
4dcd5192-07be-4128-95f7-88b1bb19a6f4
2015-04-30 11:48:49.302 1027 INFO neutron.wsgi
[req-ea7a3897-966a-4c7f-b157-3be2da5a6a2b None] **.***.***.** - -
[30/Apr/2015 11:48:49] POST /v2.0/networks HTTP/1.1 201 525 0.102263

Also fetching the subnet of any neutron network throws these errors:

2015-04-30 09:53:18,563 ERROR

[org.ovirt.engine.core.bll.provider.network.GetExternalSubnetsOnProviderByNetworkQuery]
(ajp--127.0.0.1-8702-6) Query
GetExternalSubnetsOnProviderByNetworkQuery failed. Exception message
is org.codehaus.jackson.map.JsonMappingException: Parameter #0 type
for factory method ([method valueOf, annotations: {interface

org.codehaus.jackson.annotate.JsonCreator=@org.codehaus.jackson.annotate.JsonCreator()}])
not suitable, must be java.lang.String :
org.jboss.resteasy.spi.ReaderException:
org.codehaus.jackson.map.JsonMappingException: Parameter #0 type for
factory method ([method valueOf, annotations: {interface

org.codehaus.jackson.annotate.JsonCreator=@org.codehaus.jackson.annotate.JsonCreator()}])
not suitable, must be java.lang.String:
org.jboss.resteasy.spi.ReaderException:
org.codehaus.jackson.map.JsonMappingException: Parameter #0 type for
factory method ([method valueOf, annotations: {interface

org.codehaus.jackson.annotate.JsonCreator=@org.codehaus.jackson.annotate.JsonCreator()}])
not suitable, must be java.lang.String
at

org.jboss.resteasy.client.core.BaseClientResponse.readFrom(BaseClientResponse.java:469)
[resteasy-jaxrs-2.3.2.Final.jar:]
at

org.jboss.resteasy.client.core.BaseClientResponse.getEntity(BaseClientResponse.java:377)
[resteasy-jaxrs-2.3.2.Final.jar:]
at

org.jboss.resteasy.client.core.BaseClientResponse.getEntity(BaseClientResponse.java:350)
[resteasy-jaxrs-2.3.2.Final.jar:]
at

org.jboss.resteasy.client.core.BaseClientResponse.getEntity(BaseClientResponse.java:344)
[resteasy-jaxrs-2.3.2.Final.jar:]
at

com.woorea.openstack.connector.RESTEasyResponse.getEntity(RESTEasyResponse.java:25)
[resteasy-connector.jar:]
at

com.woorea.openstack.base.client.OpenStackClient.execute(OpenStackClient.java:67)
[openstack-client.jar:]
at

com.woorea.openstack.base.client.OpenStackRequest.execute(OpenStackRequest.java:98)
[openstack-client.jar:]
at

org.ovirt.engine.core.bll.provider.network.openstack.OpenstackNetworkProviderProxy.getAllSubnets(OpenstackNetworkProviderProxy.java:132)
[bll.jar:]
at

org.ovirt.engine.core.bll.provider.network.GetExternalSubnetsOnProviderByNetworkQuery.executeQueryCommand(GetExternalSubnetsOnProviderByNetworkQuery.java:28)
[bll.jar:]
at

org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:73)
[bll.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
[dal.jar:]
at
org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:497)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:471)

Re: [ovirt-users] GlusterFS native client use with oVirt

2015-04-30 Thread Doron Fediuck


On 23/04/15 00:47, Will Dennis wrote:
 Hi all,
 
  
 
 Can someone tell me if it’s possible or not to utilize GlusterFS mounted
 as native (i.e. FUSE) for a storage domain with oVirt 3.5.x?  I have two
 nodes (with a third I’m thinking of using as well) that are running
 Gluster, and I’ve created the two volumes needed for hosted engine setup
 (“engine”, “data”) on them, and mounted them native (not via NFS.) Can
 this be used with oVirt 3.5.x?
 
  
 
 Or is this (from what I now understand) a new feature coming in oVirt 3.6?
 
  
 
 Thanks,
 
 Will
 
 
Hi Will,
note that Hosted engine requires replica-3 when using Gluster.

If all goes well, we may see a tighter integration coming in the next
version (will require gluster updates as well).
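For completeness, a replica-3 volume for the engine domain would be created
along these lines; a rough sketch only, with hypothetical host names and brick
paths (the owner uid/gid 36 settings match the vdsm:kvm user oVirt expects):

  gluster volume create engine replica 3 \
      host1:/bricks/engine host2:/bricks/engine host3:/bricks/engine
  gluster volume set engine storage.owner-uid 36
  gluster volume set engine storage.owner-gid 36
  gluster volume start engine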

Doron


Re: [ovirt-users] ovirtmgmt bridge, hosted engine, and running VMs

2015-04-30 Thread Sandro Bonazzola
On 29/04/2015 09:55, Garry Tiedemann wrote:
 Hi folks,
 
 I have a 3.5 hosted-engine setup, which was recently upgraded from 3.4. It 
 has five nodes, two of them set up for hosted-engine HA.
 
 Initial problem:
 
 One of the hosted-engine HVs had a score of 1800, owing to the management 
 bridge (ovirtmgmt) being absent.
 I put ovirtmgmt bridge back in. The score went to 2400. Great!
 After that, I could use hosted-engine --set-maintenance --mode=local to force 
 it to move to the other blade.
 
 Can't migrate hosted-engine using the oVirt GUI though; I'm sure I've done so 
 before, is that still supposed to work in 3.5?

AFAIK yes, it should still be supported.


 
 Second problem:
 
 Having added the ovirtmgmt bridge, that HV, which was running VMs before, now 
 refuses to accept them.
 
 An example from hosted-engine's engine.log, it refuses to put a VM on these 
 hosts:
 
 2015-04-29 17:23:07,593 INFO  
 [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
 (ajp--127.0.0.1-8702-2) [4ca60585] Candidate host
 bl09.networkvideo.com.au (4f26611a-9f44-4832-b9e3-1a06b1d513fc) was filtered 
 out by VAR__FILTERTYPE__INTERNAL *filter Network*
 2015-04-29 17:23:07,595 INFO  
 [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
 (ajp--127.0.0.1-8702-2) [4ca60585] Candidate host
 bl07.networkvideo.com.au (fc50be91-3e07-4447-a0d8-bffbda8a07c6) was filtered 
 out by VAR__FILTERTYPE__INTERNAL filter Network
 
 I think it must relate to the configuration for either the ovirtmgmt bridge, 
 or the physical interface to which the bridge is connected.
 I have seen, for example, the need for BOOTPROTO=none to be in the ifcfg-file.
 So, it seems that filter actually reads config files, and I suspect it's 
 looking for a certain directive and/or syntax. My guess is that it's being (too)
 fussy about syntax.
 
 Has anyone else encountered this? I'd be glad to learn more about how that 
 filter works, if someone can point me in the right direction please.
 
 The interface configs for the two scenarios are shown below.
 
 1. eth0 without a bridge - like this, I can run VMs on the HV, but 
 hosted-engine won't go there (of course).
 
 [root@bl09 network-scripts]# cat ifcfg-eth0
 DEVICE=eth0
 HWADDR=00:21:5a:48:4e:4a
 ONBOOT=yes
 IPADDR=10.0.14.9
 NETMASK=255.255.255.0
 GATEWAY=10.0.14.254
 BOOTPROTO=none
 MTU=1500
 DEFROUTE=yes
 NM_CONTROLLED=no
 
 2. With the bridge in, as it is now, I can put hosted-engine on it, but can't 
 run other VMs on there.
 
 [root@bl09 network-scripts]# cat ifcfg-eth0
 DEVICE=eth0
 HWADDR=00:21:5a:48:4e:4a
 BOOTPROTO=none
 ONBOOT=yes
 BRIDGE=ovirtmgmt
 MTU=1500
 DEFROUTE=no
 NM_CONTROLLED=no
 [root@bl09 network-scripts]# cat ifcfg-ovirtmgmt
 DEVICE=ovirtmgmt
 ONBOOT=yes
 TYPE=Bridge
 DELAY=0
 IPADDR=10.0.14.9
 NETMASK=255.255.255.0
 GATEWAY=10.0.14.254
 BOOTPROTO=static
 DEFROUTE=yes
 NM_CONTROLLED=no
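 As far as I understand it, the Network filter does not parse the ifcfg files
 themselves; it compares the logical networks the host reports through VDSM
 against the networks the cluster requires. A rough way to see what the host
 actually reports (assuming vdsClient from vdsm-cli is present on the
 hypervisor):

   vdsClient -s 0 getVdsCaps | grep -A 20 networks

 If ovirtmgmt or another required network is missing or non-operational in
 that output, the scheduler filters the host out regardless of the ifcfg
 syntax.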
 
 Any guidance appreciated.
 
 -- 
 Thanks,
 
 Garry
 
 
 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


[ovirt-users] Bad performance with Windows 2012 guests

2015-04-30 Thread Martijn Grendelman

Hi,

Ever since our first Windows Server 2012 deployment on oVirt (3.4 back 
then, now 3.5.1), I have noticed that working on these VMs via RDP or on 
the console via VNC is noticeably slower than on Windows 2008 guests on 
the same oVirt environment.


Basic things like starting an application (even the Server Manager that 
gets started automatically on login) take a very long time, sometimes 
minutes. Everything is just... slow.


We have recently deployed Microsoft Exchange on a Windows Server 2012 
guest on RHEV, and it doesn't perform well at all.


I haven't been able to find the cause for this slowness; CPU usage is 
not excessive and it doesn't seem I/O related. Moreover, other types of 
guests (Linux and even Windows 2008) do not have this problem.


We have 3 different environments:
- oVirt 3.5.1, on old Dell servers with Penryn Family CPUs with fairly 
slow storage on replicated GlusterFS, running CentOS 6.6
- oVirt 3.5.1, on modern 6-core SandyBridge servers with local storage 
via NFS, running CentOS 7.0
- RHEV 3.4.4 on modern 10-core SandyBridge servers with an iSCSI SAN 
behind it, running on RHEV Hypervisor 6.5


All of these (very different) environments exhibit the same behaviour: 
Linux and Windows 2008 are fast (or as fast as can be expected given the 
hardware), while Windows 2012 is painfully slow.


All Windows 2012 servers use VirtIO disk and network. I think all 
drivers are from the virtio-win-0.1-74 ISO.
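One way to double-check what the guests are actually given is to dump the
libvirt domain XML read-only on the host; a sketch with a hypothetical VM name
(assumes virsh can be used in read-only mode on the hypervisor):

  virsh -r dumpxml win2012-guest | grep -E 'target dev|model type|driver name'

If the disk target shows bus='ide' or the NIC model is e1000 rather than
virtio, that alone would explain a lot of the sluggishness.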


Does anyone share this experience?
Any idea why this could happen and how it can be fixed?
Any other information I should share to get a better idea?

Btw, for the guests on the RHEV environment, we have a case open with Red Hat 
support, but that doesn't seem to be leading to a quick solution, hence I'm 
writing here, too.


Thanks for any help.

Regards,
Martijn Grendelman


[ovirt-users] High number of XFS extents migrating disk from iscsi to Gluster

2015-04-30 Thread Adrián Santos Marrero
Hi!,

I've been migrating my oVirt storage from iSCSI to Gluster. What I've been
doing is moving the disks with the VM powered off.

This procedure was fine until I tried to migrate a 100GB disk from a VM
with an Oracle DB. During this migration access to Gluster slowed down,
affecting the whole cluster (VDSM timeouts in oVirt Engine, VMs entering an
unknown state, the gluster client disconnecting from a brick).

What I found on the Gluster servers was an XFS error:

XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)

And this error is due to the number of extents of the file (~ 8M):

gluster01:/gluster/ovirt1_brick_01/brick/89c4e113-1003-4b4e-850e-e7fc5bf2edc6/images/e5b7fc46-4168-4019-a13a-f9b7093d0534#
 xfs_bmap f9cd555d-fd98-499f-a853-d0ce76eecd37 | wc -l
 8627613


I'm using Gluster 3.6.2 and CentOS 7 (3.10.0-229.1.2.el7.x86_64 kernel).

Does anyone know why qemu-img is generating such a fragmented file?
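One mitigation I have seen suggested (a sketch only; the domain path is a
placeholder and whether it helps with the files qemu-img writes here is an
assumption) is an inheritable XFS extent size hint on the images directory,
so newly created files are allocated in larger extents:

  xfs_io -c 'extsize 1m' /gluster/ovirt1_brick_01/brick/<domain-uuid>/images
  xfs_io -c 'extsize' /gluster/ovirt1_brick_01/brick/<domain-uuid>/images   # verify the hint

The hint only applies to files created after it is set, so it would not
defragment the existing ~8M-extent image.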

Regards.

-- 

 Adrián Santos Marrero
 Systems Technician - IT Infrastructure Area
 Information and Communication Technology Services (STIC)
 Universidad de La Laguna (ULL)
 Teléfono/Phone: +34 922 845089



This e-mail may contain confidential and/or privileged information.
If you are not the intended recipient or have received this e-mail in
error you must destroy it.
Any unauthorised copying, disclosure or distribution of the material in
this e-mail is strictly forbidden by current legislation.


Re: [ovirt-users] About installing ovirt - ovirt-engine not found

2015-04-30 Thread Lars Nielsen



On 30/04/15 14:21, Simone Tiraboschi wrote:


- Original Message -

From: Lars Nielsen l...@steinwurf.com
To: Simone Tiraboschi stira...@redhat.com
Cc: users users@ovirt.org
Sent: Thursday, April 30, 2015 2:08:10 PM
Subject: Re: [ovirt-users] About installing ovirt - ovirt-engine not found



On 30/04/15 13:55, Simone Tiraboschi wrote:

- Original Message -

From: Lars Nielsen l...@steinwurf.com
To: users users@ovirt.org
Sent: Thursday, April 30, 2015 1:45:29 PM
Subject: [ovirt-users] About installing ovirt - ovirt-engine not found

Hey I need help, with the installation of oVirt (again).
After installing gluster and getting this up and running, I am trying to
install ovirt, by first running this command:

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

and then

yum install -y ovirt-engine

However this does not work as I get: Error packages does not exist.

Sorry, which distribution and which architecture are you trying to deploy
on?

x86_64 and fedora 21

We are still not supporting fedora 21 and we will probably jump to fedora 22 
skipping 21 at all.
Please see:
https://bugzilla.redhat.com/show_bug.cgi?id=1163062

But should I not still be able to at least see the package?



Can some please tell me how to fix it? I have tried to run yum update
-y, which did not help at all

Thanks in advance.

- Lars


--
Med venlig hilsen / Best Regards
Lars Nielsen
Student developer at Steinwurf
l...@steinwurf.com




--
Med venlig hilsen / Best Regards
Lars Nielsen
Student developer at Steinwurf
l...@steinwurf.com



Re: [ovirt-users] ovirt - import detached gluter volumes

2015-04-30 Thread Sahina Bose
Could you try 'gluster volume start VGFS1 force' to make sure the brick 
processes are restarted?

From the status output, it looks like the brick processes are not online.
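For example (a small sketch; a force start simply respawns the brick processes
of a volume that is already marked as started):

  gluster volume start VGFS1 force
  gluster volume start VGFS2 force
  gluster volume status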

On 04/22/2015 09:14 PM, p...@email.cz wrote:

Hello all,
I've got some trouble reattaching gluster volumes with data.

1) Based on a lot of tests I decided to clear the oVirt database (# 
engine-cleanup; # yum remove ovirt-engine; # yum -y install 
ovirt-engine; # engine-setup)

2) clearing successfully done, starting with an empty oVirt environment
3) then I added networks and nodes and made basic network adjustments = 
all works fine
4) time to attach the volumes/domains with the original data (a lot of 
VMs, ISO files, ...)


So, the main question is how to attach these volumes if I haven't 
defined any domain and can't cleanly import them.


The current status of the nodes is that glusterfs NFS is not mounted, but the 
bricks are OK


# gluster volume info

Volume Name: VGFS1
Type: Replicate
Volume ID: b9a1c347-6ffd-4122-8756-d513fe3f40b9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p1/GFS1
Brick2: 1kvm1:/FastClass/p1/GFS1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36

Volume Name: VGFS2
Type: Replicate
Volume ID: b65bb689-ecc8-4c33-a4e7-11dea6028f83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p2/GFS1
Brick2: 1kvm1:/FastClass/p2/GFS1
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36


[root@1kvm1 glusterfs]# gluster volume status
Status of volume: VGFS1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p1/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: VGFS2
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p2/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS2
------------------------------------------------------------------------------
There are no active volume tasks

[root@1kvm1 glusterfs]# gluster volume start VGFS1
volume start: VGFS1: failed: Volume VGFS1 already started



# mount | grep mapper # base XFS mounting
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p1 on /FastClass/p1 type 
xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p2 on /FastClass/p2 type 
xfs (rw,relatime,seclabel,attr2,inode64,noquota)



5) import screen
/VGFS1 dir exists and iptables flushed


# cat rhev-data-center-mnt-glusterSD-1kvm1:_VGFS1.log
[2015-04-22 15:21:50.204521] I [MSGID: 100030] 
[glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running 
/usr/sbin/glusterfs version 3.6.2 (args: /usr/sbin/glusterfs 
--volfile-server=1kvm1 --volfile-id=/VGFS1 
/rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1)
[2015-04-22 15:21:50.220383] I [dht-shared.c:337:dht_init_regex] 
0-VGFS1-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2015-04-22 15:21:50.55] I [client.c:2280:notify] 
0-VGFS1-client-1: parent translators are ready, attempting connect on 
transport
[2015-04-22 15:21:50.224528] I [client.c:2280:notify] 
0-VGFS1-client-2: parent translators are ready, attempting connect on 
transport

Final graph:
+--+
  1: volume VGFS1-client-1
  2: type protocol/client
  3: option ping-timeout 42
  4: option remote-host 1kvm2
  5: option remote-subvolume /FastClass/p1/GFS1
  6: option transport-type socket
  7: option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66
  8: option password 34bac9cd-0b4f-41c6-973b-7af568784d7b
  9: option send-gids true
 10: end-volume
 11:
 12: volume VGFS1-client-2
 13: type protocol/client
 14: option ping-timeout 42
 15: option remote-host 1kvm1
 16: option remote-subvolume /FastClass/p1/GFS1
 17: option transport-type socket
 18: option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66
 19: option password 34bac9cd-0b4f-41c6-973b-7af568784d7b
 20: option send-gids true
 21: end-volume
 22:
 23: volume VGFS1-replicate-0
 24: type cluster/replicate
 25: subvolumes VGFS1-client-1 VGFS1-client-2
 26: end-volume
 27:
 28: volume VGFS1-dht
 29: type cluster/distribute
 30: subvolumes VGFS1-replicate-0
 31: end-volume
 32:
 33: volume VGFS1-write-behind
 34: type performance/write-behind
 35: subvolumes VGFS1-dht
 36: end-volume
 37:
 38: volume VGFS1-read-ahead
 39: type performance/read-ahead
 40: subvolumes VGFS1-write-behind
 41: end-volume
 42:
 43: volume VGFS1-io-cache
 44: type 

Re: [ovirt-users] oVirt HA.

2015-04-30 Thread Dan Yasny


- Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: de...@ovirt.org
 Cc: users@ovirt.org
 Sent: Thursday, April 30, 2015 2:12:55 AM
 Subject: Re: [ovirt-users] oVirt HA.
 
 
 
 On 29/04/15 21:53, Dan Yasny wrote:
  There is always room for improvement, but think about it: ever since
  SolidICE, there has been a demand to minimize the amount of hardware used
  in a minimalistic setup, thus the hosted engine project. And now that we
  have it, all of a sudden, we need to provide a way to make multiple
  engines work in active/passive mode? If that capability is provided, I'm
  sure a new demand will arise, asking for active/active engines, infinitely
  scalable, and so on.
 
 of course you want active/active clusters for an enterprise product,
 rather sooner than later

No doubt there, however, that's not *just* HA any longer :)

  
  The question really is, where the line is drawn. The engine downtime can be
  a few minutes, it's not that critical in setups of hundreds of hosts.
  oVirt's raison d'etre is to make VMs run, everything else is just plumbing
  around that.
 I disagree:
 
 ovirt is a provider of critical infrastructure
 (vms and their management) for modern it business.
 
 imagine a large organisation just using ovirt for their virtualization,
 with lots of different departments which at will can spawn their own
 vms, maybe even from different countries with different time zones (just
 like red hat ;) ).
 
 of course, if just the engine service is down for some reason and you
 can just restart it with an outage of some seconds, or maybe a minute -
 fine.
 
 but everything above a minute could become critical for large orgs
 relying on the ability to spawn vms at any given time.
 

I think you're getting away from the point here. If the hosted engine's HA 
isn't fast enough, you can cluster the engine in other ways that were 
available well before hosted engine came to be. 

 or imagine critical HA vms running on ovirt:
 you can't migrate them, when the engine is not running.
 you might not even want a downtime of a single second for them, that's
 why you implemented things like live migration in the first place.
 
 the bottom line is:
 if you manage critical infrastructure, the tools to manage
 this infrastructure have to be as reliable as the infrastructure itself.
 
 --
 Mit freundlichen Grüßen / Regards
 
 Sven Kieske
 
 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
 


Re: [ovirt-users] oVirt HA.

2015-04-30 Thread Sven Kieske
On 30/04/15 15:14, Dan Yasny wrote:
 No doubt there, however, that's not *just* HA any longer :)

sorry for being a nitpicker and quoting Wikipedia, but:

There are three principles of high availability engineering. They are
1. Elimination of single points of failure. This means adding redundancy
to the system so that failure of a component does not mean failure of
the entire system.[..]

e.g. active/active clusters ;)

I promise this will be my last mail to this thread :-)

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [ovirt-users] About installing ovirt - ovirt-engine not found

2015-04-30 Thread Simone Tiraboschi


- Original Message -
 From: Lars Nielsen l...@steinwurf.com
 To: Simone Tiraboschi stira...@redhat.com
 Cc: users users@ovirt.org
 Sent: Thursday, April 30, 2015 2:29:05 PM
 Subject: Re: [ovirt-users] About installing ovirt - ovirt-engine not found
 
 
 
 On 30/04/15 14:21, Simone Tiraboschi wrote:
 
  - Original Message -
  From: Lars Nielsen l...@steinwurf.com
  To: Simone Tiraboschi stira...@redhat.com
  Cc: users users@ovirt.org
  Sent: Thursday, April 30, 2015 2:08:10 PM
  Subject: Re: [ovirt-users] About installing ovirt - ovirt-engine not found
 
 
 
  On 30/04/15 13:55, Simone Tiraboschi wrote:
  - Original Message -
  From: Lars Nielsen l...@steinwurf.com
  To: users users@ovirt.org
  Sent: Thursday, April 30, 2015 1:45:29 PM
  Subject: [ovirt-users] About installing ovirt - ovirt-engine not found
 
  Hey I need help, with the installation of oVirt (again).
  After installing gluster and getting this up and running, I am trying to
  install ovirt, by first running this command:
 
   yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
 
  and then
 
   yum install -y ovirt-engine
 
  However this does not work as I get: Error packages does not exist.
  Sorry, which distribution and which architecture are you trying to deploy
  on?
  x86_64 and fedora 21
  We are still not supporting fedora 21 and we will probably jump to fedora
  22 skipping 21 at all.
  Please see:
  https://bugzilla.redhat.com/show_bug.cgi?id=1163062
 But should I not still be able to at least see the package?

No, because it is using the fc$releasever variable in the yum repo specification and 
the repository it points to doesn't exist for fc21.
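A quick way to see what yum ends up asking for (the repo file name comes from
the ovirt-release35 package and the exact mirror layout is an assumption here):

  grep baseurl /etc/yum.repos.d/ovirt-3.5.repo
  # check whether the expanded directory actually exists upstream, e.g.:
  curl -sI http://resources.ovirt.org/pub/ovirt-3.5/rpm/fc21/ | head -1

A 404 there is why the ovirt-engine package is simply invisible to yum on
Fedora 21.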

  Can some please tell me how to fix it? I have tried to run yum update
  -y, which did not help at all
 
  Thanks in advance.
 
  - Lars
 
  --
  Med venlig hilsen / Best Regards
  Lars Nielsen
  Student developer at Steinwurf
  l...@steinwurf.com
 
 
 
 --
 Med venlig hilsen / Best Regards
 Lars Nielsen
 Student developer at Steinwurf
 l...@steinwurf.com
 
 


Re: [ovirt-users] About installing ovirt - ovirt-engine not found

2015-04-30 Thread Simone Tiraboschi


- Original Message -
 From: Lars Nielsen l...@steinwurf.com
 To: users users@ovirt.org
 Sent: Thursday, April 30, 2015 1:45:29 PM
 Subject: [ovirt-users] About installing ovirt - ovirt-engine not found
 
 Hey I need help, with the installation of oVirt (again).
 After installing gluster and getting this up and running, I am trying to
 install ovirt, by first running this command:
 
   yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
 
 and then
 
   yum install -y ovirt-engine
 
 However this does not work as I get: Error packages does not exist.

Sorry, which distribution and which architecture are you trying to deploy on?

 Can some please tell me how to fix it? I have tried to run yum update
 -y, which did not help at all
 
 Thanks in advance.
 
 - Lars
 


Re: [ovirt-users] About installing ovirt - ovirt-engine not found

2015-04-30 Thread Lars Nielsen



On 30/04/15 13:55, Simone Tiraboschi wrote:


- Original Message -

From: Lars Nielsen l...@steinwurf.com
To: users users@ovirt.org
Sent: Thursday, April 30, 2015 1:45:29 PM
Subject: [ovirt-users] About installing ovirt - ovirt-engine not found

Hey I need help, with the installation of oVirt (again).
After installing gluster and getting this up and running, I am trying to
install ovirt, by first running this command:

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

and then

yum install -y ovirt-engine

However this does not work as I get: Error packages does not exist.

Sorry, which distribution and which architecture are you trying to deploy on?

x86_64 and fedora 21



Can some please tell me how to fix it? I have tried to run yum update
-y, which did not help at all

Thanks in advance.

- Lars



--
Med venlig hilsen / Best Regards
Lars Nielsen
Student developer at Steinwurf
l...@steinwurf.com



[ovirt-users] About installing ovirt - ovirt-engine not found

2015-04-30 Thread Lars Nielsen

Hey, I need help with the installation of oVirt (again).
After installing gluster and getting this up and running, I am trying to 
install ovirt, by first running this command:


yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm 

and then

yum install -y ovirt-engine

However this does not work, as I get: Error packages does not exist.
Can someone please tell me how to fix it? I have tried to run yum update 
-y, which did not help at all.


Thanks in advance.

- Lars


Re: [ovirt-users] About installing ovirt - ovirt-engine not found

2015-04-30 Thread Simone Tiraboschi


- Original Message -
 From: Lars Nielsen l...@steinwurf.com
 To: Simone Tiraboschi stira...@redhat.com
 Cc: users users@ovirt.org
 Sent: Thursday, April 30, 2015 2:08:10 PM
 Subject: Re: [ovirt-users] About installing ovirt - ovirt-engine not found
 
 
 
 On 30/04/15 13:55, Simone Tiraboschi wrote:
 
  - Original Message -
  From: Lars Nielsen l...@steinwurf.com
  To: users users@ovirt.org
  Sent: Thursday, April 30, 2015 1:45:29 PM
  Subject: [ovirt-users] About installing ovirt - ovirt-engine not found
 
  Hey I need help, with the installation of oVirt (again).
  After installing gluster and getting this up and running, I am trying to
  install ovirt, by first running this command:
 
 yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
 
  and then
 
 yum install -y ovirt-engine
 
  However this does not work as I get: Error packages does not exist.
  Sorry, which distribution and which architecture are you trying to deploy
  on?
 x86_64 and fedora 21

We are still not supporting Fedora 21 and we will probably jump to Fedora 22, 
skipping 21 entirely.
Please see:
https://bugzilla.redhat.com/show_bug.cgi?id=1163062

 
  Can some please tell me how to fix it? I have tried to run yum update
  -y, which did not help at all
 
  Thanks in advance.
 
  - Lars
 
 
 --
 Med venlig hilsen / Best Regards
 Lars Nielsen
 Student developer at Steinwurf
 l...@steinwurf.com
 
 


Re: [ovirt-users] 3.5.2 live merge

2015-04-30 Thread Patrick Russell
The package list in my earlier email was incorrect. Here are the correct packages 
and some more logs:

# rpm -qa |grep ovirt
ovirt-engine-cli-3.5.0.5-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.2-1.el6.noarch
ovirt-engine-setup-plugin-allinone-3.5.2-1.el6.noarch
ovirt-engine-setup-3.5.2-1.el6.noarch
ovirt-engine-dbscripts-3.5.2-1.el6.noarch
ovirt-engine-backend-3.5.2-1.el6.noarch
ovirt-guest-tools-3.5.0-0.5.master.noarch
ovirt-host-deploy-1.3.1-1.el6.noarch
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.2-1.el6.noarch
ovirt-engine-restapi-3.5.2-1.el6.noarch
ovirt-engine-userportal-3.5.2-1.el6.noarch
ovirt-engine-3.5.2-1.el6.noarch
ovirt-host-deploy-offline-1.3.1-1.el6.x86_64
ovirt-engine-setup-plugin-websocket-proxy-3.5.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-extensions-api-impl-3.5.2-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-guest-tools-iso-3.5-7.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-lib-3.5.2-1.el6.noarch
ovirt-engine-setup-base-3.5.2-1.el6.noarch
ovirt-release35-003-1.noarch
ovirt-engine-tools-3.5.2-1.el6.noarch
ovirt-engine-webadmin-portal-3.5.2-1.el6.noarch
ovirt-host-deploy-java-1.3.1-1.el6.noarch

Logs:

2015-04-30 19:50:24,578 INFO  [org.ovirt.engine.core.bll.RemoveSnapshotCommand] 
(ajp--127.0.0.1-8702-6) [4cbab1c6] Lock Acquired to object EngineLock 
[exclusiveLocks= key: f2afb8c7-6b1d-4822-a43a-0889178de719 value: VM
, sharedLocks= ]
2015-04-30 19:50:24,689 INFO  [org.ovirt.engine.core.bll.RemoveSnapshotCommand] 
(ajp--127.0.0.1-8702-6) [4cbab1c6] Running command: RemoveSnapshotCommand 
internal: false. Entities affected :  ID: f2afb8c7-6b1d-4822-a43a-0889178de719 
Type: VMAction group MANIPULATE_VM_SNAPSHOTS with role type USER
2015-04-30 19:50:24,723 INFO  [org.ovirt.engine.core.bll.RemoveSnapshotCommand] 
(ajp--127.0.0.1-8702-6) [4cbab1c6] Lock freed to object EngineLock 
[exclusiveLocks= key: f2afb8c7-6b1d-4822-a43a-0889178de719 value: VM
, sharedLocks= ]
2015-04-30 19:50:24,762 INFO  
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] 
(pool-7-thread-7) [598dbb1e] Running command: 
RemoveSnapshotSingleDiskLiveCommand internal: true. Entities affected :  ID: 
---- Type: Storage
2015-04-30 19:50:24,771 INFO  
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] 
(pool-7-thread-8) [5c8285eb] Running command: 
RemoveSnapshotSingleDiskLiveCommand internal: true. Entities affected :  ID: 
---- Type: Storage
2015-04-30 19:50:24,805 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-6) Correlation ID: 4cbab1c6, Job ID: 
b0057939-d15a-4da7-b782-c835127a1413, Call Stack: null, Custom Event ID: -1, 
Message: Snapshot 'test_snap1' deletion for VM '3.5.2.wintest1' was initiated 
by patrick_russ...@corp.volusion.com.
2015-04-30 19:50:29,642 INFO  
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] 
(DefaultQuartzScheduler_Worker-13) [5c8285eb] Executing Live Merge command step 
MERGE
2015-04-30 19:50:29,665 INFO  
[org.ovirt.engine.core.bll.RemoveSnapshotCommandCallback] 
(DefaultQuartzScheduler_Worker-13) Waiting on Live Merge child commands to 
complete
2015-04-30 19:50:29,671 INFO  [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-9) [e10f6a3] Running command: MergeCommand internal: true. 
Entities affected :  ID: 58ef50e1-2efb-4081-a3ca-493c2ee7e7b6 Type: Storage
2015-04-30 19:50:29,679 INFO  
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] 
(DefaultQuartzScheduler_Worker-13) [598dbb1e] Executing Live Merge command step 
MERGE
2015-04-30 19:50:29,697 INFO  [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-10) [4898b008] Running command: MergeCommand internal: true. 
Entities affected :  ID: 58ef50e1-2efb-4081-a3ca-493c2ee7e7b6 Type: Storage
2015-04-30 19:50:29,703 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-9) 
[e10f6a3] START, MergeVDSCommand(HostName = 
aus02gdkvm01.dev.volusion.com, 
MergeVDSCommandParameters{HostId = cfc227ef-c9ab-4c35-aa22-138c169f2ba6, 
vmId=f2afb8c7-6b1d-4822-a43a-0889178de719, 
storagePoolId=edca3dc6-d9ca-410f-af84-9e24d49df3e0, 
storageDomainId=58ef50e1-2efb-4081-a3ca-493c2ee7e7b6, 
imageGroupId=bb8359c6-1dba-45e2-97cb-929d1c83a7c8, 
imageId=87f26ef2-6a84-4f7c-9771-be76650c703c, 
baseImageId=3f90ced5-58ea-43cc-955a-559e8d89fbb2, 
topImageId=87f26ef2-6a84-4f7c-9771-be76650c703c, bandwidth=0}), log id: 5859dc0
2015-04-30 19:50:29,728 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-10) 
[4898b008] START, MergeVDSCommand(HostName = 
aus02gdkvm01.dev.volusion.com, 
MergeVDSCommandParameters{HostId = cfc227ef-c9ab-4c35-aa22-138c169f2ba6, 

Re: [ovirt-users] 3.5.2 live merge

2015-04-30 Thread Tim Macy
Seems very similar to this bug -
https://bugzilla.redhat.com/show_bug.cgi?id=1213157
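Before digging further it may also be worth confirming that the hosts even
advertise the capability; a hedged check on one hypervisor (assumes vdsClient
is available and that vdsm in 3.5 reports a liveMerge flag, which it should):

  vdsClient -s 0 getVdsCaps | grep -i livemerge
  rpm -qa | grep -E 'libvirt|qemu-kvm|vdsm'

If liveMerge comes back false, the problem is on the host side (libvirt/qemu
versions) rather than in the engine.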

On Thu, Apr 30, 2015 at 3:54 PM, Patrick Russell 
patrick_russ...@volusion.com wrote:

  Our packages in my earlier email were incorrect. Here are the correct
 packages and some more logs:

  # rpm -qa |grep ovirt
 ovirt-engine-cli-3.5.0.5-1.el6.noarch
 ovirt-engine-setup-plugin-ovirt-engine-3.5.2-1.el6.noarch
 ovirt-engine-setup-plugin-allinone-3.5.2-1.el6.noarch
 ovirt-engine-setup-3.5.2-1.el6.noarch
 ovirt-engine-dbscripts-3.5.2-1.el6.noarch
 ovirt-engine-backend-3.5.2-1.el6.noarch
 ovirt-guest-tools-3.5.0-0.5.master.noarch
 ovirt-host-deploy-1.3.1-1.el6.noarch
 ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
 ovirt-engine-setup-plugin-ovirt-engine-common-3.5.2-1.el6.noarch
 ovirt-engine-restapi-3.5.2-1.el6.noarch
 ovirt-engine-userportal-3.5.2-1.el6.noarch
 ovirt-engine-3.5.2-1.el6.noarch
 ovirt-host-deploy-offline-1.3.1-1.el6.x86_64
 ovirt-engine-setup-plugin-websocket-proxy-3.5.2-1.el6.noarch
 ovirt-engine-websocket-proxy-3.5.2-1.el6.noarch
 ovirt-iso-uploader-3.5.2-1.el6.noarch
 ovirt-engine-extensions-api-impl-3.5.2-1.el6.noarch
 ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
 ovirt-guest-tools-iso-3.5-7.noarch
 ovirt-image-uploader-3.5.1-1.el6.noarch
 ovirt-engine-lib-3.5.2-1.el6.noarch
 ovirt-engine-setup-base-3.5.2-1.el6.noarch
 ovirt-release35-003-1.noarch
 ovirt-engine-tools-3.5.2-1.el6.noarch
 ovirt-engine-webadmin-portal-3.5.2-1.el6.noarch
 ovirt-host-deploy-java-1.3.1-1.el6.noarch

  Logs:

  2015-04-30 19:50:24,578 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-6)
 [4cbab1c6] Lock Acquired to object EngineLock [exclusiveLocks= key:
 f2afb8c7-6b1d-4822-a43a-0889178de719 value: VM
 , sharedLocks= ]
 2015-04-30 19:50:24,689 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-6)
 [4cbab1c6] Running command: RemoveSnapshotCommand internal: false. Entities
 affected :  ID: f2afb8c7-6b1d-4822-a43a-0889178de719 Type: VMAction group
 MANIPULATE_VM_SNAPSHOTS with role type USER
 2015-04-30 19:50:24,723 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-6)
 [4cbab1c6] Lock freed to object EngineLock [exclusiveLocks= key:
 f2afb8c7-6b1d-4822-a43a-0889178de719 value: VM
 , sharedLocks= ]
 2015-04-30 19:50:24,762 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand]
 (pool-7-thread-7) [598dbb1e] Running command:
 RemoveSnapshotSingleDiskLiveCommand internal: true. Entities affected :
  ID: ---- Type: Storage
 2015-04-30 19:50:24,771 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand]
 (pool-7-thread-8) [5c8285eb] Running command:
 RemoveSnapshotSingleDiskLiveCommand internal: true. Entities affected :
  ID: ---- Type: Storage
 2015-04-30 19:50:24,805 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-6) Correlation ID: 4cbab1c6, Job ID:
 b0057939-d15a-4da7-b782-c835127a1413, Call Stack: null, Custom Event ID:
 -1, Message: Snapshot 'test_snap1' deletion for VM '3.5.2.wintest1' was
 initiated by patrick_russ...@corp.volusion.com.
 2015-04-30 19:50:29,642 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand]
 (DefaultQuartzScheduler_Worker-13) [5c8285eb] Executing Live Merge command
 step MERGE
 2015-04-30 19:50:29,665 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotCommandCallback]
 (DefaultQuartzScheduler_Worker-13) Waiting on Live Merge child commands to
 complete
 2015-04-30 19:50:29,671 INFO  [org.ovirt.engine.core.bll.MergeCommand]
 (pool-7-thread-9) [e10f6a3] Running command: MergeCommand internal: true.
 Entities affected :  ID: 58ef50e1-2efb-4081-a3ca-493c2ee7e7b6 Type: Storage
 2015-04-30 19:50:29,679 INFO
  [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand]
 (DefaultQuartzScheduler_Worker-13) [598dbb1e] Executing Live Merge command
 step MERGE
 2015-04-30 19:50:29,697 INFO  [org.ovirt.engine.core.bll.MergeCommand]
 (pool-7-thread-10) [4898b008] Running command: MergeCommand internal: true.
 Entities affected :  ID: 58ef50e1-2efb-4081-a3ca-493c2ee7e7b6 Type: Storage
 2015-04-30 19:50:29,703 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
 (pool-7-thread-9) [e10f6a3] START, MergeVDSCommand(HostName =
 aus02gdkvm01.dev.volusion.com, MergeVDSCommandParameters{HostId =
 cfc227ef-c9ab-4c35-aa22-138c169f2ba6,
 vmId=f2afb8c7-6b1d-4822-a43a-0889178de719,
 storagePoolId=edca3dc6-d9ca-410f-af84-9e24d49df3e0,
 storageDomainId=58ef50e1-2efb-4081-a3ca-493c2ee7e7b6,
 imageGroupId=bb8359c6-1dba-45e2-97cb-929d1c83a7c8,
 imageId=87f26ef2-6a84-4f7c-9771-be76650c703c,
 baseImageId=3f90ced5-58ea-43cc-955a-559e8d89fbb2,
 topImageId=87f26ef2-6a84-4f7c-9771-be76650c703c, bandwidth=0}), log id:
 5859dc0
 2015-04-30 19:50:29,728 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
 (pool-7-thread-10) [4898b008] START, MergeVDSCommand(HostName =
 

Re: [ovirt-users] oVirt HA.

2015-04-30 Thread Sven Kieske


On 29/04/15 21:53, Dan Yasny wrote:
 There is always room for improvement, but think about it: ever since 
 SolidICE, there has been a demand to minimize the amount of hardware used in 
 a minimalistic setup, thus the hosted engine project. And now that we have 
 it, all of a sudden, we need to provide a way to make multiple engines work 
 in active/passive mode? If that capability is provided, I'm sure a new demand 
 will arise, asking for active/active engines, infinitely scalable, and so on.

of course you want active/active clusters for an enterprise product,
rather sooner than later
 
 The question really is, where the line is drawn. The engine downtime can be a 
 few minutes, it's not that critical in setups of hundreds of hosts. oVirt's 
 raison d'etre is to make VMs run, everything else is just plumbing around 
 that.
I disagree:

ovirt is a provider of critical infrastructure
(vms and their management) for modern it business.

imagine a large organisation just using ovirt for their virtualization,
with lots of different departments which at will can spawn their own
vms, maybe even from different countries with different time zones (just
like red hat ;) ).

of course, if just the engine service is down for some reason and you
can just restart it with an outage of some seconds, or maybe a minute -
fine.

but everything above a minute could become critical for large orgs
relying on the ability to spawn vms at any given time.

or imagine critical HA vms running on ovirt:
you can't migrate them, when the engine is not running.
you might not even want a downtime of a single second for them, that's
why you implemented things like live migration in the first place.

the bottom line is:
if you manage critical infrastructure, the tools to manage
this infrastructure have to be as reliable as the infrastructure itself.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


[ovirt-users] 3.5.2 live merge

2015-04-30 Thread Patrick Russell
Hi everyone,

We’re not seeing live merge working as of the 3.5.2 update. We’ve tested using 
fibre channel and NFS-attached storage; both throw the same error code. Are 
other people seeing success with live merge after the update?

Here’s the environment:

Engine running on CentOS 6 x64, updated to 3.5.2 via yum update (standalone 
physical box, dual-socket hex-core + hyperthreading, 16GB memory)

# rpm -qa |grep ovirt
ovirt-engine-cli-3.5.0.5-1.el6.noarch
ovirt-engine-3.5.1.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.2-1.el6.noarch
ovirt-engine-setup-plugin-allinone-3.5.2-1.el6.noarch
ovirt-engine-setup-3.5.2-1.el6.noarch
ovirt-guest-tools-3.5.0-0.5.master.noarch
ovirt-host-deploy-1.3.1-1.el6.noarch
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.2-1.el6.noarch
ovirt-engine-backend-3.5.1.1-1.el6.noarch
ovirt-engine-userportal-3.5.1.1-1.el6.noarch
ovirt-engine-dbscripts-3.5.1.1-1.el6.noarch
ovirt-engine-tools-3.5.1.1-1.el6.noarch
ovirt-host-deploy-offline-1.3.1-1.el6.x86_64
ovirt-engine-setup-plugin-websocket-proxy-3.5.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-extensions-api-impl-3.5.2-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-webadmin-portal-3.5.1.1-1.el6.noarch
ovirt-engine-restapi-3.5.1.1-1.el6.noarch
ovirt-guest-tools-iso-3.5-7.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-lib-3.5.2-1.el6.noarch
ovirt-engine-setup-base-3.5.2-1.el6.noarch
ovirt-release35-003-1.noarch
ovirt-host-deploy-java-1.3.1-1.el6.noarch

Hypervisors are running ovirt-node, upgraded from ISO : 
http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-node/el7-3.5.2/ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso


Here’s a snippet from the logs:

2015-04-29 18:47:16,947 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-2) 
[48eb0b1d] FINISH, MergeVDSCommand, log id: 5121ecc9
2015-04-29 18:47:16,947 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-2) [48eb0b1d] Command org.ovirt.engine.core.bll.MergeCommand 
throw Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge 
failed, code = 52 (Failed with error mergeErr and code 52)
2015-04-29 18:47:16,954 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-2) [48eb0b1d] Transaction rolled-back for command: 
org.ovirt.engine.core.bll.MergeCommand.
2015-04-29 18:47:16,981 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) 
[5495bde7] Failed in MergeVDS method
2015-04-29 18:47:16,982 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) 
[5495bde7] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand 
return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=52, mMessage=Merge 
failed]]

