Re: [ovirt-users] How to change a pool's template

2014-06-11 Thread Omer Frenkel


- Original Message -
> From: "John Xue" 
> To: users@ovirt.org
> Sent: Thursday, June 12, 2014 7:03:22 AM
> Subject: [ovirt-users]  How to change a pool's template
> 
> Hello
> 
> I'm new to oVirt 3.3.5 and I'm trying to change a pool's template. When
> I edit the pool, the option "Based on Template" is grayed out. How can I
> do that? I don't want to delete the pool and rebuild it again.
> 

You cannot change a pool's template, because that would mean replacing the 
disks of all the VMs in the pool, which is basically the same as removing 
the pool and creating a new one.

However, you can do something similar in 3.4, using template versions: 
when you create the pool in 3.4, you can select a template and LATEST as 
the version. Then, when you create a new template, you can make it a 
version of the original template, and the pool will update to that version.

You can read about it here:
http://www.ovirt.org/Features/Template_Versions
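
If you want to script it, the feature page also describes a REST flow. 
A rough sketch (element names follow the feature page; the IDs, names and 
engine address are placeholders, so verify against your engine's API):

  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -d '<template>
             <name>mytemplate</name>
             <vm id="VM_ID"/>
             <version>
               <base_template id="BASE_TEMPLATE_ID"/>
               <version_name>v2</version_name>
             </version>
           </template>' \
       'https://engine.example.com/api/templates'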

> Thank you!
> 
> --
> Regards,
> John Xue
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] igmp snooping and bridging

2014-06-11 Thread Nicolas Ecarnot

Hi,

I know this may sound only loosely related to oVirt, but here is what 
happened here:
We have two oVirt datacenters in 3.4.1 (and 3.4 compat. level), and 
amongst many other things, we have a ctdb cluster composed of two nodes, 
each of them being in a different datacenter.


Some days ago, I raised the compat. level from 3.3 to 3.4 and upgraded the 
hosts of datacenter#1 from CentOS 6.4 to 6.5 (datacenter#2 was already 
6.5), and our two (VM) ctdb nodes (CentOS 6.5) started constantly falling 
into cluster partition.
Almost 3 days of googling led me to a point that seems to have been known 
for some time: multicast on bridged interfaces is only more or less 
supported, or randomly supported.


See :
http://lists.corosync.org/pipermail/discuss/2012-November/002208.html

On the hosts concerned, I tried the advised workaround, and yes, that 
stabilized the situation.
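
For the archives: the workaround from that thread boils down to disabling 
IGMP snooping on the Linux bridge that carries the cluster traffic. A 
sketch, assuming a bridge named ovirtmgmt (the setting does not survive a 
reboot, so re-apply it at boot time):

  # disable IGMP snooping on the bridge
  echo 0 > /sys/class/net/ovirtmgmt/bridge/multicast_snooping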


I understand this is not directly oVirt/RHEV related, but I really don't 
get why this ctdb cluster worked for months and stopped only recently 
(I'll have to dig deeply into the release notes of the upgraded packages 
and try to find something useful).


I post this here to:
- ask if some of you are also running clusters among VMs (not 
necessarily across datacenters - VM traffic between hosts may also 
be an issue)

- leave a trace in case it may help debug some setups

Regards,

--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to change a pool's template

2014-06-11 Thread John Xue
Hello

I'm new to oVirt 3.3.5 and I'm trying to change a pool's template. When
I edit the pool, the option "Based on Template" is grayed out. How can I
do that? I don't want to delete the pool and rebuild it again.

Thank you!

-- 
Regards,
John Xue
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] fail to shutdown ubuntu guest

2014-06-11 Thread John Xue
Hello,

I'm trying to gracefully shut down my Ubuntu guest from the User or Admin
Portal, but it always fails. I can see a shutdown option dialog in the
guest, so I think some shutdown parameters may be missing in the oVirt
guest agent for Ubuntu 14.04.

Thanks

-- 
Regards,
John Xue
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] localdomain

2014-06-11 Thread Andrew Lau
The cloud-init integration was a little flaky when I was using it,
so I ended up not using any of the built-in oVirt options (e.g. hostname,
root password). The root password option never worked for me, as it would
force a reset on first login, defeating the purpose.
Just passing a full cloud-init config into the bottom section worked
for me, so in your case just define the hostname there instead.
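
A minimal standalone cloud-config sketch for that section (hostname and 
domain are placeholders; these are standard cloud-init directives):

  #cloud-config
  hostname: myvm
  fqdn: myvm.example.com
  manage_etc_hosts: true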


On Tue, May 27, 2014 at 9:33 PM, Koen Vanoppen  wrote:
> Hi Guys,
>
> It's been a while :-). Luckily :-).
>
> I have a quick question. Is there a way to change the default .localdomain
> for the FQDN in oVirt?
> It would be handy if we just had to fill in the hostname of our VM (we are
> using 3.4, with the cloud-init feature) and it automatically added our
> domain instead of .localdomain.
>
> Kind regards,
>
> Koen
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Engine HA?

2014-06-11 Thread Itamar Heim

On 06/11/2014 04:16 AM, kevint...@umac.mo wrote:

Dear all,

I know oVirt VMs can be made HA within a cluster, and all VMs can migrate 
between all operational nodes.

Since the main oVirt Engine is standalone, even the self-hosted engine is 
still a single Engine.

I want to ensure my Engine can be HA; how should I do that? Do I need to 
create a Linux cluster first?



hosted-engine has a built-in HA mechanism for the engine, so if the host 
it's running on has an issue, another host will launch the hosted engine.
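
You can check the HA state from any host in the hosted-engine setup, for 
example (standard hosted-engine tooling; the output shows each host's 
score and which one currently runs the engine VM):

  hosted-engine --vm-status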


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] problem engine-manage-domains add ldap domain

2014-06-11 Thread lucas castro
I'm trying to add an LDAP domain to ovirt-engine,
but I'm having problems with it.

I've attached three files: the engine-manage-domains log,
the krb5 config generated for testing,
and a tcpdump of port 53 from my DNS server.

Can anybody help me find out what is happening?
-- 
contatos:
Celular: ( 99 ) 9143-5954 - Vivo
skype: lucasd3castro
msn: lucascastrobor...@hotmail.com
15:37:19.818449 IP (tos 0x0, ttl 64, id 59985, offset 0, flags [DF], proto UDP 
(17), length 57)
172.0.0.10.58058 > 172.0.0.254.53: 29364+ A? example.com. (29)
15:37:19.818535 IP (tos 0x0, ttl 64, id 59986, offset 0, flags [DF], proto UDP 
(17), length 57)
172.0.0.10.58058 > 172.0.0.254.53: 4753+ ? example.com. (29)
15:37:19.819683 IP (tos 0x0, ttl 64, id 11084, offset 0, flags [none], proto 
UDP (17), length 107)
172.0.0.254.53 > 172.0.0.10.58058: 29364* 1/1/1 example.com. A 
192.168.20.21 (79)
15:37:19.820157 IP (tos 0x0, ttl 64, id 11085, offset 0, flags [none], proto 
UDP (17), length 108)
172.0.0.254.53 > 172.0.0.10.58058: 4753* 0/1/0 (80)
15:37:19.821217 IP (tos 0x0, ttl 64, id 59987, offset 0, flags [DF], proto UDP 
(17), length 72)
172.0.0.10.43610 > 172.0.0.254.53: 35399+ PTR? 21.20.168.192.in-addr.arpa. 
(44)
15:37:19.821708 IP (tos 0x0, ttl 64, id 11086, offset 0, flags [none], proto 
UDP (17), length 173)
172.0.0.254.53 > 172.0.0.10.43610: 35399* 1/2/2 21.20.168.192.in-addr.arpa. 
PTR example.com. (145)
15:37:19.856761 IP (tos 0x0, ttl 64, id 59988, offset 0, flags [DF], proto UDP 
(17), length 72)
172.0.0.10.58127 > 172.0.0.254.53: 40570+ SRV? _kerberos._tcp.example.com. 
(44)
15:37:19.857789 IP (tos 0x0, ttl 64, id 11087, offset 0, flags [none], proto 
UDP (17), length 153)
172.0.0.254.53 > 172.0.0.10.58127: 40570* 1/1/2 _kerberos._tcp.example.com. 
SRV example.com.:88 0 100 (125)

2014-06-11 15:32:50,900 INFO  [org.ovirt.engine.core.domains.ManageDomains] Creating kerberos configuration for domain(s): example.com
2014-06-11 15:32:50,935 INFO  [org.ovirt.engine.core.domains.ManageDomains] Successfully created kerberos configuration for domain(s): example.com
2014-06-11 15:32:50,936 INFO  [org.ovirt.engine.core.domains.ManageDomains] Testing kerberos configuration for domain: example.com
2014-06-11 15:32:51,091 ERROR [org.ovirt.engine.core.utils.kerberos.KerberosConfigCheck] Error:  exception message: Conexão recusada
2014-06-11 15:32:51,095 ERROR [org.ovirt.engine.core.domains.ManageDomains] Failure while testing domain example.com. Details: Kerberos error. Please check log for further details.
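
The last log line above is a "Connection refused" ("Conexão recusada" is 
Portuguese) while testing the Kerberos configuration, so a first thing to 
check is whether the KDC port from the SRV record is reachable at all, 
e.g.:

  # the SRV lookup above resolved _kerberos._tcp.example.com to example.com:88
  nc -v example.com 88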



krb5.conf.manage_domains_utility
Description: Binary data
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Weekly Meeting Minutes -- 2014-06-11

2014-06-11 Thread Doron Fediuck
Minutes:http://ovirt.org/meetings/ovirt/2014/ovirt.2014-06-11-14.05.html
Minutes (text): http://ovirt.org/meetings/ovirt/2014/ovirt.2014-06-11-14.05.txt
Log:
http://ovirt.org/meetings/ovirt/2014/ovirt.2014-06-11-14.05.log.html

=
#ovirt: oVirt Weekly Sync
=


Meeting started by doron at 14:05:00 UTC. The full logs are available at
http://ovirt.org/meetings/ovirt/2014/ovirt.2014-06-11-14.05.log.html .



Meeting summary
---
* Agenda and roll Call  (doron, 14:05:01)
  * infra update  (doron, 14:05:03)
  * 3.4.z updates  (doron, 14:05:04)
  * 3.5 status  (doron, 14:05:06)
  * conferences and workshops  (doron, 14:05:07)
  * other topics  (doron, 14:05:09)

* infra update  (doron, 14:05:32)
  * infra updates: we had a major meltdown of the gerrit server
yesterday; we are trying to manage the load while investigating
the causes  (doron, 14:07:19)
  * old os1 slaves soon to be replaced with the new ones. We do not
expect interruption.  (doron, 14:09:02)

* 3.4.z updates  (doron, 14:09:31)
  * updated timeline for 3.4.z: 3.4.3 RC in July 10 and GA in July 17
(doron, 14:20:20)

* 3.5 status  (doron, 14:21:15)
  * gluster 3.5 updates: 2 features merged. Last one pending a patch
review in vdsm. Good chances for making it on time.  (doron,
14:27:27)
  * infra 3.5 updates; most features are in. a few are being verified
and some will need further stabilization.  (doron, 14:29:20)
  * infra REST 3.5 status: 1069204 RFE postponed.  (doron, 14:33:12)
  * integration 3.5 updates: most features are in. Separate installation
for dwh / reports / web proxy is in final stages and expected to
make it.  (doron, 14:38:28)
  * network 3.5 status: all set. several features postponed to the next
version.  (doron, 14:40:02)
  * network 3.5 status: all set. 2 features postponed to the next
version.  (doron, 14:40:43)
  * node 3.5 status: 2 features are in. ovirt virt appliance should be
in within the next day or so.  (doron, 14:42:42)
  * sla 3.5 updates: 3 features merged. NUMA and QoS mostly there. We
may need heavy stabilization post feature freeze for it. 1 feature
postponed.  (doron, 14:46:07)
  * storage 3.5 updates: most features are in. Exceptions: live merge -
engine side, aiming for eo. attach/detach - basic functionality
in, the rest should be by eow. exp/imp will be postponed.  (doron,
14:55:44)
  * UX 3.5 updates: none. all set for 3.5.  (doron, 14:57:00)
  * LINK: http://gerrit.ovirt.org/#/c/26917/ would be nice to get in to
close line 72 (again, very small feature, not worth waiting)
(mskrivanek, 15:00:58)
  * virt 3.5 status: most features are in. virtio-rng should make it.
others may slip.  (doron, 15:11:05)
  * ACTION: sbonazzo: starting next week we should begin tracking 3.5
blockers.  (doron, 15:15:54)

* conferences and workshops  (doron, 15:16:04)
  * oVirt workshop to be co-located with KVM Forum October 14 - 16, 2014
(doron, 15:16:37)
  * We may be setting up a workshop/event in China sometime in late
summer, based on feedback from China users group. OSCON is next up
on July 20-24, 2014  (doron, 15:17:12)

* other topics  (doron, 15:17:29)

Re: [ovirt-users] ISO

2014-06-11 Thread Maor Lipchuk
Hi Moritz,

Is your DC up? Do you have an SPM host in it?

Regards,
Maor


On 06/11/2014 05:13 PM, Moritz Mlynarek wrote:
> 
> Hello,
> 
> after installing oVirt 3.4 I tried to add an ISO file to my oVirt
> system, but there was no local-iso-share.
> 
> The command "engine-iso-uploader list" told me "ERROR: There are no ISO
> storage domains". I tried to create a new domain (ISO / Local on Host)
> but I was not able to choose a HOST. 
> 
> Sorry for my poor english. :(
> 
> Moritz Mlynarek
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ISO

2014-06-11 Thread Gabi C
Did you first add an ISO domain?
On 11.06.2014 17:13, "Moritz Mlynarek" wrote:

>
> Hello,
>
> after installing oVirt 3.4 I tried to add an ISO file to my oVirt system,
> but there was no local-iso-share.
>
> The command "engine-iso-uploader list" told me "ERROR: There are no ISO
> storage domains". I tried to create a new domain (ISO / Local on Host) but
> I was not able to choose a HOST.
>
> Sorry for my poor english. :(
>
> Moritz Mlynarek
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ISO

2014-06-11 Thread Moritz Mlynarek
Hello,

after installing oVirt 3.4 I tried to add an ISO file to my oVirt system,
but there was no local-iso-share.

The command "engine-iso-uploader list" told me "ERROR: There are no ISO
storage domains". I tried to create a new domain (ISO / Local on Host) but
I was not able to choose a HOST.
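
For reference, once the Data Center is up, a host is active, and an ISO 
domain is attached, the upload flow should be something like this (the 
domain and file names here are placeholders):

  engine-iso-uploader list
  engine-iso-uploader upload -i ISO_DOMAIN CentOS-6.5-x86_64.iso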

Sorry for my poor english. :(

Moritz Mlynarek
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ISO_DOMAIN can't be attached

2014-06-11 Thread Peter Haraldson

I am using the web GUI; the documentation doesn't mention any CLI commands.

As the ISO_DOMAIN actually resides on the /engine/, not on the node, 
there is no vdsm log there. Also, the command 'vdsClient' does not exist.

/Peter

On 2014-06-11 14:46, Dafna Ron wrote:
Can you please also attach the vdsm log and can you please tell me 
what command you are using to add the domain?

and... please run vdsClient -s 0 getStorageDomainsList 

Thanks,
Dafna


On 06/11/2014 03:08 PM, Peter Haraldson wrote:

Hi
I have one Engine, one node.
Fresh install of oVirt 3.4 on new CentOS 6.5.
Engine holds ISO_DOMAIN, created automatically during setup.
The ISO_DOMAIN exists under Default -> Storage. When trying to attach 
it, it is visible in the web interface for a few seconds as "locked", 
then disappears.

Log says:
ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] 
(org.ovirt.thread.pool-6-thread-24) [5a5288b3] The connection with 
details ctmgm0.certitrade.net:/ct/isos failed because of error code 
477 and error message is: problem while trying to mount target 
(thanks for that error message... )

And a bit down:
ERROR 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-27) [43f2f913] Command 
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand 
throw Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: 
IRSGenericException: IRSErrorException: Failed to 
AttachStorageDomainVDS, error = Storage domain does not exist: 
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358 (Failed with 
error StorageDomainDoesNotExist and code 358)


I can, however, mount the ISO_DOMAIN manually; it has the correct 
permissions and a directory structure obviously created by oVirt (see 
below).

There is no problem adding NFS storage from the node.
iptables is off, there is no firewalld, and SELinux is permissive.

ISO_DOMAIN mounted on engine ("mount -t nfs 172.19.1.10:/ct/isos/ 
/mnt/tmp/")

ls -l /mnt/
totalt 4
drwxr-xr-x. 3 vdsm kvm 4096 10 jun 19.18 tmp

ls -l /mnt/tmp/0f6485ab-0301-4989-a59a-56efcd447ba0/images/
totalt 4
drwxr-xr-x. 2 vdsm kvm 4096 10 jun 19.19 
----


I have read many posts about this problem; it seems to be a common one, 
but I have found no solution.


Complete log from one try:
2014-06-11 15:48:23,513 INFO 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Running command: 
AttachStorageDomainToPoolCommand internal: false. Entities affected 
:  ID: 0f6485ab-0301-4989-a59a-56efcd447ba0 Type: Storage
2014-06-11 15:48:23,525 INFO 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Running command: 
ConnectStorageToVdsCommand internal: true. Entities affected : ID: 
aaa0----123456789aaa Type: System
2014-06-11 15:48:23,533 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] START, 
ConnectStorageServerVDSCommand(HostName = ExtTest, HostId = 
1947b88f-b02e-4acf-bb30-1d92de626b45, storagePoolId = 
----, storageType = NFS, 
connectionList = [{ id: 53c268c3-5b04-4d42-bfa3-58d31c982a5d, 
connection: ctmgm0.certitrade.net:/ct/isos, iqn: null, vfsType: null, 
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: 
null };]), log id: 5107b953
2014-06-11 15:48:23,803 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-23) 
[795f995f] Correlation ID: null, Call Stack: null, Custom Event ID: 
-1, Message: Failed to connect Host ExtTest to the Storage Domains 
ISO_DOMAIN.
2014-06-11 15:48:23,805 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] FINISH, 
ConnectStorageServerVDSCommand, return: 
{53c268c3-5b04-4d42-bfa3-58d31c982a5d=477}, log id: 5107b953
2014-06-11 15:48:23,808 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-23) 
[795f995f] Correlation ID: null, Call Stack: null, Custom Event ID: 
-1, Message: The error message for connection 
ctmgm0.certitrade.net:/ct/isos returned by VDSM was: Problem while 
trying to mount target
2014-06-11 15:48:23,810 ERROR 
[org.ovirt.engine.core.bll.storage.NFSStorageHelper] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] The connection with 
details ctmgm0.certitrade.net:/ct/isos failed because of error code 
477 and error message is: problem while trying to mount target
2014-06-11 15:48:23,814 ERROR 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Transaction 
rolled-back for command: 
org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2014-06-11 15:48:23,816 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org

Re: [ovirt-users] ISO_DOMAIN can't be attached

2014-06-11 Thread Dafna Ron
Can you please also attach the vdsm log and can you please tell me what 
command you are using to add the domain?

and... please run vdsClient -s 0 getStorageDomainsList 

Thanks,
Dafna


On 06/11/2014 03:08 PM, Peter Haraldson wrote:

Hi
I have one Engine, one node.
Fresh install of oVirt 3.4 on new CentOS 6.5.
Engine holds ISO_DOMAIN, created automatically during setup.
The ISO_DOMAIN exists under Default -> Storage. When trying to attach 
it, it is visible in the web interface for a few seconds as "locked", 
then disappears.

Log says:
ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] 
(org.ovirt.thread.pool-6-thread-24) [5a5288b3] The connection with 
details ctmgm0.certitrade.net:/ct/isos failed because of error code 
477 and error message is: problem while trying to mount target (thanks 
for that error message... )

And a bit down:
ERROR 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-27) [43f2f913] Command 
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand 
throw Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: 
IRSGenericException: IRSErrorException: Failed to 
AttachStorageDomainVDS, error = Storage domain does not exist: 
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358 (Failed with 
error StorageDomainDoesNotExist and code 358)


I can, however, mount the ISO_DOMAIN manually; it has the correct 
permissions and a directory structure obviously created by oVirt (see 
below).

There is no problem adding NFS storage from the node.
iptables is off, there is no firewalld, and SELinux is permissive.

ISO_DOMAIN mounted on engine ("mount -t nfs 172.19.1.10:/ct/isos/ 
/mnt/tmp/")

ls -l /mnt/
totalt 4
drwxr-xr-x. 3 vdsm kvm 4096 10 jun 19.18 tmp

ls -l /mnt/tmp/0f6485ab-0301-4989-a59a-56efcd447ba0/images/
totalt 4
drwxr-xr-x. 2 vdsm kvm 4096 10 jun 19.19 
----


I have read many posts about this problem; it seems to be a common one, 
but I have found no solution.


Complete log from one try:
2014-06-11 15:48:23,513 INFO 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Running command: 
AttachStorageDomainToPoolCommand internal: false. Entities affected :  
ID: 0f6485ab-0301-4989-a59a-56efcd447ba0 Type: Storage
2014-06-11 15:48:23,525 INFO 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Running command: 
ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
aaa0----123456789aaa Type: System
2014-06-11 15:48:23,533 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] START, 
ConnectStorageServerVDSCommand(HostName = ExtTest, HostId = 
1947b88f-b02e-4acf-bb30-1d92de626b45, storagePoolId = 
----, storageType = NFS, 
connectionList = [{ id: 53c268c3-5b04-4d42-bfa3-58d31c982a5d, 
connection: ctmgm0.certitrade.net:/ct/isos, iqn: null, vfsType: null, 
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
};]), log id: 5107b953
2014-06-11 15:48:23,803 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Correlation ID: null, 
Call Stack: null, Custom Event ID: -1, Message: Failed to connect Host 
ExtTest to the Storage Domains ISO_DOMAIN.
2014-06-11 15:48:23,805 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] FINISH, 
ConnectStorageServerVDSCommand, return: 
{53c268c3-5b04-4d42-bfa3-58d31c982a5d=477}, log id: 5107b953
2014-06-11 15:48:23,808 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Correlation ID: null, 
Call Stack: null, Custom Event ID: -1, Message: The error message for 
connection ctmgm0.certitrade.net:/ct/isos returned by VDSM was: 
Problem while trying to mount target
2014-06-11 15:48:23,810 ERROR 
[org.ovirt.engine.core.bll.storage.NFSStorageHelper] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] The connection with 
details ctmgm0.certitrade.net:/ct/isos failed because of error code 
477 and error message is: problem while trying to mount target
2014-06-11 15:48:23,814 ERROR 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Transaction rolled-back 
for command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2014-06-11 15:48:23,816 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] START, 
AttachStorageDomainVDSCommand( storagePoolId = 
0002-0002-0002-0002-011b, ignoreFailoverLimit = false, 
storageDomainId = 0f6485ab-0301-4989-a59a-56efcd447ba0), log id: a2cca1b
2014-06-11 15:4

Re: [ovirt-users] SLA : RAM scheduling

2014-06-11 Thread noc

On 26-5-2014 16:22, Gilad Chaplik wrote:

Hi Nathanaël,

Happy to assist :) I hope it will work on the first run:

1) Install the proxy and ovirtsdk.
2) Put the attached file in the right place (according to the docs: 
".../plugins"), and make sure to edit the file with your oVirt's IP, 
user@domain and password.
3) Restart the proxy service.
4) Use the config tool to configure ovirt-engine (see the sketch after 
this list):
* "ExternalSchedulerServiceURL"="http://:18781/"
* "ExternalSchedulerEnabled"=true
5) Restart the ovirt-engine service.
6) Under configure->cluster_policy, see that the weight function 
memory_even_distribution was added (should be in manage policy units or 
something - you will see it in the main dialog as well).
7) Clone/copy the cluster's current cluster policy (probably none - prefer 
it to have no balancing modules, to avoid conflicts), name it 'your_name' 
and attach the memory_even_distribution weight (you can leave it as the 
only weight module in the weight section, to avoid configuring factors).
8) Replace the cluster's cluster policy with the newly created one.

Try it out and let me know how it goes :-)
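
For steps 4-5 above, the config-tool invocation is roughly the following 
(a sketch assuming the proxy runs on the engine host itself; adjust the 
URL if it runs elsewhere):

  engine-config -s ExternalSchedulerServiceURL="http://127.0.0.1:18781/"
  engine-config -s ExternalSchedulerEnabled=true
  service ovirt-engine restart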



Ok, progress of some sort :-)

I added the weight function to the cluster, and when I replace my DNS 
name with localhost in ExternalSchedulerServiceURL, engine.log shows 
that it can contact the scheduler. I expected a rebalance but nothing 
happened. Stopping and starting a VM does provoke a reaction: an error :-(


From the scheduler.log I see that the engine contacts it and pushes some 
information; the log also shows that some information is returned, and 
then there is a big error message in the engine log.


Joop

2014-06-11 14:19:03,647 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler_Worker-79) FINISH, GlusterVolumesListVDSCommand, 
return: 
{955b86a9-10b4-463b-8555-3c321bd72f5c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@37074874,
 
0bf7869b-873b-4c28-9c09-d31100f4e12a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@99f2d266,
 
248aa48d-6aa5-4b21-867e-994265a3f145=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@12074ee1},
 log id: 6dca802f
2014-06-11 14:19:08,755 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler_Worker-9) [5d65b296] START, 
GlusterVolumesListVDSCommand(HostName = st02, HostId = 
5077a01d-7273-4d58-92ee-c3315b0a973e), log id: 3af08062
2014-06-11 14:19:08,870 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler_Worker-9) [5d65b296] FINISH, 
GlusterVolumesListVDSCommand, return: 
{955b86a9-10b4-463b-8555-3c321bd72f5c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a301dc77,
 
0bf7869b-873b-4c28-9c09-d31100f4e12a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@2cf62fb2,
 
248aa48d-6aa5-4b21-867e-994265a3f145=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@b2cabf68},
 log id: 3af08062
2014-06-11 14:19:13,977 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler_Worker-19) START, GlusterVolumesListVDSCommand(HostName 
= st02, HostId = 5077a01d-7273-4d58-92ee-c3315b0a973e), log id: 32bca729
2014-06-11 14:19:14,093 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler_Worker-19) FINISH, GlusterVolumesListVDSCommand, 
return: 
{955b86a9-10b4-463b-8555-3c321bd72f5c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a57e1af5,
 
0bf7869b-873b-4c28-9c09-d31100f4e12a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ca2565b6,
 
248aa48d-6aa5-4b21-867e-994265a3f145=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@c4b83856},
 log id: 32bca729
2014-06-11 14:19:16,017 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(ajp--127.0.0.1-8702-6) [67888f4b] START, IsVmDuringInitiatingVDSCommand( vmId 
= ab505d9b-1811-4153-8d84-3efc7d878898), log id: 4a96a08
2014-06-11 14:19:16,018 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(ajp--127.0.0.1-8702-6) [67888f4b] FINISH, IsVmDuringInitiatingVDSCommand, 
return: false, log id: 4a96a08
2014-06-11 14:19:16,088 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] 
(ajp--127.0.0.1-8702-6) [67888f4b] Running command: RunVmOnceCommand internal: 
false. Entities affected :  ID: ab505d9b-1811-4153-8d84-3efc7d878898 Type: VM
2014-06-11 14:19:17,091 INFO  [org.ovirt.engine.core.bll.LoginUserCommand] 
(ajp--127.0.0.1-8702-2) Running command: LoginUserCommand internal: false.
2014-06-11 14:19:17,096 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-2) Correlation ID: null, Call Stack: null, Custom Event 
ID: -1, Message: User admin logged in.
2014-06-11 14:19:18,919 ERROR 
[org.ovirt.engine.core.bll.scheduling.external.ExternalSchedulerBrokerImpl] 
(ajp--127

[ovirt-users] ISO_DOMAIN can't be attached

2014-06-11 Thread Peter Haraldson

Hi
I have one Engine, one node.
Fresh install of oVirt 3.4 on new CentOS 6.5.
Engine holds ISO_DOMAIN, created automatically during setup.
The ISO_DOMAIN exists under Default -> Storage. When trying to attach 
it, it is visible in the web interface for a few seconds as "locked", 
then disappears.

Log says:
ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] 
(org.ovirt.thread.pool-6-thread-24) [5a5288b3] The connection with 
details ctmgm0.certitrade.net:/ct/isos failed because of error code 477 
and error message is: problem while trying to mount target (thanks for 
that error message... )

And a bit down:
ERROR 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-27) [43f2f913] Command 
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw 
Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: 
IRSGenericException: IRSErrorException: Failed to 
AttachStorageDomainVDS, error = Storage domain does not exist: 
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358 (Failed with error 
StorageDomainDoesNotExist and code 358)


I can, however, mount the ISO_DOMAIN manually; it has the correct 
permissions and a directory structure obviously created by oVirt (see 
below).

There is no problem adding NFS storage from the node.
iptables is off, there is no firewalld, and SELinux is permissive.

ISO_DOMAIN mounted on engine ("mount -t nfs 172.19.1.10:/ct/isos/ 
/mnt/tmp/")

ls -l /mnt/
totalt 4
drwxr-xr-x. 3 vdsm kvm 4096 10 jun 19.18 tmp

ls -l /mnt/tmp/0f6485ab-0301-4989-a59a-56efcd447ba0/images/
totalt 4
drwxr-xr-x. 2 vdsm kvm 4096 10 jun 19.19 
----


I have read many posts about this problem; it seems to be a common one, 
but I have found no solution.
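
Since the failing mount (error 477 above) is attempted by VDSM on the 
node rather than on the engine, it is also worth verifying the export 
from the node itself; a quick check along these lines (assuming NFSv3, 
which oVirt uses by default):

  showmount -e ctmgm0.certitrade.net
  mount -t nfs -o vers=3 ctmgm0.certitrade.net:/ct/isos /mnt/tmp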


Complete log from one try:
2014-06-11 15:48:23,513 INFO 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Running command: 
AttachStorageDomainToPoolCommand internal: false. Entities affected :  
ID: 0f6485ab-0301-4989-a59a-56efcd447ba0 Type: Storage
2014-06-11 15:48:23,525 INFO 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Running command: 
ConnectStorageToVdsCommand internal: true. Entities affected : ID: 
aaa0----123456789aaa Type: System
2014-06-11 15:48:23,533 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] START, 
ConnectStorageServerVDSCommand(HostName = ExtTest, HostId = 
1947b88f-b02e-4acf-bb30-1d92de626b45, storagePoolId = 
----, storageType = NFS, connectionList 
= [{ id: 53c268c3-5b04-4d42-bfa3-58d31c982a5d, connection: 
ctmgm0.certitrade.net:/ct/isos, iqn: null, vfsType: null, mountOptions: 
null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 
5107b953
2014-06-11 15:48:23,803 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Correlation ID: null, 
Call Stack: null, Custom Event ID: -1, Message: Failed to connect Host 
ExtTest to the Storage Domains ISO_DOMAIN.
2014-06-11 15:48:23,805 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] FINISH, 
ConnectStorageServerVDSCommand, return: 
{53c268c3-5b04-4d42-bfa3-58d31c982a5d=477}, log id: 5107b953
2014-06-11 15:48:23,808 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Correlation ID: null, 
Call Stack: null, Custom Event ID: -1, Message: The error message for 
connection ctmgm0.certitrade.net:/ct/isos returned by VDSM was: Problem 
while trying to mount target
2014-06-11 15:48:23,810 ERROR 
[org.ovirt.engine.core.bll.storage.NFSStorageHelper] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] The connection with 
details ctmgm0.certitrade.net:/ct/isos failed because of error code 477 
and error message is: problem while trying to mount target
2014-06-11 15:48:23,814 ERROR 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-6-thread-23) [795f995f] Transaction rolled-back 
for command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2014-06-11 15:48:23,816 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-28) 
[7b977e41] START, AttachStorageDomainVDSCommand( storagePoolId = 
0002-0002-0002-0002-011b, ignoreFailoverLimit = false, 
storageDomainId = 0f6485ab-0301-4989-a59a-56efcd447ba0), log id: a2cca1b
2014-06-11 15:48:24,515 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-28) 
[7b977e41] Failed in AttachStorageDomainVDS method
2014-06-11 15:48:24,547 ERROR 
[org.ovirt.engine.core.vdsbr

[ovirt-users] Call for Papers Deadline in One Month: Linux.conf.au

2014-06-11 Thread Brian Proffitt
Conference: Linux.conf.au
Information: Each year open source geeks from across the globe gather in 
Australia or New Zealand to meet their fellow technologists, share the latest 
ideas and innovations, and spend a week discussing and collaborating on open 
source projects. The conference is well known for its speakers' and 
delegates' depth of talent, and for its focus on technical Linux content.
Possible topics: Virtualization, oVirt, KVM, libvirt, RDO, OpenStack, Foreman
Date: January 12-15, 2015
Location: Auckland, New Zealand
Website: http://lca2015.linux.org.au/
Call for Papers Deadline: July 13, 2014
Call for Papers URL: http://lca2015.linux.org.au/cfp

Contact me for more information and assistance with presentations.

-- 
Brian Proffitt

oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SLA : RAM scheduling

2014-06-11 Thread noc

On 28-5-2014 17:37, Gilad Chaplik wrote:

make sure to edit the file with your ovirt's ip, user@domain and PW.

My engine API can't be reached over HTTP, so some work is needed to do 
this with HTTPS. Here is what I did: according to
/usr/lib/python2.6/site-packages/ovirtsdk/api.py, I added insecure=True 
to the call:

connection = API(url='https://host:port',
                 username='user@domain',
                 password='',
                 insecure=True)

Maybe it is not enough, and it would be useful to add
validate_cert_chain=False...

Martin?


3) Restart the proxy service.
4) Use the config tool to configure ovirt-engine:
* "ExternalSchedulerServiceURL"="http://:18781/"

The scheduler proxy listens on localhost:18781; none of the IPs that can 
be filled in here will be reached on that port.

You configure ovirt-engine with the IP of the proxy, or I'm missing 
something. oVirt communicates with the proxy, and not the other way around.


I'm following this and found out that ovirt-scheduler-proxy only listens 
on localhost, so you'll need to adjust accordingly.
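
A quick way to confirm on the proxy host what address the proxy is 
actually bound to:

  netstat -tlnp | grep 18781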


Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Feature GlusterVolumeSnapshots not in 3.5?

2014-06-11 Thread Sahina Bose


On 06/11/2014 01:19 PM, Jorick Astrego wrote:

Hi again,

After reading up on all the backup possibilities for our oVirt cluster 
(we currently do in-VM backup with our traditional backup software), I came 
across http://www.ovirt.org/Features/GlusterVolumeSnapshots


*Name*: Gluster Volume Snapshot
*Modules*: engine
*Target version*: 3.5
*Status*: Not Started
*Last updated*: 2014-01-21 by Shtripat

It was originally planned for 3.5, but I haven't seen it in the oVirt 
Planning & Tracking document on Google Docs. It looks like a great 
feature; is it still planned for sometime in the future?



Being able to manage gluster volume snapshots is on our to-do list. 
We're currently prioritizing the backlog to see which of the gluster 
features (managing/monitoring geo-replication, quota, snapshots) need 
to be added to the next release.


Meanwhile, you could use the gluster CLI to create a volume snapshot.
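
A sketch of that flow, assuming a GlusterFS build that ships the volume 
snapshot feature (the names here are placeholders):

  # create a point-in-time snapshot of the volume
  gluster snapshot create mysnap myvolume
  # list the snapshots taken for that volume
  gluster snapshot list myvolume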




Kind regards,

Jorick Astrego
Netbulae BV


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM HostedEngie is down. Exist message: internal error Failed to acquire lock error -243

2014-06-11 Thread noc
Just to update everyone: I have the same problem with a 3-host setup and 
have uploaded logs to BZ 1093366.


Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock issues after upgrading to 3.4

2014-06-11 Thread Maor Lipchuk
Hi Jairo,

Can you please open a bug on this at [1]?
Also, can you please attach the sanlock, vdsm and engine logs to the bug?

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

Thanks,
Maor


On 06/07/2014 11:22 PM, Jairo Rizzo wrote:
> Hello, 
> 
> I have a small 2-node cluster setup running GlusterFS in replication mode:
> 
> CentOS v6.5 
> kernel-2.6.32-431.17.1.el6.x86_64
> vdsm-4.14.6-0.el6.x86_64
> ovirt-engine-3.4.0-1.el6.noarch  (on 1 node)
> 
> Basically I had been running ovirt-engine 3.3 fine for months, then
> upgraded to the latest version of 3.3.x two days ago and could not join the
> nodes to the cluster due to a version mismatch, basically this:
> https://www.mail-archive.com/users@ovirt.org/msg17241.html . After
> trying to correct this problem I ended up upgrading to 3.4, which created
> a new and challenging problem for me. Every couple of hours I get error
> messages like this:
> 
> Jun  7 13:40:01 hv1 sanlock[2341]: 2014-06-07 13:40:01-0400 19647
> [2341]: s3 check_our_lease warning 70 last_success 19577
> Jun  7 13:40:02 hv1 sanlock[2341]: 2014-06-07 13:40:02-0400 19648
> [2341]: s3 check_our_lease warning 71 last_success 19577
> Jun  7 13:40:03 hv1 sanlock[2341]: 2014-06-07 13:40:03-0400 19649
> [2341]: s3 check_our_lease warning 72 last_success 19577
> Jun  7 13:40:04 hv1 sanlock[2341]: 2014-06-07 13:40:04-0400 19650
> [2341]: s3 check_our_lease warning 73 last_success 19577
> Jun  7 13:40:05 hv1 sanlock[2341]: 2014-06-07 13:40:05-0400 19651
> [2341]: s3 check_our_lease warning 74 last_success 19577
> Jun  7 13:40:06 hv1 sanlock[2341]: 2014-06-07 13:40:06-0400 19652
> [2341]: s3 check_our_lease warning 75 last_success 19577
> Jun  7 13:40:07 hv1 sanlock[2341]: 2014-06-07 13:40:07-0400 19653
> [2341]: s3 check_our_lease warning 76 last_success 19577
> Jun  7 13:40:08 hv1 sanlock[2341]: 2014-06-07 13:40:08-0400 19654
> [2341]: s3 check_our_lease warning 77 last_success 19577
> Jun  7 13:40:09 hv1 wdmd[2330]: test warning now 19654 ping 19644 close
> 0 renewal 19577 expire 19657 client 2341
> sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
> Jun  7 13:40:09 hv1 wdmd[2330]: /dev/watchdog closed unclean
> Jun  7 13:40:09 hv1 kernel: SoftDog: Unexpected close, not stopping
> watchdog!
> Jun  7 13:40:09 hv1 sanlock[2341]: 2014-06-07 13:40:09-0400 19655
> [2341]: s3 check_our_lease warning 78 last_success 19577
> Jun  7 13:40:10 hv1 wdmd[2330]: test warning now 19655 ping 19644 close
> 19654 renewal 19577 expire 19657 client 2341
> sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
> Jun  7 13:40:10 hv1 sanlock[2341]: 2014-06-07 13:40:10-0400 19656
> [2341]: s3 check_our_lease warning 79 last_success 19577
> Jun  7 13:40:11 hv1 wdmd[2330]: test warning now 19656 ping 19644 close
> 19654 renewal 19577 expire 19657 client 2341
> sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
> Jun  7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657
> [2341]: s3 check_our_lease failed 80
> Jun  7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657
> [2341]: s3 all pids clear
> 
> Jun  7 13:40:11 hv1 wdmd[2330]: /dev/watchdog reopen
> Jun  7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738
> [5050]: s3 delta_renew write error -202
> Jun  7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738
> [5050]: s3 renewal error -202 delta_length 140 last_success 19577
> Jun  7 13:41:42 hv1 sanlock[2341]: 2014-06-07 13:41:42-0400 19748
> [5050]: 1e8615b0 close_task_aio 0 0x7fd3040008c0 busy
> Jun  7 13:41:52 hv1 sanlock[2341]: 2014-06-07 13:41:52-0400 19758
> [5050]: 1e8615b0 close_task_aio 0 0x7fd3040008c0 busy
> 
> This makes one of the nodes unable to see the storage, and all its
> VMs go into pause mode or stop. I am wondering if you could provide some
> advice. Thank you
> 
> --Rizzo
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Engine HA?

2014-06-11 Thread KevinTang
Dear all,

I know oVirt VMs can be made HA within a cluster, and all VMs can migrate 
between all operational nodes.

Since the main oVirt Engine is standalone, even the self-hosted engine is 
still a single Engine.

I want to ensure my Engine can be HA; how should I do that? Do I need to 
create a Linux cluster first?

Thanks


Best Regards,
Kevin Tang

AMSV - STATE KEY LABORATORY OF ANALOG AND MIXED-SIGNAL VLSI
University of Macau
Tel: (+853) 8397-8035

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Feature GlusterVolumeSnapshots not in 3.5?

2014-06-11 Thread Jorick Astrego

Hi again,

After reading up on all the backup possibilities for our oVirt cluster 
(we currently do in-VM backup with our traditional backup software), I came 
across http://www.ovirt.org/Features/GlusterVolumeSnapshots


*Name*: Gluster Volume Snapshot
*Modules*: engine
*Target version*: 3.5
*Status*: Not Started
*Last updated*: 2014-01-21 by Shtripat

It was originally planned for 3.5, but I haven't seen it in the oVirt 
Planning & Tracking document on Google Docs. It looks like a great 
feature; is it still planned for sometime in the future?


Kind regards,

Jorick Astrego
Netbulae BV
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Working but unstable storage domains

2014-06-11 Thread Maor Lipchuk
Hi Paul,

Can you please attach the logs?

Regards,
Maor

On 06/10/2014 09:36 PM, Paul Heinlein wrote:
> 
> I'm running oVirt Engine 3.2.0-2.fc18 (which I know is out of date) on a
> dedicated physical host; we have 12 hosts split between two clusters and
> nine storage domains, all NFS.
> 
> Late last week, a VM that (in the scope of our clusters) consumes a lot of
> resources failed during migration. Since then, the storage domains have,
> from the engine's point of view, been going up and down (though the
> underlying NFS exports are fine). Key symptoms from the oVirt Manager:
> 
>  * two of the storage domains are always marked as having type of
>"Data (Master)" when historically only one was;
> 
>  * the Manager reports "Storage Pool Manager runs on $host" then
>"Sync Error on Master Domain..." then "Reconstruct Master Domain
>...completed" then "Data Center is being initialized" over and
>over and over again.
> 
> The Sync Error messages indicate "$pool is marked as Master in oVirt
> Engine database but not on the Storage side. Please consult with Support
> on how to fix this issue." Note that $pool changes between the various
> domains that get marked as Data (Master).
> 
> Clues, anyone? I'm happy to provide logs (though they're all quite large).
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host Installation fails - Package openstack-neutron cannot be found

2014-06-11 Thread Udaya Kiran P
Hi All,

I am trying to add a new host with an external network provider (Neutron). 
Host installation fails in oVirt Engine with the error below.

Please help me resolve this.

2014-06-11 12:54:34 ERROR otopi.plugins.otopi.packagers.yumpackager 
yumpackager.error:97 Yum Cannot queue package openstack-neutron: Package 
openstack-neutron cannot be found

2014-06-11 12:54:34 DEBUG otopi.context context._executeMethod:152 method 
exception
Traceback (most recent call last):
  File "/tmp/ovirt-YnooVggSrR/pythonlib/otopi/context.py", line 142, in 
_executeMethod
    method['method']()
  File 
"/tmp/ovirt-YnooVggSrR/otopi-plugins/ovirt-host-deploy/openstack/neutron.py", 
line 81, in _packages
    self.packager.installUpdate(('openstack-neutron',))
  File "/tmp/ovirt-YnooVggSrR/pythonlib/otopi/packager.py", line 139, in 
installUpdate
    ignoreErrors=ignoreErrors,
  File "/tmp/ovirt-YnooVggSrR/otopi-plugins/otopi/packagers/yumpackager.py", 
line 295, in install
    ignoreErrors=ignoreErrors
  File "/tmp/ovirt-YnooVggSrR/pythonlib/otopi/miniyum.py", line 865, in install
    **kwargs
  File "/tmp/ovirt-YnooVggSrR/pythonlib/otopi/miniyum.py", line 514, in _queue
    package=package,
RuntimeError: Package openstack-neutron cannot be found
2014-06-11 12:54:34 ERROR otopi.context context._executeMethod:161 Failed to 
execute stage 'Package installation': Package openstack-neutron cannot be found
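
The traceback shows that yum on the new host cannot find the 
openstack-neutron package, so a first check on the host itself is whether 
any configured repository provides it, e.g.:

  yum info openstack-neutron
  # if no repo provides it, enable the repository that ships
  # openstack-neutron (e.g. RDO) on the host before re-adding it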


Thank you,

Regards,
Udaya Kiran
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users