[ovirt-users] deploy failed with Engine health status page is not yet reachable.

2015-01-12 Thread Will K
Hi list,
I've been trying to set up hosted-engine on 2 nodes a few times now.  The latest 
failure, which I can't figure out, is "Engine health status page is not yet 
reachable."
I'm running CentOS 6.6, fully patched, before setting up 3.5.  I run everything in 
screen, and I also need to set three variables to get around a proxy server: 
http_proxy, https_proxy, no_proxy.
`hosted-engine --deploy` went fine until the last steps.  After `engine setup 
is completed`, I picked (1) Continue setup.  It says `Waiting for VM to 
shutdown`.  Once the VM is up again, I waited for a couple of minutes, then 
picked (1) again, and it gives me
Engine health status page is not yet reachable.
At this point, I can ssh to the VM, and I can also access and log in to the web GUI.
Any suggestion?
Will


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] deploy failed with Engine health status page is not yet reachable.

2015-01-12 Thread Yedidyah Bar David
- Original Message -
 From: Will K yetanotherw...@yahoo.com
 To: users@ovirt.org
 Sent: Monday, January 12, 2015 10:54:25 AM
 Subject: [ovirt-users] deploy failed with Engine health status page is not 
 yet reachable.
 
 Hi list,
 
 I've been trying to set up hosted-engine on 2 nodes a few times now. The
 latest failure, which I can't figure out, is "Engine health status page is
 not yet reachable."
 
 I'm running CentOS 6.6, fully patched, before setting up 3.5. I run everything
 in screen, and I also need to set three variables to get around a proxy server:
 http_proxy, https_proxy, no_proxy.
 
 `hosted-engine --deploy` went fine until the last steps. After `engine setup
 is completed`, I picked (1) Continue setup. It says `Waiting for VM to
 shutdown`. Once the VM is up again, I waited for a couple of minutes, then
 picked (1) again, and it gives me
 
 Engine health status page is not yet reachable.
 
 At this point, I can ssh to the VM, and I can also access and log in to the web GUI.

So the health page _is_ probably reachable, but for some reason the host
can't access it.

 
 Any suggestion?

Check/post logs, check correctness of names/IP addresses/etc., iptables, try
to manually get the health page from the host and see if it works, e.g.

curl http://{fqdn}/ovirt-engine/services/health

(replace {fqdn} with the name you input as engine fqdn).
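
Since proxy variables are in play on that host (per the original message), it may 
also be worth ruling the proxy out for this check - this is only a guess on my 
side, e.g.

curl --noproxy '*' http://{fqdn}/ovirt-engine/services/health

If that works while the plain curl fails, the proxy (or a missing no_proxy entry 
for the engine fqdn) is the likely culprit.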

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't remove a storage domain related to a broken hardware

2015-01-12 Thread Olivier Navas

Hi ! 

I finally solved my problem by removing the information related to the broken storage 
domain in the engine database. 

I'm posting below the SQL queries I executed in postgres (with the engine stopped while 
executing them) in order to remove the storage domain. Maybe it can be useful for 
somebody else in the same situation. 


Connect to database 

# su - postgres 
# psql engine 


identify broken storage 

engine=# select id, storage_name from storage_domain_static; 
id | storage_name 
--+ 
34c95a44-db7f-4d0f-ba13-5f06a7feefe7 | my-broken-storage-domain 



identify ovf disks on this storage 

engine=# select storage_domain_id, ovf_disk_id from storage_domains_ovf_info 
where storage_domain_id = '34c95a44-db7f-4d0f-ba13-5f06a7feefe7'; 
storage_domain_id | ovf_disk_id 
--+-- 
34c95a44-db7f-4d0f-ba13-5f06a7feefe7 | 033f8fba-5145-47e8-b3b5-32a34a39ad11 
34c95a44-db7f-4d0f-ba13-5f06a7feefe7 | 2d9a6a40-1dd0-4180-b7c7-3829a443c825 


engine=# delete from storage_domain_dynamic where id = 
'34c95a44-db7f-4d0f-ba13-5f06a7feefe7'; 
engine=# delete from storage_domain_static where id = 
'34c95a44-db7f-4d0f-ba13-5f06a7feefe7'; 
engine=# delete from base_disks where disk_id = 
'033f8fba-5145-47e8-b3b5-32a34a39ad11'; 
engine=# delete from base_disks where disk_id = 
'2d9a6a40-1dd0-4180-b7c7-3829a443c825'; 
engine=# delete from storage_domains_ovf_info where storage_domain_id = 
'34c95a44-db7f-4d0f-ba13-5f06a7feefe7'; 
engine=# delete from storage_pool_iso_map where storage_id = 
'34c95a44-db7f-4d0f-ba13-5f06a7feefe7'; 


identify and delete LUNs and connections related to the storage (I did not take 
notes of the results of these, but it was easy to find the right rows). Also, 
I noticed that lun_storage_server_connection_map only contained information 
about iSCSI storage, not fibre channel. 

engine=# select * from luns; 

engine=# delete from luns where physical_volume_id = 
'IqOdm6-BWuT-9YBW-uvM1-q41E-a3Cz-zPnWHq'; 
engine=# delete from lun_storage_server_connection_map where 
storage_server_connection='1b9f3167-3236-431e-93c2-ab5ee18eba04'; 
engine=# delete from lun_storage_server_connection_map where 
storage_server_connection='ea5971f8-e1a0-42e3-826d-b95e9031ce53'; 
engine=# delete from storage_server_connections where 
id='1b9f3167-3236-431e-93c2-ab5ee18eba04'; 
engine=# delete from storage_server_connections where 
id='ea5971f8-e1a0-42e3-826d-b95e9031ce53'; 


delete remaining disk(s); I had 1 virtual disk on this storage 

engine=# delete from base_disks where 
disk_id='03d651eb-14a9-4dca-8c87-605f101a5e0c'; 
engine=# delete from permissions where 
object_id='03d651eb-14a9-4dca-8c87-605f101a5e0c'; 


Then I started the engine and all is fine now. 
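
A word of caution for anyone repeating this: back up the database first and 
consider wrapping the deletes in a transaction, so they can be rolled back before 
committing. A minimal sketch, reusing the storage domain id from above: 

# su - postgres 
$ pg_dump engine > /tmp/engine-before-cleanup.sql 
$ psql engine 
engine=# begin; 
engine=# delete from storage_pool_iso_map where storage_id = 
'34c95a44-db7f-4d0f-ba13-5f06a7feefe7'; 
engine=# -- ... the remaining deletes listed above ... 
engine=# commit;   -- or rollback; if anything looks wrong 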



- Original Message - 
 From: Amit Aviram aavi...@redhat.com 
 To: Olivier Navas olivier.na...@sdis33.fr 
 Sent: Tuesday, January 6, 2015 17:24:19 
 Subject: Re: [ovirt-users] Can't remove a storage domain related to a broken 
  hardware 
 
 
 
 Hi, can you please add the engine log? 
  
  - Original Message - 
  From: Olivier Navas olivier.na...@sdis33.fr 
  To: users@ovirt.org 
  Sent: Tuesday, January 6, 2015 11:28:42 AM 
  Subject: [ovirt-users] Can't remove a storage domain related to a broken 
  hardware 
 
 
 Hello Ovirt users ! 
 
 I experienced a hardware failure on an iSCSI storage array, making it unrecoverable, 
 and I would like to remove it from the storage domains. 
 
 There was 1 disk on this storage domain, and this disk isn't attached to any 
 VM anymore, but I still can't detach this storage domain from the cluster. 
 
 The storage domain is in inactive status, and if I try to detach it from the 
 data center, oVirt tries to activate it. Obviously it can't activate it 
 since the hardware is broken, and it fails after several minutes with the event 
 "Failed to detach Storage Domain my_storage_domain to Data Center Default. 
 (User: admin)". I can post my engine.log if useful. 
 
 I need a way to force removal of this storage domain. Any trick would be 
 greatly appreciated. 
 
 Perhaps oVirt is missing some sort of "force detach, I know what I'm doing" 
 button? 
 
 I am running an oVirt 3.5 cluster (engine on CentOS 6.6, 4 nodes with 
 CentOS 7) with FC and iSCSI storage domains. 
 
 Thanks for your help. 
 
 
 
 
 This email and all attached files it contains are confidential and intended 
 exclusively for the person to whom they are addressed. If you have received 
 this email in error, please return it to its sender and destroy it. Please 
 note that any electronic message may be altered in the course of its 
 transmission over the internet. Only official SDIS documents are binding on 
 it. The ideas or opinions expressed in this email are those of its author and 
 do not necessarily represent those of the SDIS de la Gironde. 
 
 ___ 
 

Re: [ovirt-users] Setting Base DN for LDAP authentication

2015-01-12 Thread jdeloro
Hello,

many thanks to Alon! We have a working setup with support for base DN. The 
special challenge in our setup is the constraint of specifying a base DN for 
every LDAP search, plus referrals inside the branches that must be processed.

If anyone has the same problem, our working configuration with a slightly newer 
version of ovirt-engine-extension-aaa-ldap is:

$ cat /etc/ovirt-engine/aaa/company-ldap.properties 
include = rfc2307-openldap.properties

vars.server = ldap.company.de

vars.user = cn=system,dc=company,dc=de
vars.password = password

pool.default.serverset.single.server = ${global:vars.server}
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}

sequence-init.init.100-my-basedn-init-vars = my-basedn-init-vars
sequence.my-basedn-init-vars.010.description = set baseDN
sequence.my-basedn-init-vars.010.type = var-set
sequence.my-basedn-init-vars.010.var-set.variable = simple_baseDN
sequence.my-basedn-init-vars.010.var-set.value = dc=company,dc=de

search.default.search-request.derefPolicy = ALWAYS

Best regards

Jannick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-12 Thread Einav Cohen
 2. Storage Network: if you intend to keep this role in the feature (I
 don't think it adds a lot of functionality, see article 1b), it might be
 better to call it Gluster Network - otherwise people using virt mode
 might think this network is gonna be used to communicate with other
 types of storage domains.

+1 on renaming "Storage Network" to "Gluster Network" (assuming this role is kept, 
as Lior mentioned). 


 - Original Message -
 From: Lior Vernia lver...@redhat.com
 Sent: Monday, January 12, 2015 7:51:05 AM
 
 Hi Sahina! :)
 
 Cool feature, and I think long-awaited by many users. I have a few comments:
 
 1. In the Add Bricks dialog, it seems like the IP Address field is a
 list box - I presume the items contained there are all IP addresses
 configured on the host's interfaces.
 
 1. a. May I suggest that this contain network names instead of IP
 addresses? Would be easier for users to think about things (they surely
 remember the meaning of network names, not necessarily of IP addresses).
 
 1. b. If I correctly understood the mock-up, then configuring a Storage
 Network role only affects the default entry chosen in the list box. Is
 it really worth the trouble of implementing this added role? It's quite
 different than display/migration roles, which are used to determine what
 IP address to use at a later time (i.e. not when configuring the host),
 when a VM is run/migrated in the cluster.
 
 1. c. A word of warning: sometimes a host interface's IP address is
 missing in the engine - this usually happens when they're configured for
 the first time with DHCP, and the setup networks command returns before
 an IP address is allocated (this can later be resolved by refreshing
 host capabilities, there's a button for that). So when displaying items
 in the list box, you should really check that an IP address exists for
 each network.
 
 2. Storage Network: if you intend to keep this role in the feature (I
 don't think it adds a lot of functionality, see article 1b), it might be
 better to call it Gluster Network - otherwise people using virt mode
 might think this network is gonna be used to communicate with other
 types of storage domains.
 
 Yours, Lior.
 
 On 12/01/15 14:00, Sahina Bose wrote:
  Hi all,
  
  Please review the feature page for this proposed solution and provide
  your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster
  
  thanks
  sahina
  
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Setting Base DN for LDAP authentication

2015-01-12 Thread Alon Bar-Lev


- Original Message -
 From: jdel...@web.de
 To: Alon Bar-Lev alo...@redhat.com
 Cc: users@ovirt.org
 Sent: Monday, January 12, 2015 4:16:17 PM
 Subject: Re: [ovirt-users] Setting Base DN for LDAP authentication
 
 Hello,
 
 many thanks to Alon! We have a working setup with support for base dn. The
 special challenge in our setup is the constraint of specifying a base dn for
 every ldap search and referrals inside the branches that must be processed.
 
 If anyone has the same problem, our working configuration with a slightly
 newer version of ovirt-engine-extension-aaa-ldap is:

Note that this environment has more than only the baseDN issue; it also requires 
dereferencing references at the server side. Most environments should not require 
this, nor have an invalid baseDN in their rootDSE naming context.

In this specific environment a query for baseDN X results in baseDN Y.

Thank you Jannick for the problem determination process.

Support for the baseDN X -> Y case will be formally released in 1.0.2.

 
 $ cat /etc/ovirt-engine/aaa/company-ldap.properties
 include = rfc2307-openldap.properties
 
 vars.server = ldap.company.de
 
 vars.user = cn=system,dc=company,dc=de
 vars.password = password
 
 pool.default.serverset.single.server = ${global:vars.server}
 pool.default.auth.simple.bindDN = ${global:vars.user}
 pool.default.auth.simple.password = ${global:vars.password}
 
 sequence-init.init.100-my-basedn-init-vars = my-basedn-init-vars
 sequence.my-basedn-init-vars.010.description = set baseDN
 sequence.my-basedn-init-vars.010.type = var-set
 sequence.my-basedn-init-vars.010.var-set.variable = simple_baseDN
 sequence.my-basedn-init-vars.010.var-set.value = dc=company,dc=de
 
 search.default.search-request.derefPolicy = ALWAYS
 
 Best regards
 
 Jannick
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create volume in OVirt with gluster

2015-01-12 Thread Kanagaraj Mayilsamy
I can see the failures in the glusterd log.

Can someone from the glusterfs dev team please help with this?

Thanks,
Kanagaraj

- Original Message -
 From: Punit Dambiwal hypu...@gmail.com
 To: Kanagaraj kmayi...@redhat.com
 Cc: Martin Pavlík mpav...@redhat.com, Vijay Bellur 
 vbel...@redhat.com, Kaushal M kshlms...@gmail.com,
 users@ovirt.org, gluster-us...@gluster.org
 Sent: Monday, January 12, 2015 3:36:43 PM
 Subject: Re: Failed to create volume in OVirt with gluster
 
 Hi Kanagaraj,
 
 Please find the logs from here :- http://ur1.ca/jeszc
 
 [image: Inline image 1]
 
 [image: Inline image 2]
 
 On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj kmayi...@redhat.com wrote:
 
   Looks like there are some failures in gluster.
  Can you send the log output from glusterd log file from the relevant hosts?
 
  Thanks,
  Kanagaraj
 
 
  On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
 
  Hi,
 
  Is there anyone from gluster who can help me here :-
 
   Engine logs :-
 
   2015-01-12 12:50:33,841 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:34,725 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:36,824 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:36,853 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:36,866 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:37,751 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:39,849 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:39,878 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:39,890 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:40,776 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:42,878 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:42,903 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:42,916 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:43,771 INFO
   [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
  (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand,
  log id: 303e70a4
  2015-01-12 12:50:43,780 ERROR
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
  (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID:
  896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID:
  -1, Message: Creation of Gluster Volume vol01 failed.
  2015-01-12 12:50:43,785 INFO
   

Re: [ovirt-users] Issues with vm start up

2015-01-12 Thread Koen Vanoppen
Glad I could help :-)

2015-01-08 16:48 GMT+01:00 VONDRA Alain avon...@unicef.fr:

  Hi,

 Thanks A LOT Koen, you saved me with your solution 

 Regards





 --
 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information
 Direction Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France - 3 rue Duguay Trouin - 75006 PARIS
 www.unicef.fr


 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf
 of Koen Vanoppen
 Sent: Monday, December 8, 2014 07:49
 To: users@ovirt.org
 Subject: Re: [ovirt-users] Issues with vm start up



 We also had an issue when starting our VMs. This was due to the NUMA option
 under the host section. I attached a procedure I created for work. Maybe it is
 also useful for your case...



 2014-12-02 5:58 GMT+01:00 Shanil S xielessha...@gmail.com:

 Hi Omer,

 We have opened this as a bug, you can view the new bug here
 https://bugzilla.redhat.com/show_bug.cgi?id=1169625.


   --
 Regards
 Shanil



 On Tue, Dec 2, 2014 at 8:42 AM, Shanil S xielessha...@gmail.com wrote:

 Hi Omer,

 Thanks for your reply. We will open it as a bug.


   --
 Regards
 Shanil



 On Mon, Dec 1, 2014 at 4:38 PM, Omer Frenkel ofren...@redhat.com wrote:



 - Original Message -
  From: Shanil S xielessha...@gmail.com
  To: Omer Frenkel ofren...@redhat.com, users@ovirt.org, Juan
 Hernandez jhern...@redhat.com
  Sent: Monday, December 1, 2014 12:39:12 PM
  Subject: Re: [ovirt-users] Issues with vm start up
 
  Hi Omer,
 
  Thanks for your reply.
 
  We are deploying those VMs through templates, and all the templates have
  first boot from hard disk and second from CD-ROM. We are using the CD-ROM in the
  cloud-init section to insert the cloud-init data. Why do we need this boot
  loader (VM boot order or in the cloud-init section) when in the template the
  first boot device is the hard disk?
 
 

 If the template's boot order has hard-disk it should be ok;
 not sure why it seems that for this vm the boot order is only cd.
 Does this happen to any vm created from this template?
 Can you check that the boot order is correct in the edit vm dialog?

 If the boot order is ok in the template,
 and it happens to any new vm created from this template,
 you should open a bug so we could investigate this.


 
  --
  Regards
  Shanil
 
  On Mon, Dec 1, 2014 at 3:37 PM, Omer Frenkel ofren...@redhat.com
 wrote:
 
  
  
   - Original Message -
From: Shanil S xielessha...@gmail.com
To: Shahar Havivi shah...@redhat.com
Cc: users@ovirt.org, Juan Hernandez jhern...@redhat.com, Omer
   Frenkel ofren...@redhat.com
Sent: Thursday, November 27, 2014 10:32:54 AM
Subject: Re: [ovirt-users] Issues with vm start up
   
Hi Omer,
   
I have attached the engine.log and vdsm.log. Please check it. The vm
 id
   is
- 7c7aa9dedb57248f5da291117164f0d7
   
  
    Sorry Shanil for the delay,
    it looks like in the vm settings you have chosen to boot only from cd?
    This is why the vm doesn't boot; the cloud-init disk is a settings disk, not
    a boot disk.
    Please go to update vm, change the boot order to use hard-disk, and boot
    the vm.
  
   let me know if it helps,
   Omer.
  
--
Regards
Shanil
   
On Thu, Nov 27, 2014 at 1:46 PM, Shahar Havivi shah...@redhat.com
   wrote:
   
  Try to remove content and run the vm,
  i.e. remove the runcmd: section or some of it - try to use the xml without
  CDATA; maybe
  you can pinpoint the problem that way...


 On 27.11.14 10:03, Shanil S wrote:
  Hi All,
 
 
   I am using oVirt version 3.5 and having some issues with vm startup
   with cloud-init using the API in run-once mode.
  
   Below are the steps I follow :-
  
   1. Create the VM by API from a precreated Template.
   2. Start the VM in run-once mode and push the cloud-init data from
    the API.
   3. The VM gets stuck, and from the console it displays the following :-
   Booting from DVD/CD.. ...
   Boot failed : could not read from CDROM (code 004)
 
  I am using the following xml for this operation :-
 
  action
  vm
   os
boot dev='cdrom'/
   /os
   initialization
cloud_init
 host
  addresstest/address
 /host
 network_configuration
  nics
   nic
interfacevirtIO/interface
nameeth0/name
boot_protocolstatic/boot_protocol
mac address=''/
network
 ip address='' netmask='' gateway=''/
/network
on_boottrue/on_bootvnic_profile id='' /
   /nic
   nic
interfacevirtIO/interface
   

[ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-12 Thread Sahina Bose

Hi all,

Please review the feature page for this proposed solution and provide 
your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster


thanks
sahina


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-12 Thread Lior Vernia
Hi Sahina! :)

Cool feature, and I think long-awaited by many users. I have a few comments:

1. In the Add Bricks dialog, it seems like the IP Address field is a
list box - I presume the items contained there are all IP addresses
configured on the host's interfaces.

1. a. May I suggest that this contain network names instead of IP
addresses? Would be easier for users to think about things (they surely
remember the meaning of network names, not necessarily of IP addresses).

1. b. If I correctly understood the mock-up, then configuring a Storage
Network role only affects the default entry chosen in the list box. Is
it really worth the trouble of implementing this added role? It's quite
different than display/migration roles, which are used to determine what
IP address to use at a later time (i.e. not when configuring the host),
when a VM is run/migrated in the cluster.

1. c. A word of warning: sometimes a host interface's IP address is
missing in the engine - this usually happens when they're configured for
the first time with DHCP, and the setup networks command returns before
an IP address is allocated (this can later be resolved by refreshing
host capabilities, there's a button for that). So when displaying items
in the list box, you should really check that an IP address exists for
each network.

2. Storage Network: if you intend to keep this role in the feature (I
don't think it adds a lot of functionality, see article 1b), it might be
better to call it Gluster Network - otherwise people using virt mode
might think this network is gonna be used to communicate with other
types of storage domains.

Yours, Lior.

On 12/01/15 14:00, Sahina Bose wrote:
 Hi all,
 
 Please review the feature page for this proposed solution and provide
 your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster
 
 thanks
 sahina
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] rename ovirt-engine

2015-01-12 Thread Koen Vanoppen
Oh yeah, we are using ovirt 3.5 :-)

2015-01-12 13:57 GMT+01:00 Koen Vanoppen vanoppen.k...@gmail.com:

 Dear all,

 A while ago we migrated our engine to a new host with a new name.
 But now we have come to the realisation that our certificate is still using the old
 name of the host.
 I tried to run the ovirt-engine-rename script, but after typing the new
 hostname I get this output:

 [ INFO  ] The following files will be updated:

   /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
   /etc/ovirt-engine/imageuploader.conf.d/10-engine-setup.conf
   /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
   /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf
   /etc/pki/ovirt-engine/cert.conf
   /etc/pki/ovirt-engine/cert.template
   /etc/pki/ovirt-engine/certs/apache.cer
   /etc/pki/ovirt-engine/keys/apache.key.nopass
   /etc/pki/ovirt-engine/keys/apache.p12

 [ INFO  ] Stage: Transaction setup
 [ INFO  ] Stopping engine service
 [ INFO  ] Stopping ovirt-fence-kdump-listener service
 [ INFO  ] Stopping websocket-proxy service
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Stage: Package installation
 [ INFO  ] Stage: Misc configuration
 [ ERROR ] Failed to execute stage 'Misc configuration': invalid literal
 for int() with base 10: 'None'
 [ INFO  ] Stage: Clean up
   Log file is located at
 /var/log/ovirt-engine/setup/ovirt-engine-rename-20150112135225-wfbear.log
 [ INFO  ] Generating answer file
 '/var/lib/ovirt-engine/setup/answers/20150112135247-rename.conf'
 [ INFO  ] Stage: Pre-termination
 [ INFO  ] Stage: Termination
 [ ERROR ] Execution of rename failed


 From the log file:

 2015-01-12 13:52:46 DEBUG otopi.plugins.otopi.services.rhel
 plugin.executeRaw:785 execute: ('/sbin/service', 'ovirt-websocket-proxy',
 'stop'), executable='None', cwd='None', env=None
 2015-01-12 13:52:47 DEBUG otopi.plugins.otopi.services.rhel
 plugin.executeRaw:803 execute-result: ('/sbin/service',
 'ovirt-websocket-proxy', 'stop'), rc=0
 2015-01-12 13:52:47 DEBUG otopi.plugins.otopi.services.rhel
 plugin.execute:861 execute-output: ('/sbin/service',
 'ovirt-websocket-proxy', 'stop') stdout:
 Stopping oVirt Engine websockets proxy: [  OK  ]

 2015-01-12 13:52:47 DEBUG otopi.plugins.otopi.services.rhel
 plugin.execute:866 execute-output: ('/sbin/service',
 'ovirt-websocket-proxy', 'stop') stderr:


 2015-01-12 13:52:47 INFO otopi.context context.runSequence:417 Stage: Misc
 configuration
 2015-01-12 13:52:47 DEBUG otopi.context context.runSequence:421 STAGE
 early_misc
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 early_misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._early_misc
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-12 13:52:47 INFO otopi.context context.runSequence:417 Stage:
 Package installation
 2015-01-12 13:52:47 DEBUG otopi.context context.runSequence:421 STAGE
 packages
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 packages METHOD otopi.plugins.otopi.network.iptables.Plugin._packages
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 packages METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._packages
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-12 13:52:47 INFO otopi.context context.runSequence:417 Stage: Misc
 configuration
 2015-01-12 13:52:47 DEBUG otopi.context context.runSequence:421 STAGE misc
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 misc METHOD otopi.plugins.otopi.system.command.Plugin._misc
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._misc
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 misc METHOD otopi.plugins.otopi.network.iptables.Plugin._store_iptables
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 misc METHOD otopi.plugins.otopi.network.ssh.Plugin._append_key
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 misc METHOD otopi.plugins.otopi.system.clock.Plugin._set_clock
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
 misc METHOD
 otopi.plugins.ovirt_engine_common.ovirt_engine.db.pgpass.Plugin._misc
 2015-01-12 13:52:47 DEBUG otopi.context context.dumpEnvironment:490
 ENVIRONMENT DUMP - BEGIN
 2015-01-12 13:52:47 DEBUG otopi.context context.dumpEnvironment:500 ENV
 

Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-12 Thread Lior Vernia


On 12/01/15 14:44, Oved Ourfali wrote:
 Hi Sahina,
 
 Some comments:
 
 1. As far as I understand, you might not have an IP available immediately 
 after setupNetworks runs (getCapabilities should run, but it isn't run 
 automatically, afair).
 2. Perhaps you should pass not the IP but the name of the network? IPs might 
 change.

Actually, IP address can indeed change - which would be very bad for
gluster functioning! I think moving networks or changing their IP
addresses via Setup Networks should be blocked if they're used by
gluster bricks.

 3. Adding to 2, perhaps using DNS names is a more valid approach?
 4. You're using the terminology role, but it might be confusing, as we have 
 roles with regards to permissions. Consider changing storage usage and 
 not storage role in the feature page.

Well, we've already been using this terminology for a while now
concerning display/migration roles for networks... That's probably the
terminology to use.

 
 Thanks,
 Oved
 
 - Original Message -
 From: Sahina Bose sab...@redhat.com
 To: de...@ovirt.org, users users@ovirt.org
 Sent: Monday, January 12, 2015 2:00:16 PM
 Subject: [ovirt-users] [Feature review] Select network to be used for
 glusterfs

 Hi all,

 Please review the feature page for this proposed solution and provide
 your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster

 thanks
 sahina


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-12 Thread Oved Ourfali
Hi Sahina,

Some comments:

1. As far as I understand, you might not have an IP available immediately after 
setupNetworks runs (getCapabilities should run, but it isn't run automatically, 
afair).
2. Perhaps you should pass not the IP but the name of the network? IPs might 
change.
3. Adding to 2, perhaps using DNS names is a more valid approach?
4. You're using the terminology role, but it might be confusing, as we have 
roles with regards to permissions. Consider changing storage usage and not 
storage role in the feature page.

Thanks,
Oved

- Original Message -
 From: Sahina Bose sab...@redhat.com
 To: de...@ovirt.org, users users@ovirt.org
 Sent: Monday, January 12, 2015 2:00:16 PM
 Subject: [ovirt-users] [Feature review] Select network to be used for 
 glusterfs
 
 Hi all,
 
 Please review the feature page for this proposed solution and provide
 your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster
 
 thanks
 sahina
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Setting Base DN for LDAP authentication

2015-01-12 Thread jdeloro
Hello Ondra,

  I'm trying to configure LDAP authentication with oVirt 3.5 and 
  ovirt-engine-extension-aaa-ldap. I chose the simple bind transport example. 
  But the given examples are missing the explicit specification of a base dn. 
  Could you please advise me how this can be done?
 

[...]

 
  I could not use namingContexts from RootDSE because this results in base dn 
  dc=de instead of dc=company,dc=de.
 
 Can you try the user 'cn=Manager'? I think it's an incorrectly configured ACL.

Nice try, but this is not a proper solution for the problem. Raising privileges 
should always be avoided.

Alon is currently troubleshooting this issue and he is close to finding a good 
solution.

Best regards

Jannick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] rename ovirt-engine

2015-01-12 Thread Koen Vanoppen
Dear all,

A while ago we migrated our engine to a new host with a new name.
But now we have come to the realisation that our certificate is still using the old
name of the host.
I tried to run the ovirt-engine-rename script, but after typing the new
hostname I get this output:

[ INFO  ] The following files will be updated:

  /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
  /etc/ovirt-engine/imageuploader.conf.d/10-engine-setup.conf
  /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
  /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf
  /etc/pki/ovirt-engine/cert.conf
  /etc/pki/ovirt-engine/cert.template
  /etc/pki/ovirt-engine/certs/apache.cer
  /etc/pki/ovirt-engine/keys/apache.key.nopass
  /etc/pki/ovirt-engine/keys/apache.p12

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping ovirt-fence-kdump-listener service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': invalid literal for
int() with base 10: 'None'
[ INFO  ] Stage: Clean up
  Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-rename-20150112135225-wfbear.log
[ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20150112135247-rename.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of rename failed


From the log file:

2015-01-12 13:52:46 DEBUG otopi.plugins.otopi.services.rhel
plugin.executeRaw:785 execute: ('/sbin/service', 'ovirt-websocket-proxy',
'stop'), executable='None', cwd='None', env=None
2015-01-12 13:52:47 DEBUG otopi.plugins.otopi.services.rhel
plugin.executeRaw:803 execute-result: ('/sbin/service',
'ovirt-websocket-proxy', 'stop'), rc=0
2015-01-12 13:52:47 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:861 execute-output: ('/sbin/service',
'ovirt-websocket-proxy', 'stop') stdout:
Stopping oVirt Engine websockets proxy: [  OK  ]

2015-01-12 13:52:47 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:866 execute-output: ('/sbin/service',
'ovirt-websocket-proxy', 'stop') stderr:


2015-01-12 13:52:47 INFO otopi.context context.runSequence:417 Stage: Misc
configuration
2015-01-12 13:52:47 DEBUG otopi.context context.runSequence:421 STAGE
early_misc
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
early_misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._early_misc
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
condition False
2015-01-12 13:52:47 INFO otopi.context context.runSequence:417 Stage:
Package installation
2015-01-12 13:52:47 DEBUG otopi.context context.runSequence:421 STAGE
packages
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
packages METHOD otopi.plugins.otopi.network.iptables.Plugin._packages
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
condition False
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
packages METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._packages
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
condition False
2015-01-12 13:52:47 INFO otopi.context context.runSequence:417 Stage: Misc
configuration
2015-01-12 13:52:47 DEBUG otopi.context context.runSequence:421 STAGE misc
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
misc METHOD otopi.plugins.otopi.system.command.Plugin._misc
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._misc
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
condition False
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
misc METHOD otopi.plugins.otopi.network.iptables.Plugin._store_iptables
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
condition False
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
misc METHOD otopi.plugins.otopi.network.ssh.Plugin._append_key
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
condition False
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
misc METHOD otopi.plugins.otopi.system.clock.Plugin._set_clock
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:144
condition False
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage
misc METHOD
otopi.plugins.ovirt_engine_common.ovirt_engine.db.pgpass.Plugin._misc
2015-01-12 13:52:47 DEBUG otopi.context context.dumpEnvironment:490
ENVIRONMENT DUMP - BEGIN
2015-01-12 13:52:47 DEBUG otopi.context context.dumpEnvironment:500 ENV
OVESETUP_DB/pgPassFile=str:'/tmp/tmpHWoEfz'
2015-01-12 13:52:47 DEBUG otopi.context context.dumpEnvironment:504
ENVIRONMENT DUMP - END
2015-01-12 13:52:47 DEBUG otopi.context context._executeMethod:138 Stage

Re: [ovirt-users] Alerts

2015-01-12 Thread Koen Vanoppen
Thanks!!! Just what I needed!

2015-01-07 14:19 GMT+01:00 Juan Hernández jhern...@redhat.com:

 On 01/07/2015 02:05 PM, Koen Vanoppen wrote:
  Thanks. But, is there no other way, besides screwing around in the
 database?
 

 You can use the Python SDK:

  #!/usr/bin/python

  import ovirtsdk.api
  import ovirtsdk.xml

  api = ovirtsdk.api.API(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="**",
    ca_file="/etc/pki/ovirt-engine/ca.pem"
  )

  alerts = api.events.list(query="severity=alert")
  for alert in alerts:
      alert.delete()

  api.disconnect()


  2015-01-07 13:07 GMT+01:00 Eli Mesika emes...@redhat.com
  mailto:emes...@redhat.com:
 
 
 
  - Original Message -
   From: Koen Vanoppen vanoppen.k...@gmail.com
  mailto:vanoppen.k...@gmail.com
   To: users@ovirt.org mailto:users@ovirt.org
   Sent: Wednesday, January 7, 2015 7:42:25 AM
   Subject: [ovirt-users] Alerts
  
   Hi All,
  
   We recently had a major crash on our ovirt environment due a
  strange bug (has
   been reported in the meanwhile).
  
   But now we are left we a bunch of aerts (+100) that are still
 standing
   there... Is there a way I can flush them manually from command
  line or so?
   Because the right click+clear all, doesn't seem to work that
  good... :-).
 
   If you want to remove ALL alerts, run from the command line:
  
   psql -U engine -c "delete from audit_log where severity = 10;" engine
  
   NOTE:
   Keep in mind that before any manual operation on your database you
   should back it up first.
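  
   For example (a minimal sketch - adjust the user/database names to your setup): 
  
   pg_dump -U engine engine > /tmp/engine-before-cleanup.sql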
 
  
   Kind regards,
  
   Koen
  
   ___
   Users mailing list
   Users@ovirt.org mailto:Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
 
 
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 


 --
 Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
 3ºD, 28016 Madrid, Spain
 Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using gluster on other hosts?

2015-01-12 Thread Kaushal M
Hey Will,

It seems to me you are trying to manage GlusterFS from oVirt, and trying to get 
your multi-network setup to work. As Sahina mentioned already, this is not 
currently possible, as oVirt doesn't have the required support.

If you want to make this work right now, I suggest you manage GlusterFS 
manually. You could do the following,

- Install GlusterFS on both the hosts and set up a GlusterFS trusted storage 
pool using the 'gluster peer probe' commands. Run 'gluster peer probe gfs2' 
from node1 (and the reverse just for safety).
- Create a GlusterFS volume, 'gluster volume create VOLUMENAME 
gfs1:BRICKPATH gfs2:BRICKPATH', and start it, 'gluster volume start 
VOLUMENAME'.
After this you'll have GlusterFS set up on the particular network and you'll 
have a volume ready to be added as an oVirt storage domain (a command sketch of 
these GlusterFS steps follows after the remaining steps below).

- Now setup oVirt on the nodes with the node* network.
- Add the gfs* network to oVirt. I'm not sure if this would be required, but 
you can try it anyway.
- Add the created GlusterFS volume as a storage domain using a gfs* address.

You should now be ready to begin using the new storage domain.
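
For reference, a minimal sketch of the GlusterFS commands described above 
(hostnames, volume name and brick paths are placeholders, and 'replica 2' is my 
assumption based on the two-host setup - drop or adjust it as needed):

    # on node1, over the gfs* network
    gluster peer probe gfs2
    # on node2, the reverse, just for safety
    gluster peer probe gfs1

    # create and start the volume on the gfs* addresses
    gluster volume create VOLUMENAME replica 2 gfs1:/bricks/b1 gfs2:/bricks/b1
    gluster volume start VOLUMENAME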

If you would want to expand the volume later, you will need to do it manually 
with an explicit 'gluster volume add-brick' command.

You could possible add the GlusterFS cluster to the oVirt interface, just so 
you can get stats and monitoring. But even then you shouldn't use the oVirt 
interface to do any management tasks.

Multi-network support for GlusterFS within oVirt is an upcoming feature, and 
Sahina can give you more details on when to expect it to be available.

Thanks,
Kaushal


- Original Message -
 From: Sahina Bose sab...@redhat.com
 To: Will K yetanotherw...@yahoo.com, users@ovirt.org, Kaushal M 
 kaus...@redhat.com
 Sent: Friday, 9 January, 2015 11:10:48 AM
 Subject: Re: [ovirt-users] Using gluster on other hosts?
 
 
 On 01/08/2015 09:41 PM, Will K wrote:
  That's what I did, but didn't work for me.
 
  1. use the 192.168.x interface to setup gluster. I used hostname in
  /etc/hosts.
  2. setup oVirt using the switched network hostnames, let's say 10.10.10.x
  3. oVirt and all that comes up fine.
  4. When try to create a storage domain, it only shows the 10.10.10.x
  hostnames available.
 
 
  Tried to add a brick and I would get something like
  Host gfs2 is not in 'Peer in Cluster' state  (while node2 is the
  hostname and gfs2 is the 192.168 name)
 
 
 Which version of glusterfs do you have?
 
 Kaushal, will this work in glusterfs3.6 and above?
 
 
 
  Ran command `gluster probe peer gfs2` or `gluster probe peer
  192.168.x.x` didn't work
  peer probe: failed: Probe returned with unknown errno 107
 
  Ran probe again with the switched network hostname or IP worked fine.
  May be it is not possible with current GlusterFS version?
  http://www.gluster.org/community/documentation/index.php/Features/SplitNetwork
 
 
  Will
 
 
  On Thursday, January 8, 2015 3:43 AM, Sahina Bose sab...@redhat.com
  wrote:
 
 
 
  On 01/08/2015 12:07 AM, Will K wrote:
  Hi
 
  I would like to see if anyone has good suggestion.
 
  I have two physical hosts with 1GB connections to switched networks.
  The hosts also have 10GB interface connected directly using Twinax
  cable like copper crossover cable.  The idea was to use the 10GB as a
  private network for GlusterFS till the day we want to grow out of
  this 2 node setup.
 
  GlusterFS was setup with the 10GB ports using non-routable IPs and
  hostnames in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2
  192.168.1.2.  I'm following example from
  community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
  http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
  , Currently I'm only using Gluster volume on node1, but `gluster
  probe peer` test worked fine with node2 through the 10GB connection.
 
  oVirt engine was setup on physical host1 with hosted engine.  Now,
  when I try to create new Gluster storage domain, I can only see the
  host node1 available.
 
  Is there anyway I can setup oVirt on node1 and node2, while using
  gfs1 and gfs2 for GlusterFS? or some way to take advantage of the
  10GB connection?
 
  If I understand right, you have 2 interfaces on each of your hosts,
  and you want oVirt to communicate via 1 interface and glusterfs to use
  other?
 
   While adding the hosts to oVirt you could use ip1 and then, while
   creating the volume, add the brick using the other ip address.
  For instance, gluster volume create volname 192.168.1.2:/bricks/b1
 
  Currently, there's no way to specify the IP address to use while
  adding a brick from oVirt UI (we're working on this for 3.6), but you
  could do this from the gluster CLI commands. This would then be
  detected in the oVirt UI.
 
 
 
 
  Thanks
  W
 
 
 
  ___
  Users mailing list
  Users@ovirt.org  mailto:Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 
 

Re: [ovirt-users] Using gluster on other hosts?

2015-01-12 Thread Kaushal M
That should work, provided the names being used resolve correctly and to the 
same values everywhere.
But I'd suggest that a manual peer probe is done using the alternative names, 
before creating the volume. This way Gluster explicitly knows all the 
names/addresses, and should not cause any troubles.

But as I said before, after doing the manual operations, I'd suggest avoiding 
doing any management actions from the oVirt interface; as it still doesn't know 
how to handle multiple networks with Gluster.

~kaushal

- Original Message -
 From: Sahina Bose sab...@redhat.com
 To: Kaushal M kaus...@redhat.com
 Cc: Will K yetanotherw...@yahoo.com, users@ovirt.org
 Sent: Friday, 9 January, 2015 1:47:17 PM
 Subject: Re: [ovirt-users] Using gluster on other hosts?
 
 
 On 01/09/2015 01:13 PM, Kaushal M wrote:
  Hey Will,
 
  It seems to me you are trying manage GlusterFS from oVirt, and trying to
  get your multi-network setup to work. As Sahina mentioned already, this is
  not currently possible as oVirt doesn't have the required support.
 
  If you want to make this work right now, I suggest you manage GlusterFS
  manually. You could do the following,
 
  - Install GlusterFS on both the hosts and setup a GlusterFS trusted storage
  pool using the 'gluster peer probe' commands. Run 'gluster peer probe
  gfs2' from node1 (and the reverse just for safety)
  - Create a GlusterFS volume, 'gluster volume create VOLUMENAME
  gfs1:BRICKPATH gfs2:BRICKPATH; and start it, 'gluster volume start
  VOLUMENAME'.
  After this you'll have GlusterFS setup on the particular network and you'll
  have volume ready to be added as a oVirt storage domain.
 
 
  To enable oVirt to use the node1 interface, is it possible to peer
  probe using the node1 and node2 interfaces in the steps above - i.e. 'gluster peer
  probe node2' (this is essentially what happens when a host is added with
  host address node1 or node2) -
  
  and then create a GlusterFS volume from the CLI using the command you
  mentioned above?
 
 
  - Now setup oVirt on the nodes with the node* network.
  - Add the gfs* network to oVirt. I'm not sure if this would be required,
  but you can try it anyway.
  - Add the created GlusterFS volume as a storage domain using a gfs*
  address.
 
  You should now be ready to begin using the new storage domain.
 
  If you would want to expand the volume later, you will need to do it
  manually with an explicit 'gluster volume add-brick' command.
 
  You could possible add the GlusterFS cluster to the oVirt interface, just
  so you can get stats and monitoring. But even then you shouldn't use the
  oVirt interface to do any management tasks.
 
  Multi-network support for GlusterFS within oVirt is an upcoming feature,
  and Sahina can give you more details on when to expect it to be available.
 
  Thanks,
  Kaushal
 
 
  - Original Message -
  From: Sahina Bose sab...@redhat.com
  To: Will K yetanotherw...@yahoo.com, users@ovirt.org, Kaushal M
  kaus...@redhat.com
  Sent: Friday, 9 January, 2015 11:10:48 AM
  Subject: Re: [ovirt-users] Using gluster on other hosts?
 
 
  On 01/08/2015 09:41 PM, Will K wrote:
  That's what I did, but didn't work for me.
 
  1. use the 192.168.x interface to setup gluster. I used hostname in
  /etc/hosts.
  2. setup oVirt using the switched network hostnames, let's say 10.10.10.x
  3. oVirt and all that comes up fine.
  4. When try to create a storage domain, it only shows the 10.10.10.x
  hostnames available.
 
 
  Tried to add a brick and I would get something like
   Host gfs2 is not in 'Peer in Cluster' state  (while node2 is the
  hostname and gfs2 is the 192.168 name)
 
  Which version of glusterfs do you have?
 
  Kaushal, will this work in glusterfs3.6 and above?
 
 
  Ran command `gluster probe peer gfs2` or `gluster probe peer
  192.168.x.x` didn't work
   peer probe: failed: Probe returned with unknown errno 107
 
  Ran probe again with the switched network hostname or IP worked fine.
  May be it is not possible with current GlusterFS version?
  http://www.gluster.org/community/documentation/index.php/Features/SplitNetwork
 
 
  Will
 
 
  On Thursday, January 8, 2015 3:43 AM, Sahina Bose sab...@redhat.com
  wrote:
 
 
 
  On 01/08/2015 12:07 AM, Will K wrote:
  Hi
 
  I would like to see if anyone has good suggestion.
 
  I have two physical hosts with 1GB connections to switched networks.
  The hosts also have 10GB interface connected directly using Twinax
  cable like copper crossover cable.  The idea was to use the 10GB as a
  private network for GlusterFS till the day we want to grow out of
  this 2 node setup.
 
  GlusterFS was setup with the 10GB ports using non-routable IPs and
  hostnames in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2
  192.168.1.2.  I'm following example from
  community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
  http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
  , Currently I'm only using Gluster volume on node1, but 

[ovirt-users] about Ovirt - KVM - Ubuntu

2015-01-12 Thread Carlos Laurent

Hi,

I have questions about oVirt. My English is not good; I do not know if 
there is a mailing list in Spanish.


I need to install oVirt on Ubuntu Server. Is it possible to do? I read it is 
experimental. I tried to install it but there are problems with Python libraries.


Is it possible for the oVirt server to read existing guest machines on a KVM Ubuntu 
server? Or do I need to install oVirt and migrate the machines one by one?


Thank you very much for the answers.

Carlos Laurent
Costa Rica,NUMAR




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] I have a question about the spice client.

2015-01-12 Thread jaemin baek
hi,
I'm Korean.

I have a question about the spice client.

My spice client connects to a Windows 7 VDI.

input key --- Korean keyboard (103/106 key) + Hangul key --- Windows 7 IME

but...

input key --- Korean keyboard (103/106 key) + Hangul key --- the Hangul key gets
exchanged for the --- Alt key

Why??

The Linux spice client handles the Hangul key OK,

but the Windows spice client maps the Hangul key to the Alt key...


Hangul key scancode = 0x38
Alt key scancode = 0x38

WHY??

Please help me. Thank you.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] deploy failed with Engine health status page is not yet reachable.

2015-01-12 Thread Will K
thanks, the URL helped troubleshoot it. - W 

 On Monday, January 12, 2015 4:18 AM, Yedidyah Bar David d...@redhat.com 
wrote:
   

 - Original Message -
 From: Will K yetanotherw...@yahoo.com
 To: users@ovirt.org
 Sent: Monday, January 12, 2015 10:54:25 AM
 Subject: [ovirt-users] deploy failed with Engine health status page is not 
 yet reachable.
 
 Hi list,
 
 I've been trying to set up hosted-engine on 2 nodes a few times now. The
 latest failure, which I can't figure out, is "Engine health status page is
 not yet reachable."
 
 I'm running CentOS 6.6, fully patched, before setting up 3.5. I run everything
 in screen, and I also need to set three variables to get around a proxy server:
 http_proxy, https_proxy, no_proxy.
 
 `hosted-engine --deploy` went fine until the last steps. After `engine setup
 is completed`, I picked (1) Continue setup. It says `Waiting for VM to
 shutdown`. Once the VM is up again, I waited for a couple of minutes, then
 picked (1) again, and it gives me
 
 Engine health status page is not yet reachable.
 
 At this point, I can ssh to the VM, and I can also access and log in to the web GUI.

So the health page _is_ probably reachable, but for some reason the host
can't access it.

 
 Any suggestion?

Check/post logs, check correctness of names/IP addresses/etc., iptables, try
to manually get the health page from the host and see if it works, e.g.

curl http://{fqdn}/ovirt-engine/services/health

(replace {fqdn} with the name you input as engine fqdn).

Best,
-- 
Didi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS Centos 7 unable to mount NFS on gluster members

2015-01-12 Thread Donny Davis
Here is a quick rundown of the system, and the problem.

 

All hosts are on CentOS 7, fully up to date.

GlusterFS is running on 6 servers, 3x2 distribute/replicate. 

CTDB is running on all hosts

I am unable to mount the exported volume via NFS on any of the gluster
server members.

 

I am able to mount, read, write, umount from any server that is not a
gluster member. 

 

Topology - all are hostnames that are resolvable 

Gluster Members

Node1

Node2

Node3

Node4

Node5

Node6

 

CTDB Virtual IP/Hostname

SharedNFS

 

Test Machine

Test1

 

Gluster Volumes

Engine

Data

 

I am trying to bring up hosted-engine using nfs using the gluster members

I run hosted-engine --deploy

NFSv3

 

Host:/path sharednfs:/engine

Error while mounting specified storage path: mount.nfs: an incorrect mount
option was specified

 

Ok, well, let's try that without using the hosted-engine script.

 

mount -v -t nfs -o vers=3 sharednfs:/engine /tmp

mount.nfs: timeout set for Mon Jan 12 22:00:08 2015

mount.nfs: trying text-based options 'vers=3,lock=Flase,addr=192.168.0.240

mount.nfs: prog 13, trying vers=3, prot=6

mount.nfs: trying 192.168.0.240 prog 13 vers 3 prot TCP port 2049

mount.nfs: prog 15, trying vers=3, prot=17

mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 15, trying vers=3, prot=6

mount.nfs: trying 192.168.0.240 prog 15 vers 3 prot TCP port 38465

mount.nfs: mount(2): Invalid argument

mount.nfs: an incorrect mount option was specified

 

 

[root@node4 ~]# systemctl status rpcbind

rpcbind.service - RPC bind service

   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)

   Active: active (running) since Mon 2015-01-12 20:01:13 EST; 1h 57min ago

  Process: 1349 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited,
status=0/SUCCESS)

Main PID: 1353 (rpcbind)

   CGroup: /system.slice/rpcbind.service

   └─1353 /sbin/rpcbind -w

 

Jan 12 20:01:13 node4 systemd[1]: Starting RPC bind service...

Jan 12 20:01:13 node4 systemd[1]: Started RPC bind service.

Jan 12 21:19:22 node4 systemd[1]: Started RPC bind service.

 

 

Ummm... this makes no sense...

[root@test1 ~]# mount -v -o vers=3 -t nfs 192.168.0.240:/engine /tmp

mount.nfs: timeout set for Mon Jan 12 20:02:58 2015

mount.nfs: trying text-based options 'vers=3,addr=192.168.0.240

mount.nfs: prog 13, trying vers=3, prot=6

mount.nfs: trying 192.168.0.240 prog 13 vers 3 prot TCP port 2049

mount.nfs: prog 15, trying vers=3, prot=17

mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 15, trying vers=3, prot=6

mount.nfs: trying 192.168.0.240 prog 15 vers 3 prot TCP port 38465

192.168.0.240:/engine on /tmp type nfs (rw,vers=3)

 

 

 

The test machine mounts the nfs share with no problems. I have confirmed
this does not work on a single machine that is part of the gluster.  And any
other machine is able to mount the exact same share, with the exact same
parameters… on the exact same OS….

 

I am at a loss

 

Donny D

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create volume in OVirt with gluster

2015-01-12 Thread Punit Dambiwal
Hi,

Please find the more details on this can anybody from gluster will help
me here :-


Gluster CLI Logs :- /var/log/glusterfs/cli.log

[2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs: got
RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify]
0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler]
0-transport: disconnecting now
[2015-01-13 02:06:23.072055] T [cli-quotad-client.c:100:cli_quotad_notify]
0-glusterfs: got RPC_CLNT_DISCONNECT
[2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record]
0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:
[2015-01-13 02:06:23.072176] T
[rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen
128, payload: 64, rpc hdr: 64
[2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] (--
/usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293]
(-- /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--
/usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--
/usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ) 0-glusterfs: connect
() called on transport already connected
[2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit]
0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2,
Proc: 27) to rpc-transport (glusterfs)
[2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping]
0-glusterfs: ping timeout is 0, returning
[2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init]
0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI,
ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)
[2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk]
0-cli: Received response to status cmd
[2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli:
Returning 0
[2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume]
0-cli: Returning: 0
[2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output]
0-cli: Returning 0
[2015-01-13 02:06:23.076244] D [cli-xml-output.c:131:cli_xml_output_common]
0-cli: Returning 0
[2015-01-13 02:06:23.076256] D
[cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0
[2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output]
0-cli: Returning 0
[2015-01-13 02:06:23.076459] D
[cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0
[2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0

Command log :- /var/log/glusterfs/.cmd_log_history

Staging failed on ----. Please check log
file for details.
Staging failed on ----. Please check log
file for details.
[2015-01-13 01:10:35.836676]  : volume status all tasks : FAILED : Staging
failed on ----. Please check log file for
details.
Staging failed on ----. Please check log
file for details.
Staging failed on ----. Please check log
file for details.
[2015-01-13 01:16:25.956514]  : volume status all tasks : FAILED : Staging
failed on ----. Please check log file for
details.
Staging failed on ----. Please check log
file for details.
Staging failed on ----. Please check log
file for details.
[2015-01-13 01:17:36.977833]  : volume status all tasks : FAILED : Staging
failed on ----. Please check log file for
details.
Staging failed on ----. Please check log
file for details.
Staging failed on ----. Please check log
file for details.
[2015-01-13 01:21:07.048053]  : volume status all tasks : FAILED : Staging
failed on ----. Please check log file for
details.
Staging failed on ----. Please check log
file for details.
Staging failed on ----. Please check log
file for details.
[2015-01-13 01:26:57.168661]  : volume status all tasks : FAILED : Staging
failed on ----. Please check log file for
details.
Staging failed on ----. Please check log
file for details.
Staging failed on ----. Please check log
file for details.
[2015-01-13 01:28:07.194428]  : volume status all tasks : FAILED : Staging
failed on ----. Please check log file for
details.
Staging failed on ----. Please check log
file for details.
Staging failed on ----. Please check log
file for details.
[2015-01-13 01:30:27.256667]  : volume status vol01 : FAILED : Locking
failed on cpu02.zne01.hkg1.stack.com. Please 

Re: [ovirt-users] about Ovirt - KVM - Ubuntu

2015-01-12 Thread Robert Story
On Thu, 08 Jan 2015 17:12:00 -0600 Carlos wrote:
CL I have questions about ovirt, my english is not good,I do not know if 
CL there are some mail list in spanish.

Hola Carlos,

CL Is posible Ovirt server read guest machines in KVM ubuntu server? or I 
CL need install Ovirt and migrade one to one machines.

You will need to migrate existing KVM guests to oVirt. There is a tool,
virt-v2v, that can help with the conversion. Unfortunately it's a long,
slow multi-step process. Hopefully google can help find some documentation
or a blog post in Spanish for you.
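
A minimal example of what that conversion looks like for a libvirt/KVM guest,
assuming an NFS export storage domain at nfs.example.com:/export and a guest
named myguest (both placeholders - check the virt-v2v man page for the options
your version supports):

  virt-v2v -i libvirt -ic qemu:///system -o rhev \
      -os nfs.example.com:/export myguest

Once the conversion finishes, the guest can be imported into oVirt from that
export storage domain via the webadmin.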


Robert

-- 
Senior Software Engineer @ Parsons


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] about Ovirt - KVM - Ubuntu

2015-01-12 Thread Yedidyah Bar David
- Original Message -
 From: Carlos Laurent claur...@numar.net
 To: users@ovirt.org
 Sent: Friday, January 9, 2015 1:12:00 AM
 Subject: [ovirt-users] about Ovirt - KVM - Ubuntu
 
 Hi,
 
 I have questions about ovirt, my english is not good,I do not know if
 there are some mail list in spanish.

I don't know of any, but searching for 'ovirt spanish' does find some
interested people (and companies). Perhaps you should start one :-)

 
 I need install ovirt in ubuntu server, it is posible to do? I read is in
 experimental. I try install but there are problems with Python libraries.

Debian/Ubuntu support is planned for 3.6. As you already found, there is
already some work done, but some dependencies added since then are missing,
as well as probably other things.

 
 Is posible Ovirt server read guest machines in KVM ubuntu server? or I
 need install Ovirt and migrade one to one machines.
 
 Thank you very much for the answers.
 
 Carlos Laurent
 Costa Rica,NUMAR
 
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

-- 
Didi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New Vdsm Build for oVirt 3.5

2015-01-12 Thread Vinzenz Feenstra

On 01/09/2015 01:52 PM, Sven Kieske wrote:

Hi,

would it be possible to also generate some kind of changelog?

As there is no changelog yet (that I know of), I tried
to find some git tag
or a commit which bumped the version to 4.16.10
but I didn't find one in the 3.5 ovirt branch at:
http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=shortlog;h=refs%2Fheads%2Fovirt-3.5

am I looking in the wrong direction?

http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=shortlog;h=refs/tags/v4.16.10
v4.16.10 is tagged :-)
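
So a rough changelog can be pulled straight from git between the two release
tags, e.g. in a local clone of vdsm (assuming the previous tag was v4.16.9):

  git fetch --tags
  git log --oneline --no-merges v4.16.9..v4.16.10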


Any help would be appreciated

Thanks

Sven
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Regards,

Vinzenz Feenstra | Senior Software Engineer
Red Hat Engineering Virtualization R&D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine-lockspace broken symlinks

2015-01-12 Thread Yedidyah Bar David
- Original Message -
 From: Will K yetanotherw...@yahoo.com
 To: users@ovirt.org
 Sent: Tuesday, January 13, 2015 7:05:14 AM
 Subject: [ovirt-users] hosted-engine-lockspace broken symlinks
 
 Hi,
 
 still working on this hosted-engine setup. When deploy hosted-engine on the
 2nd node, hosted-engine.lockspace and hosted-engine.metadata cannot be
 found.
 
 1) Node1 is up with hosted engine installed on a GlusterFS volume. When try
 to deploy hosted engine on node2, I specified storage path to be nfs
 available on ovirtmount.xyz.com. ovirtmount is just an entry in the host
 file pointing to node1 as in the Up and Running with oVirt 3.5.
 
 The deploy process mounted the nfs export under
 /rhev/data-center/mnt/ovirtmount.xyz.com:_engine/9d2142eb-f414-46f1-895a-95099aeb7f69/ha_agent
 
 I found symlinks pointing to /rhev/data-center/mnt/IP:_engine/ instead of
 ovirtmount
 hosted-engine-lockspace
 hosted-engine-metadata
 
 If I re-run deploy again using IP for the NFS export, the symlinks will look
 good and go forward with the process.

IMO if you always supplied a name (even if resolvable only by /etc/hosts) and
not an IP address, the mounts should always use the name. We have a different
bug [1] which seems similar, but is related to the name of the host, not of
the nfs server.
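
A quick way to see what was actually recorded on each host (default paths,
adjust if yours differ) is something like:

  grep ^storage= /etc/ovirt-hosted-engine/hosted-engine.conf
  ls -l /rhev/data-center/mnt/*_engine/*/ha_agent/

which shows whether the storage was saved by name or by IP, and where the
hosted-engine.lockspace/metadata symlinks currently point.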

Can you please post setup logs of all of the hosts?
/var/log/ovirt-hosted-engine-setup/*

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1178535

 
 2) Then something killed the VM at this point. Sure it was running.
 [ INFO ] Configuring VDSM
 [ INFO ] Starting vdsmd
 [ INFO ] Waiting for VDSM hardware info
 [ INFO ] Waiting for VDSM hardware info
 [ INFO ] Connected to Storage Domain
 [ INFO ] Configuring VM
 [ INFO ] Updating hosted-engine configuration
 [ INFO ] Stage: Transaction commit
 [ INFO ] Stage: Closing up
 [ ERROR ] Failed to execute stage 'Closing up': urlopen error [Errno 113] No
 route to host

These logs will help here too.

Thanks,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-engine-lockspace broken symlinks

2015-01-12 Thread Will K
Hi,
still working on this hosted-engine setup.  When deploying hosted-engine on the
2nd node, hosted-engine.lockspace and hosted-engine.metadata cannot be found.

1)  Node1 is up with hosted engine installed on a GlusterFS volume.  When trying
to deploy hosted engine on node2, I specified the storage path to be the nfs
export available on ovirtmount.xyz.com.  ovirtmount is just an entry in the hosts
file pointing to node1, as in "Up and Running with oVirt 3.5".

The deploy process mounted the nfs export under
/rhev/data-center/mnt/ovirtmount.xyz.com:_engine/9d2142eb-f414-46f1-895a-95099aeb7f69/ha_agent

I found symlinks pointing to /rhev/data-center/mnt/IP:_engine/ instead of ovirtmount:
    hosted-engine-lockspace
    hosted-engine-metadata

If I re-run the deploy using the IP for the NFS export, the symlinks look
good and the process goes forward.
2) Then something killed the VM at this point. Sure it was running.

[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Connected to Storage Domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ ERROR ] Failed to execute stage 'Closing up': urlopen error [Errno 113] No route to host
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO  ] Answer file '/etc/ovirt-hosted-engine/answers.conf' has been updated
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

Will
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] Failed to create volume in OVirt with gluster

2015-01-12 Thread Punit Dambiwal
Hi Atin,

Please find the output from here :- http://ur1.ca/jf4bs

On Tue, Jan 13, 2015 at 12:37 PM, Atin Mukherjee amukh...@redhat.com
wrote:

 Punit,

 cli log wouldn't help much here. To debug this issue further can you
 please let us know the following:

 1. gluster peer status output
 2. gluster volume status output
 3. gluster --version output.
 4. Which command got failed
 5. glusterd log file of all the nodes

 ~Atin


 On 01/13/2015 07:48 AM, Punit Dambiwal wrote:
  Hi,
 
  Please find the more details on this can anybody from gluster will
 help
  me here :-
 
 
  Gluster CLI Logs :- /var/log/glusterfs/cli.log
 
  [2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs:
 got
  RPC_CLNT_CONNECT
  [2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify]
  0-glusterfs: got RPC_CLNT_CONNECT
  [2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler]
  0-transport: disconnecting now
  [2015-01-13 02:06:23.072055] T
 [cli-quotad-client.c:100:cli_quotad_notify]
  0-glusterfs: got RPC_CLNT_DISCONNECT
  [2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record]
  0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:
  [2015-01-13 02:06:23.072176] T
  [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request
 fraglen
  128, payload: 64, rpc hdr: 64
  [2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--
  /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420]
 (--
 
 /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293]
  (-- /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--
  /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--
  /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ) 0-glusterfs:
 connect
  () called on transport already connected
  [2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit]
  0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers:
 2,
  Proc: 27) to rpc-transport (glusterfs)
  [2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping]
  0-glusterfs: ping timeout is 0, returning
  [2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init]
  0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI,
  ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)
  [2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk]
  0-cli: Received response to status cmd
  [2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli:
  Returning 0
  [2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume]
  0-cli: Returning: 0
  [2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output]
  0-cli: Returning 0
  [2015-01-13 02:06:23.076244] D
 [cli-xml-output.c:131:cli_xml_output_common]
  0-cli: Returning 0
  [2015-01-13 02:06:23.076256] D
  [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning
 0
  [2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output]
  0-cli: Returning 0
  [2015-01-13 02:06:23.076459] D
  [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0
  [2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0
 
  Command log :- /var/log/glusterfs/.cmd_log_history
 
  Staging failed on ----. Please check log
  file for details.
  Staging failed on ----. Please check log
  file for details.
  [2015-01-13 01:10:35.836676]  : volume status all tasks : FAILED :
 Staging
  failed on ----. Please check log file for
  details.
  Staging failed on ----. Please check log
  file for details.
  Staging failed on ----. Please check log
  file for details.
  [2015-01-13 01:16:25.956514]  : volume status all tasks : FAILED :
 Staging
  failed on ----. Please check log file for
  details.
  Staging failed on ----. Please check log
  file for details.
  Staging failed on ----. Please check log
  file for details.
  [2015-01-13 01:17:36.977833]  : volume status all tasks : FAILED :
 Staging
  failed on ----. Please check log file for
  details.
  Staging failed on ----. Please check log
  file for details.
  Staging failed on ----. Please check log
  file for details.
  [2015-01-13 01:21:07.048053]  : volume status all tasks : FAILED :
 Staging
  failed on ----. Please check log file for
  details.
  Staging failed on ----. Please check log
  file for details.
  Staging failed on ----. Please check log
  file for details.
  [2015-01-13 01:26:57.168661]  : volume status all tasks : FAILED :
 Staging
  failed on ----. Please 

Re: [ovirt-users] deploy failed with Engine health status page is not yet reachable.

2015-01-12 Thread Yedidyah Bar David
- Original Message -
 From: Will K yetanotherw...@yahoo.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: users@ovirt.org
 Sent: Tuesday, January 13, 2015 6:02:47 AM
 Subject: Re: [ovirt-users] deploy failed with Engine health status page is 
 not yet reachable.
 
 thanks, the URL helped troubleshoot it.
 
 W

Thanks for the report!

Would you like to open a bug about this?

This will be useful even if it was your own error, if setup could have
helped you find it.

 
  On Monday, January 12, 2015 4:18 AM, Yedidyah Bar David
  d...@redhat.com wrote:

 
  - Original Message -
  From: Will K yetanotherw...@yahoo.com
  To: users@ovirt.org
  Sent: Monday, January 12, 2015 10:54:25 AM
  Subject: [ovirt-users] deploy failed with Engine health status page is not
  yet reachable.
  
  Hi list,
  
  I've been trying to setup hosted-engine on 2 nodes for a few times. The
  latest failure which I can't figure out is the Engine health status page
  is
  not yet reachable.
  
  I'm running CentOS 6.6 fully patched before setting up 3.5. Sure I run
  screen, I also need to setup three vars to get around a proxy server,
  http_proxy, https_proxy, no_proxy.
  
  `hosted-engine --deploy` went fine until the last steps. After `engine
  setup
  is completed`, I picked (1) Continue setup. It says 'Waiting for VM to
  shutdown`. Once the VM is up again, I waited for a couple of minutes, then
  pick (1) again, it gives me
  
  Engine health status page is not yet reachable.
  
  At this point, I can ssh to the VM, I can also access and login the web
  GUI.
 
 So the health page _is_ probably reachable, but for some reason the host
 can't access it.
 
  
  Any suggestion?
 
 Check/post logs, check correctness of names/IP addresses/etc., iptables, try
 to manually get the health page from the host and see if it works, e.g.
 
 curl http://{fqdn}/ovirt-engine/services/health
 
 (replace {fqdn} with the name you input as engine fqdn).
 
 Best,
 --
 Didi
 
 
 

-- 
Didi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Setting Base DN for LDAP authentication

2015-01-12 Thread jdeloro
Hello Alon,

 All over? :)

traveling into the past. This e-mail comes from Mailman's moderator queue. It
was sent before I was registered, and after registering I was unable to cancel
it, because Mailman always told me the token was unknown.

The problem is solved to my complete satisfaction.

Kind regards

Jannick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Centos 7 unable to mount NFS on gluster members

2015-01-12 Thread Karli Sjöberg
On Mon, 2015-01-12 at 20:03 -0700, Donny Davis wrote:
 Here is a quick rundown of the system, and the problem.
 
  
 
 All hosts on centOS 7 fully up to date
 
 GlusterFS is running on 6 servers, 3x2 distribute/replicate. 
 
 CTDB is running on all hosts
 
 I am unable to mount via nfs the exported volume on any of the gluster
 server members.
 
  
 
 I am able to mount, read, write, umount from any server that is not a
 gluster member. 
 
  
 
 Topology – all are hostnames that are resolvable 
 
 Gluster Members
 
 Node1
 
 Node2
 
 Node3
 
 Node4
 
 Node5
 
 Node6
 
  
 
 CTDB Virtual IP/Hostname
 
 SharedNFS
 
  
 
 Test Machine
 
 Test1
 
  
 
 Gluster Volumes
 
 Engine
 
 Data
 
  
 
 I am trying to bring up hosted-engine using nfs using the gluster
 members
 
 I run hosted-engine --deploy
 
 Nfsv3
 
  
 
 Host:/path sharednfs:/engine
 
 Error while mounting specified storage path: mount.nfs: an incorrect
 mount option was specified
 
  
 
 Ok well lets try that without using the hosted-engine script
 
  
 
 mount -v -t nfs -o vers=3 sharednfs:/engine /tmp
 
 mount.nfs: timeout set for Mon Jan 12 22:00:08 2015
 
 mount.nfs: trying text-based options
 'vers=3,lock=Flase,addr=192.168.0.240
 
 mount.nfs: prog 13, trying vers=3, prot=6
 
 mount.nfs: trying 192.168.0.240 prog 13 vers 3 prot TCP port 2049
 
 mount.nfs: prog 15, trying vers=3, prot=17
 
 mount.nfs: portmap query retrying: RPC: Program not registered
 
 mount.nfs: prog 15, trying vers=3, prot=6
 
 mount.nfs: trying 192.168.0.240 prog 15 vers 3 prot TCP port 38465
 
 mount.nfs: mount(2): Invalid argument
 
 mount.nfs: an incorrect mount option was specified
 
  
 
  
 
 [root@node4 ~]# systemctl status rpcbind
 
 rpcbind.service - RPC bind service
 
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
 
Active: active (running) since Mon 2015-01-12 20:01:13 EST; 1h
 57min ago
 
   Process: 1349 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS}
 (code=exited, status=0/SUCCESS)
 
 Main PID: 1353 (rpcbind)
 
CGroup: /system.slice/rpcbind.service
 
└─1353 /sbin/rpcbind -w
 
  
 
 Jan 12 20:01:13 node4 systemd[1]: Starting RPC bind service...
 
 Jan 12 20:01:13 node4 systemd[1]: Started RPC bind service.
 
 Jan 12 21:19:22 node4 systemd[1]: Started RPC bind service.
 
  
 
  
 
 Ummm… this makes no sense…. 
 
 [root@test1 ~]# mount -v -o vers=3 -t nfs 192.168.0.240:/engine /tmp
 
 mount.nfs: timeout set for Mon Jan 12 20:02:58 2015
 
 mount.nfs: trying text-based options 'vers=3,addr=192.168.0.240
 
 mount.nfs: prog 13, trying vers=3, prot=6
 
 mount.nfs: trying 192.168.0.240 prog 13 vers 3 prot TCP port 2049
 
 mount.nfs: prog 15, trying vers=3, prot=17
 
 mount.nfs: portmap query retrying: RPC: Program not registered
 
 mount.nfs: prog 15, trying vers=3, prot=6
 
 mount.nfs: trying 192.168.0.240 prog 15 vers 3 prot TCP port 38465
 
 192.168.0.240:/engine on /tmp type nfs (rw,vers=3)
 
  
 
  
 
  
 
 On the test machine mounts the nfs share with no problems. I have
 confirmed this does not work on a single machine that is part of the
 gluster.  And any other machine is able to mount the exact same share,
 with the exact same parameters… on the exact same OS….
 
  
 
 I am at a loss

iptables? Can you ping 'sharednfs'? SSH in on it? 
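
For example, from one of the gluster members (sharednfs and 192.168.0.240 are
the CTDB name/address from your mail):

  ping -c 3 sharednfs
  ssh sharednfs uptime
  # compare what the portmapper advertises on a member vs. on the test box
  rpcinfo -p 192.168.0.240
  showmount -e 192.168.0.240
  # and look for REJECT/DROP rules
  iptables -S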

/K

 
  
 
 Donny D
 
  
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



-- 

Med Vänliga Hälsningar

---
Karli Sjöberg
Swedish University of Agricultural Sciences Box 7079 (Visiting Address
Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone:  +46-(0)18-67 15 66
karli.sjob...@slu.se
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-12 Thread Dan Kenigsberg
On Mon, Jan 12, 2015 at 02:59:50PM +0200, Lior Vernia wrote:
 
 
 On 12/01/15 14:44, Oved Ourfali wrote:
  Hi Sahina,
  
  Some comments:
  
  1. As far as I understand, you might not have an IP available immediately 
  after setupNetworks runs (getCapabilities should run, but it isn't run 
  automatically, afair).
  2. Perhaps you should pass not the IP but the name of the network? IPs 
  might change.
 
 Actually, IP address can indeed change - which would be very bad for
 gluster functioning! I think moving networks or changing their IP
 addresses via Setup Networks should be blocked if they're used by
 gluster bricks.

In the suggested feature, there is no real storage role. The storage
role title only means default value for the glusterfs IP.

For example, once a brick has been created, nothing protects the admin from
accidentally removing the storage network, or changing its IP address.

Another proof that this is not a real role is that it affects only the
GUI: I am guessing that the REST API would not make use of it at all. (Maybe
I'm wrong; in any case, the REST behaviour must be defined in the feature page.)

Maybe that's the behavior we want. But alternatively, Engine can enforce
a stronger linkage between the brick and the network that it uses. When
adding a brick, the dialog would list available networks instead of the
specific IP. As long as the brick is in use, the admin would be
blocked/warned against deleting the network.

I'm missing a discussion regarding the upgrade path. If we opt to
require a single storage role network in a cluster, then in an upgraded
cluster the management network should take this role.

 
  3. Adding to 2, perhaps using DNS names is a more valid approach?
  4. You're using the terminology role, but it might be confusing, as we 
  have roles with regards to permissions. Consider changing storage usage 
  and not storage role in the feature page.
 
 Well, we've already been using this terminology for a while now
 concerning display/migration roles for networks... That's probably the
 terminology to use.
 
  
  Thanks,
  Oved
  
  - Original Message -
  From: Sahina Bose sab...@redhat.com
  To: de...@ovirt.org, users users@ovirt.org
  Sent: Monday, January 12, 2015 2:00:16 PM
  Subject: [ovirt-users] [Feature review] Select network to be used for  
  glusterfs
 
  Hi all,
 
  Please review the feature page for this proposed solution and provide
  your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster
 
  thanks
  sahina
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users