Re: [ovirt-users] Shared storage between DC

2016-05-02 Thread Idan Shaby
Hi Arsène,

There's no way to have a storage domain that is available in one cluster
but not in another within the same DC. This is by design.


Regards,
Idan

On Mon, May 2, 2016 at 6:39 PM, Arsène Gschwind wrote:

> Hi Idan,
>
> Thanks a lot for your answer, that's clear now.
> I will change my setup and use 2 clusters in the same DC instead.
> In the case of using 2 clusters in the same DC, I've noticed that all storage
> domains have to be available to all clusters; in my case I use SAN storage.
> Is this by design, or is there a way to have some storage domains only
> available on one cluster but not on the others?
>
> Regards,
> Arsène
>
>
> On 05/02/2016 02:33 PM, Idan Shaby wrote:
>
> Hi Arsène,
>
> The only storage domain that can be shared between datacenters is the ISO
> domain, but that's not what you're looking for. You can't share a data domain
> between two datacenters.
>
> As for importing a storage domain which is active on one datacenter to
> another datacenter, there are two cases:
> - If the datacenters are in the same oVirt setup, that won't be possible,
> and the operation will not be performed.
> - If the datacenters are in two different oVirt setups, the import to the
> second datacenter will succeed, but you will experience issues in the first
> datacenter.
> The reason for that is that the host in the second datacenter overwrites
> the storage domain's metadata when importing it.
>
> Therefore, you cannot import a storage domain into a second datacenter while
> it is still active on the first one.
>
>
> Regards,
> Idan
>
> On Mon, May 2, 2016 at 10:44 AM, Arsène Gschwind <arsene.gschw...@unibas.ch> wrote:
>
>> Hi Roberto,
>>
>> Thanks for your information, but when I write about DCs I mean DCs defined
>> in oVirt. So far I could see that when you define a storage domain in one
>> oVirt DC it will not be available in another DC, and I don't know what
>> would happen if I imported it into the second DC.
>> In your setup, did you define just 1 DC and multiple clusters?
>>
>> Regards,
>> Arsène
>>
>>
>> On 05/01/2016 01:37 PM, NUNIN Roberto wrote:
>>
>> Hi
>> I have in production a scenario similar to what you've described.
>> The "enabling factor" is a set of "storage virtualization" appliances that
>> maintain a mirrored logical volume over FC physical volumes across two
>> distinct datacenters, while giving simultaneous read/write access to the
>> cluster hypervisors, split between the datacenters, that run the VMs.
>>
>> So: the cluster is also spread across the DCs, and there is no need to
>> import anything.
>> Regards,
>>
>> *Roberto*
>>
>>
>>
>> On 1 May 2016, at 10:37, Arsène Gschwind <arsene.gschw...@unibas.ch> wrote:
>>
>> Hi,
>>
>> Is it possible to have a shared storage domain between 2 datacenters in
>> oVirt?
>> We replicate an FC volume between 2 datacenters using FC SAN storage
>> technology, and we have an oVirt cluster on each site, defined in separate
>> DCs. The idea behind this is to set up a DR site and also balance the load
>> between the sites.
>> What happens if I import a storage domain that is already active in one DC -
>> will it break the storage domain?
>>
>> Thanks for any information..
>> Regards,
>> Arsène
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>>
>>
>
>


Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Sahina Bose



On 05/02/2016 09:36 PM, Maor Lipchuk wrote:

On Mon, May 2, 2016 at 5:45 PM, Sahina Bose  wrote:



On 05/02/2016 05:57 PM, Maor Lipchuk wrote:



On Mon, May 2, 2016 at 1:08 PM, Sahina Bose  wrote:



On 05/02/2016 03:15 PM, Maor Lipchuk wrote:



On Mon, May 2, 2016 at 12:29 PM, Sahina Bose  wrote:



On 05/01/2016 05:33 AM, Maor Lipchuk wrote:

Hi Sahina,

The disks with snapshots should be part of the VMs; once you register
those VMs, you should see those disks in the Disks sub-tab.


Maor,

I was unable to import the VM, which prompted the question - I assumed we had
to register the disks first. So maybe I need to troubleshoot why I could not
import VMs from the domain first.
It fails with an error "Image does not exist". Where does it look for the volume
IDs it passes to GetImageInfoVDSCommand - the OVF disk?
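As background to that question: the VM configuration kept on the domain (the OVF_STORE) is OVF XML in which each disk entry carries an image/volume pair in its `ovf:fileRef` attribute. A minimal sketch of pulling those pairs out with the standard library — the sample document below only imitates the general shape of an oVirt OVF and is not taken from this domain:

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1/"

def disk_volume_ids(ovf_xml):
    """Collect (imageID, volumeID) pairs from <Disk ovf:fileRef="img/vol"> entries."""
    root = ET.fromstring(ovf_xml)
    pairs = []
    for disk in root.iter("Disk"):
        ref = disk.get("{%s}fileRef" % OVF_NS)
        if ref and "/" in ref:
            image_id, volume_id = ref.split("/", 1)
            pairs.append((image_id, volume_id))
    return pairs

# Hypothetical OVF fragment mimicking the structure (not taken from this setup):
sample = """<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/">
  <Section>
    <Disk ovf:fileRef="c52e4e02-dc6c-4a77-a184-9fcab88106c2/6f4da17a-05a2-4d77-8091-d2fca3bbea1c"/>
  </Section>
</ovf:Envelope>"""

print(disk_volume_ids(sample))
```

If the engine took the volume ID it queries from such an OVF entry, a stale fileRef would explain a lookup for a volume that no longer exists on disk.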


In engine.log

2016-05-02 04:15:14,812 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] IrsBroker::getImageInfo::Failed getting image info imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on domainName='sahinaslave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code: 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
2016-05-02 04:15:14,814 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] executeIrsBrokerCommand: getImageInfo on '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image doesn't exist: IRSGenericException: IRSErrorException: VolumeDoesNotExist
2016-05-02 04:15:14,814 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
2016-05-02 04:15:14,814 WARN  [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action 'ImportVmFromConfiguration' failed for user admin@internal. Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST



jsonrpc.Executor/2::DEBUG::2016-05-02 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.getInfo' in bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2', u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeID': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID': u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}

jsonrpc.Executor/2::DEBUG::2016-05-02 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
jsonrpc.Executor/2::ERROR::2016-05-02 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError) Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
    volUUID=volUUID).getInfo()
  File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
    volUUID)
  File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
    volUUID)
  File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
    volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
    self.validate()
  File "/usr/share/vdsm/storage/volume.py", line 194, in validate
    self.validateVolumePath()
  File "/usr/share/vdsm/storage/fileVolume.py", line 540, in validateVolumePath
    raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)

When I look at the tree output - there's no 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.


├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
│   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
│   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
│   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
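The manual tree inspection above can be scripted: walk the domain's images directory and report every file whose basename matches the volume UUID. This is only a generic search sketch; the mount path shown in the comment is illustrative, assuming the usual file-based domain layout of `images/<imageID>/<volumeID>`:

```python
import os

def find_volume(images_dir, vol_uuid):
    """Walk images_dir and return the full paths of files named exactly vol_uuid."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(images_dir):
        for name in filenames:
            if name == vol_uuid:
                hits.append(os.path.join(dirpath, name))
    return hits

# Illustrative call for a file-based (gluster/NFS) domain:
#   find_volume("/rhev/data-center/mnt/<server:_path>/<sdUUID>/images",
#               "6f4da17a-05a2-4d77-8091-d2fca3bbea1c")
```

An empty result, as here, confirms the volume really is missing from the domain rather than merely misplaced under another image directory.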



Usually the "image does not exist" message is prompted when the VM's disks are
managed in a different storage domain which was not imported yet.

A few questions:
1. Were there any other storage domains which are not present in the setup?


In the original RHEV instance there were 3 storage domains:
i) Hosted engine storage domain: engine
ii) Master data domain: vmstore
iii) An export domain: expVol (no data here)

To my backup RHEV server, I only 

[ovirt-users] Additional hosted-engine host deployment failed due to message timeout

2016-05-02 Thread Wee Sritippho

Hi,

I'm making a fresh hosted-engine installation on 3 hosts. The first 2 hosts
succeeded, but the 3rd one got stuck in the termination state ("Installing Host
hosted_engine_3. Stage: Termination."); then, 3 minutes later: "VDSM
hosted_engine_3 command failed: Message timeout which can be caused by
communication issues". Currently, the 3rd host's status in the web UI is stuck
at "Installing".


How can I proceed? Could I just run this script 
 
in the 3rd host, and run "hosted-engine --deploy" again?


log files: https://app.box.com/s/a5typfe6cbozs9uo9osg68gtmq8793t6

[me@host03 ~]$ rpm -qa | grep ovirt
ovirt-release36-007-1.noarch
ovirt-hosted-engine-setup-1.3.5.0-1.1.el7.noarch
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-engine-sdk-python-3.6.5.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.3-1.1.el7.noarch

[me@host03 ~]$ rpm -qa | grep vdsm
vdsm-hook-vmfex-dev-4.17.26-1.el7.noarch
vdsm-xmlrpc-4.17.26-1.el7.noarch
vdsm-infra-4.17.26-1.el7.noarch
vdsm-yajsonrpc-4.17.26-1.el7.noarch
vdsm-python-4.17.26-1.el7.noarch
vdsm-4.17.26-1.el7.noarch
vdsm-cli-4.17.26-1.el7.noarch
vdsm-jsonrpc-4.17.26-1.el7.noarch

[root@engine ~]# rpm -qa | grep ovirt
ovirt-engine-wildfly-8.2.1-1.el7.x86_64
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.5.3-1.el7.centos.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.5.3-1.el7.centos.noarch
ovirt-engine-backend-3.6.5.3-1.el7.centos.noarch
ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-3.6.5.3-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-release36-007-1.noarch
ovirt-engine-sdk-python-3.6.5.0-1.el7.centos.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
ovirt-engine-setup-base-3.6.5.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.5.3-1.el7.centos.noarch
ovirt-engine-tools-backup-3.6.5.3-1.el7.centos.noarch
ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.5.3-1.el7.centos.noarch
ovirt-engine-setup-3.6.5.3-1.el7.centos.noarch
ovirt-engine-webadmin-portal-3.6.5.3-1.el7.centos.noarch
ovirt-engine-tools-3.6.5.3-1.el7.centos.noarch
ovirt-engine-restapi-3.6.5.3-1.el7.centos.noarch
ovirt-engine-3.6.5.3-1.el7.centos.noarch
ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
ovirt-engine-lib-3.6.5.3-1.el7.centos.noarch
ovirt-engine-websocket-proxy-3.6.5.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.5.3-1.el7.centos.noarch
ovirt-engine-userportal-3.6.5.3-1.el7.centos.noarch
ovirt-engine-dbscripts-3.6.5.3-1.el7.centos.noarch

Thanks,

--
Wee

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Web administration page gone

2016-05-02 Thread Francis Yap
Hi All, my oVirt web admin page was overwritten by a Foreman installation. I
have removed Foreman, but the oVirt admin page is still not found; the running
VMs and the engine are not affected.

How do I recover the admin page?


-- 
Thanks & Regards
Francis Yap
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vms in paused state

2016-05-02 Thread Bill James

.recovery setting before removing:
p298
sS'status'
p299
S'Paused'
p300



After removing the .recovery file, shutting down, and restarting:
V0
sS'status'
p51
S'Up'
p52


So far it looks good; the GUI shows the VM as Up.


Another host was:
p318
sS'status'
p319
S'Paused'
p320

After moving the .recovery file away and restarting:
V0
sS'status'
p51
S'Up'


Thanks.
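The `p298`/`sS'status'` lines quoted above are protocol-0 Python pickle text — the .recovery files are pickled state. A small sketch for checking the recorded status without hand-reading the pickle; it assumes the file unpickles to a plain dict with a `'status'` key, which may not hold for every vdsm version:

```python
import pickle
import tempfile

def recovery_status(path):
    """Unpickle a vdsm-style .recovery file and return its 'status' field, if any."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    return data.get("status") if isinstance(data, dict) else None

# Demo with a synthetic file shaped like the excerpts above:
demo = tempfile.NamedTemporaryFile(suffix=".recovery", delete=False)
demo.close()
with open(demo.name, "wb") as f:
    # Protocol 0 produces the ASCII p-numbered form quoted in this thread.
    pickle.dump({"status": "Paused"}, f, protocol=0)
print(recovery_status(demo.name))  # Paused
```

Run against a copy of the real file, never the live one, since vdsm owns it.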

On 04/29/2016 02:36 PM, Nir Soffer wrote:

/run/vdsm/.recovery

On Fri, Apr 29, 2016 at 10:59 PM, Bill James wrote:


where do I find the recovery files?

[root@ovirt1 test vdsm]# pwd
/var/lib/vdsm
[root@ovirt1 test vdsm]# ls -la
total 16
drwxr-xr-x   6 vdsm kvm100 Mar 17 16:33 .
drwxr-xr-x. 45 root root  4096 Apr 29 12:01 ..
-rw-r--r--   1 vdsm kvm  10170 Jan 19 05:04 bonding-defaults.json
drwxr-xr-x   2 vdsm root 6 Apr 19 11:34 netconfback
drwxr-xr-x   3 vdsm kvm 54 Apr 19 11:35 persistence
drwxr-x---.  2 vdsm kvm  6 Mar 17 16:33 transient
drwxr-xr-x   2 vdsm kvm 40 Mar 17 16:33 upgrade



On 4/29/16 10:02 AM, Michal Skrivanek wrote:



On 29 Apr 2016, at 18:26, Bill James wrote:


yes they are still saying "paused" state.
No, bouncing libvirt didn't help.


Then my suspicion of the vm recovery code gets closer to a certainty :)
Can you get one of the paused VMs' .recovery files from
/var/lib/vdsm and check whether it says Paused there? It's worth a shot
to remove that file and restart vdsm, then check the logs and
that VM's status... it should recover "good enough" from libvirt alone.
Try it with one VM first.


I noticed the errors about the ISO domain. Didn't think that was
related.
I have been migrating a lot of VMs to ovirt lately, and recently
added another node.
Also had some problems with /etc/exports for a while, but I
think those issues are all resolved.


Last "unresponsive" message in vdsm.log was:

vdsm.log.49.xz:jsonrpc.Executor/0::WARNING::2016-04-21 11:00:54,703::vm::5067::virt.vm::(_setUnresponsiveIfTimeout) vmId=`b6a13808-9552-401b-840b-4f7022e8293d`::monitor become unresponsive (command timeout, age=310323.97)
vdsm.log.49.xz:jsonrpc.Executor/0::WARNING::2016-04-21 11:00:54,703::vm::5067::virt.vm::(_setUnresponsiveIfTimeout) vmId=`5bfb140a-a971-4c9c-82c6-277929eb45d4`::monitor become unresponsive (command timeout, age=310323.97)



Thanks.



On 4/29/16 1:40 AM, Michal Skrivanek wrote:



On 28 Apr 2016, at 19:40, Bill James wrote:

thank you for response.
I bold-ed the ones that are listed as "paused".


[root@ovirt1 test vdsm]# virsh -r list --all
 Id    Name                           State


Looks like problem started around 2016-04-17 20:19:34,822,
based on engine.log attached.


yes, that time looks correct. Any idea what might have been a
trigger? Anything interesting that happened at that time (power
outage of some host, some maintenance action, anything)?
The logs indicate a problem when vdsm talks to libvirt (all those
"monitor become unresponsive" messages).

It does seem that at that time you started to have some storage
connectivity issues - the first one at 2016-04-17 20:06:53,929.
And it doesn't look temporary, because such errors are still
there a couple of hours later (in the most recent file you attached
I can see one at 23:00:54).
When I/O gets blocked, the VMs may experience issues (then the VM
gets Paused), or their qemu process gets stuck (resulting in
libvirt either reporting an error or getting stuck as well ->
resulting in what vdsm sees as "monitor unresponsive").

Since you now bounced libvirtd - did it help? Do you still see the
wrong status for those VMs, and still those "monitor
unresponsive" errors in vdsm.log?
If not... then I would suspect the "vm recovery" code of not
working correctly. Milan is looking at that.

Thanks,
michal



There are a lot of vdsm logs!

FYI, the storage domain for these VMs is a "local" NFS share,
7e566f55-e060-47b7-bfa4-ac3c48d70dda.

attached more logs.


On 04/28/2016 12:53 AM, Michal Skrivanek wrote:

On 27 Apr 2016, at 19:16, Bill James wrote:

virsh # list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such 
file or directory


you need to run virsh in read-only mode
virsh -r list --all


[root@ovirt1 test vdsm]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-02 Thread Langley, Robert
Correction: I verified that on the Gluster volume "engine-vol", nfs.disable is
off. Not sure if that is significant or not.


Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Gianluca Cecchi
On Mon, May 2, 2016 at 8:39 PM, Gianluca Cecchi wrote:

> On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi wrote:
>
>>
>> >>>
>> >>> Can you please check the entropy value on your host?
>> >>>  cat /proc/sys/kernel/random/entropy_avail
>> >>>
>> >>
>> >> I have not at hand now the server. I'll check soon and report
>> >> Do you mean entropy of the physical server that will operate as
>> hypervisor?
>>
>> On the hypervisor
>>
>> > That's a good question. Simone - do you know if we start the guest with
>> > virtio-rng?
>>
>> AFAIK we are not.
>>
>>
> On the only existing hypervisor, just after booting and exiting global
> maintenance, causing hosted engine to start, I have
>
> [root@ovirt01 ~]# uptime
>  20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11
>
> [root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
> 3084
>
> BTW on the self hosted engine VM:
> [root@ovirt ~]# uptime
>  18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13
>
> [root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
> 14
>
> On the hypervisor:
> [root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
> [root@ovirt01 ~]#
>
> On engine VM:
> [root@ovirt ~]# ll /dev/hwrng
> ls: cannot access /dev/hwrng: No such file or directory
> [root@ovirt ~]#
>
> [root@ovirt ~]# lsmod | grep virtio_rng
> [root@ovirt ~]#
>
> May I change anything so that engine VM has virtio-rng enabled?
>
> Gianluca
>
>
>
I verified a very slow login time in webadmin after the welcome page, with my
configuration that is for now based on /etc/hosts.
After reading a previous post, and seeing only 114 bits of entropy in the
hosted engine VM after about 30 minutes, I did this in the engine VM:

yum install haveged
systemctl enable haveged

put host in global maintenance
shutdown engine VM
exit from maintenance

engine VM starts and immediately I have:

[root@ovirt ~]# uptime
 19:05:10 up 0 min,  1 user,  load average: 0.68, 0.20, 0.07

[root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
1369

And login in the web admin page is now almost immediate.
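The repeated `cat` of `entropy_avail` can be wrapped in a tiny helper that also flags a starved pool; the 200-bit threshold below is an arbitrary illustration, not an oVirt-recommended value:

```python
def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's available-entropy estimate, in bits."""
    with open(path) as f:
        return int(f.read().strip())

def is_starved(bits, threshold=200):
    """True when the pool is low enough that /dev/random readers will likely block."""
    return bits < threshold

# On the engine VM above this read 14 (starved); after installing
# haveged it read 1369.
```

Polling this on the engine VM makes it easy to confirm that haveged (or virtio-rng) is actually keeping the pool filled.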

Inside the thread I read:
http://lists.ovirt.org/pipermail/users/2016-April/038805.html

it wasn't clear whether I can edit the engine VM in webadmin (or by some other
means) and enable the random generator option, or whether the haveged way is
the one to go with in the case of a self-hosted engine.
Is there a list of what I can change (if anything), and what not, for the
engine VM?
For example, I would like to change the time zone, which is GMT now (I think
inherited from the OVF of the appliance?).

Thanks,
Gianluca


Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Gianluca Cecchi
On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi wrote:

>
> >>>
> >>> Can you please check the entropy value on your host?
> >>>  cat /proc/sys/kernel/random/entropy_avail
> >>>
> >>
> >> I have not at hand now the server. I'll check soon and report
> >> Do you mean entropy of the physical server that will operate as
> hypervisor?
>
> On the hypervisor
>
> > That's a good question. Simone - do you know if we start the guest with
> > virtio-rng?
>
> AFAIK we are not.
>
>
On the only existing hypervisor, just after booting and exiting global
maintenance, causing hosted engine to start, I have

[root@ovirt01 ~]# uptime
 20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11

[root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
3084

BTW on the self hosted engine VM:
[root@ovirt ~]# uptime
 18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13

[root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
14

On the hypervisor:
[root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
[root@ovirt01 ~]#

On engine VM:
[root@ovirt ~]# ll /dev/hwrng
ls: cannot access /dev/hwrng: No such file or directory
[root@ovirt ~]#

[root@ovirt ~]# lsmod | grep virtio_rng
[root@ovirt ~]#

May I change anything so that engine VM has virtio-rng enabled?

Gianluca
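For reference, on the libvirt side virtio-rng is a short <rng> stanza in the domain XML. This shows only the device shape; whether hosted-engine 3.6 lets you persist such a change for the engine VM is exactly the open question in this thread:

```xml
<devices>
  <!-- expose a paravirtual RNG, fed from the host's /dev/urandom,
       as /dev/hwrng inside the guest -->
  <rng model='virtio'>
    <backend model='random'>/dev/urandom</backend>
  </rng>
</devices>
```

When the device is present, the guest's `lsmod | grep virtio_rng` and `ls /dev/hwrng` checks shown above should both succeed.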


Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-02 Thread Langley, Robert
Hi Sahina,

Thank you for your response. Let me know if you'll need any of the log from
before the Storage Configuration section. I looked at this earlier, and I was
wondering why, after choosing to use GlusterFS, there is still a reference to
NFS (nfs.py)? I do believe NFS is disabled in my Gluster config for the engine
cluster.
-Robert

2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:SEND --== STORAGE CONFIGURATION ==--
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:SEND
2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 Stage customization METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:SEND During customization use CTRL-D to abort.
2016-05-02 09:16:53 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._check_existing_pools:1100 _check_existing_pools
2016-05-02 09:16:53 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._check_existing_pools:1101 getConnectedStoragePoolsList
2016-05-02 09:16:53 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._check_existing_pools:1103 {'status': {'message': 'OK', 'code': 0}, 'poollist': []}
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:SEND Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:RECEIVE glusterfs
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT DUMP - BEGIN
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 ENV OVEHOSTED_STORAGE/domainType=str:'glusterfs'
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT DUMP - END
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage customization METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage customization METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 condition False
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage customization METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
2016-05-02 09:16:59 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.nfs nfs._customization:360 Please note that Replica 3 support is required for the shared storage.
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:SEND Please specify the full shared storage connection path to use (example: host:/path):
2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:RECEIVE gsave0.engine.local:/engine-vol
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.executeRaw:828 execute: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), executable='None', cwd='None', env=None
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.executeRaw:878 execute-result: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), rc=2
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.execute:936 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stdout:


2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.execute:941 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stderr:


2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs nfs._customization:395 exception
Traceback (most recent call last):
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py", line 390, in _customization
    check_space=False,
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py", line 302, in _validateDomain
    self._check_volume_properties(connection)
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py", line 179, in _check_volume_properties
    raiseOnError=True
  File 

Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Maor Lipchuk
On Mon, May 2, 2016 at 5:45 PM, Sahina Bose  wrote:
>
>
>
> On 05/02/2016 05:57 PM, Maor Lipchuk wrote:
>
>
>
> On Mon, May 2, 2016 at 1:08 PM, Sahina Bose  wrote:
>>
>>
>>
>> On 05/02/2016 03:15 PM, Maor Lipchuk wrote:
>>
>>
>>
>> On Mon, May 2, 2016 at 12:29 PM, Sahina Bose  wrote:
>>>
>>>
>>>
>>> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>>>
>>> Hi Sahina,
>>>
>>> The disks with snapshots should be part of the VMs, once you will register 
>>> those VMs you should see those disks in the disks sub tab.
>>>
>>>
>>> Maor,
>>>
>>> I was unable to import VM which prompted question - I assumed we had to 
>>> register disks first. So maybe I need to troubleshoot why I could not 
>>> import VMs from the domain first.
>>> It fails with an error "Image does not exist". Where does it look for 
>>> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>>>
>>>
>>> In engine.log
>>>
>>> 2016-05-02 04:15:14,812 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] IrsBroker::getImageInfo::Failed getting image info imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on domainName='sahinaslave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code: 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>>> 2016-05-02 04:15:14,814 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] executeIrsBrokerCommand: getImageInfo on '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image doesn't exist: IRSGenericException: IRSErrorException: VolumeDoesNotExist
>>> 2016-05-02 04:15:14,814 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
>>> 2016-05-02 04:15:14,814 WARN  [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand] (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action 'ImportVmFromConfiguration' failed for user admin@internal. Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>>>
>>>
>>>
>>> jsonrpc.Executor/2::DEBUG::2016-05-02 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.getInfo' in bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2', u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeID': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID': u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>>>
>>> jsonrpc.Executor/2::DEBUG::2016-05-02 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
>>> jsonrpc.Executor/2::ERROR::2016-05-02 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError) Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>>>     return fn(*args, **kargs)
>>>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>>>     res = f(*args, **kwargs)
>>>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
>>>     volUUID=volUUID).getInfo()
>>>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
>>>     volUUID)
>>>   File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
>>>     volUUID)
>>>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
>>>     volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>>>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
>>>     self.validate()
>>>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
>>>     self.validateVolumePath()
>>>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in validateVolumePath
>>>     raise se.VolumeDoesNotExist(self.volUUID)
>>> VolumeDoesNotExist: Volume does not exist: (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>>>
>>> When I look at the tree output - there's no 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.
>>>
>>>
>>> ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
>>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
>>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
>>> │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>>
>>
>>
>> Usually the "image does not exists" message is prompted once the VM's disk 
>> is managed in a 

Re: [ovirt-users] Error deleting template

2016-05-02 Thread Giulio Casella

Thank you Idan,
those queries solved my problem.
I really don't know when it happened, and with a thousand VMs running it's
not so easy to dig into the logs.


Thank you,
Giulio


Il 02/05/2016 15:39, Idan Shaby ha scritto:

Hi Giulio,

Indeed it seems that there's an inconsistency between the storage and
the database.
Somehow the template's disk was removed from the storage. Your logs
don't tell how and when.

This is what needs to be done in order to get rid of the stale entries -
run in the database:

Delete from vm_device where device_id =
'6949d47a-0d38-468e-ad4e-670372841174';
Delete from images where image_group_id =
'6949d47a-0d38-468e-ad4e-670372841174';
Delete from base_disks where disk_id =
'6949d47a-0d38-468e-ad4e-670372841174';
Delete from vm_static where vm_guid =
'6ec76d42-98a6-4094-a76c-af1b639c5b30';


If it happens again, please file a BZ with the full logs of engine and
vdsm, so we can investigate this issue.
Feel free to ask anything if you have further questions.

Regards
Idan

On Mon, May 2, 2016 at 2:36 PM, Giulio Casella wrote:

Hi Idan,
you can find attached some snippet of my logs (vdsm.log from SPM
host and engine.log from manager).
Anyway I think logs are clear: there's no template on disk, only on
engine database.

My setup is composed of 3 datacenters (with clusters of 4, 4, and 6
hosts respectively). I have about a hundred templates, and about
1500 virtual machines.

The step to reproduce is quite simple: in admin portal right click
on the "damaged" template, remove. Other templates deletion work fine.

Thanks,
Giulio

Il 01/05/2016 08:37, Idan Shaby ha scritto:

Hi,

Can you please attach the engine and vdsm logs?
Also, can you describe your setup and the steps that reproduced
this error?


Thanks,
Idan

On Wed, Apr 27, 2016 at 1:47 PM, Giulio Casella wrote:

Hi all,
I have a problem deleting a template from admin portal.
In file /var/log/vdsm/vdsm.log (on SPM hypervisor) I got:

jsonrpc.Executor/4::ERROR::2016-04-27
10:19:57,122::hsm::1518::Storage.HSM::(deleteImage) Empty or not
found image  in SD  [...]

Looking in the (data) storage domain the disk with that UUID
doesn't
exist.
It seems I reached an inconsistent state between engine
database and
images on disk.

Is there a (safe) way to rebuild a consistent situation? Maybe
deleting entries from database?

My setup is based on:
manager RHEV 3.5.8-0.1.el6ev
hypervisors: RHEV Hypervisor - 7.2 - 20160328.0.el7ev


Thanx in advance,
Giulio

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Giulio Casella  giulio at di.unimi.it
System and network manager
Computer Science Dept. - University of Milano




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Shared storage between DC

2016-05-02 Thread Arsène Gschwind

Hi Idan,

Thanks a lot for your answer, that's clear now.
I will change my setup and use 2 clusters in the same DC instead.
In the case of using 2 clusters in the same DC, I've noticed that all storage 
domains have to be available to all clusters; in my case I use SAN 
storage. Is this by design, or is there a way to have some storage domains 
only available on one cluster but not on the others?


Regards,
Arsène

On 05/02/2016 02:33 PM, Idan Shaby wrote:

Hi Arsène,

The only storage domain that can be shared between datacenters is the 
ISO domain, but that's not what you're looking for. You can't share a data domain 
between two datacenters.


As for importing a storage domain which is active on one datacenter to 
another datacenter, there are two cases:
- If the datacenters are in the same oVirt setup, that won't be 
possible, and the operation will not be performed.
- If the datacenters are in two different oVirt setups, the import to 
the second datacenter will succeed, but you will experience issues in 
the first datacenter.
The reason for that is that the host in the second datacenter 
overrides the storage domain metadata when importing it.


Therefore, you cannot import a storage domain to a second datacenter 
while it is active on the first one, anyway.



Regards,
Idan

On Mon, May 2, 2016 at 10:44 AM, Arsène Gschwind wrote:


Hi Roberto,

Thanks for your information, but when I'm writing about DC I mean a DC
defined in oVirt. So far I could see that when you define a storage
domain in one oVirt DC it will not be available in another DC, and I
don't know what would happen if I imported it on the second DC.
In your setup, did you just define 1 DC and multiple clusters?

Regards,
Arsène


On 05/01/2016 01:37 PM, NUNIN Roberto wrote:

Hi
I have in production a scenario similar to what you've
described.
The "enabling factor" is represented by a "storage
virtualization" set of appliances that maintains a mirrored logical
volume over FC physical volumes across two distinct datacenters,
while giving simultaneous rw access to the cluster hypervisors, split
between the datacenters, that run the VMs.

So: the cluster is also spread across DCs; no need to import anything.
Regards,

*Roberto*


Il giorno 01 mag 2016, alle ore 10:37, Arsène Gschwind ha
scritto:


Hi,

Is it possible to have a shared Storage domain between 2
Datacenter in oVirt?
We do replicate a FC Volume between 2 datacenter using FC SAN
storage technology and we have an oVirt cluster on each site
defined in separate DCs. The idea behind this is to setup a DR
site and also balance the load between each site.
What happens if I do import a storage domain already active in
one DC, will it break the Storage domain?

Thanks for any information..
Regards,
Arsène
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




Questo messaggio e' indirizzato esclusivamente al destinatario
indicato e potrebbe contenere informazioni confidenziali,
riservate o proprietarie. Qualora la presente venisse ricevuta
per errore, si prega di segnalarlo immediatamente al mittente,
cancellando l'originale e ogni sua copia e distruggendo eventuali
copie cartacee. Ogni altro uso e' strettamente proibito e
potrebbe essere fonte di violazione di legge.

This message is for the designated recipient only and may contain
privileged, proprietary, or otherwise private information. If you
have received it in error, please notify the sender immediately,
deleting the original and all copies and destroying any hard
copies. Any other use is strictly prohibited and may be unlawful.



___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CINLUG: Virtualization Management, The oVirt Way

2016-05-02 Thread Brian Proffitt
I don't know if they are recording it, but if they do, I will try to get it
online.

Peace,
BKP

On Sat, Apr 30, 2016 at 7:19 AM, Gianluca Cecchi 
wrote:

>
> Il 29/Apr/2016 21:13, "Brian Proffitt"  ha scritto:
> >
> > The world of virtualization seems to be getting passed by with all of
> the advances in containers and container management technology. But don't
> count virtual machines out just yet. Large-scale, centralized management
> for server and desktop virtual machines is available now, with the free and
> open source software platform oVirt. This KVM-based management tool
> provides production-ready VM management to organizations large and small,
> and is used by universities, businesses, and even major airports. Join Red
> Hat's Brian Proffitt on a tour of oVirt plus a fun look at how VM
> management and cloud computing *do* work together.
> >
> >
> > http://www.meetup.com/CINLUG/events/230746101/
> >
> >
>
> Interesting... is it possible to record the event?
>
> Gianluca
>



-- 
Brian Proffitt
Principal Community Analyst
Open Source and Standards
@TheTechScribe
574.383.9BKP
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Sahina Bose



On 05/02/2016 05:57 PM, Maor Lipchuk wrote:



On Mon, May 2, 2016 at 1:08 PM, Sahina Bose wrote:




On 05/02/2016 03:15 PM, Maor Lipchuk wrote:



On Mon, May 2, 2016 at 12:29 PM, Sahina Bose wrote:



On 05/01/2016 05:33 AM, Maor Lipchuk wrote:

Hi Sahina,

The disks with snapshots should be part of the VMs, once you
will register those VMs you should see those disks in the
disks sub tab.


Maor,

I was unable to import the VM, which prompted the question - I
assumed we had to register disks first. So maybe I need to
troubleshoot why I could not import VMs from the domain first.
It fails with an error "Image does not exist". Where does it
look for the volume IDs to pass to GetImageInfoVDSCommand - the
OVF disk?


In engine.log

2016-05-02 04:15:14,812 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c]
IrsBroker::getImageInfo::Failed getting image info
imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
domainName='sahinaslave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7',
error code: 'VolumeDoesNotExist', message: Volume does not exist:
(u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
2016-05-02 04:15:14,814 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c]
executeIrsBrokerCommand: getImageInfo on
'6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception -
assuming image doesn't exist: IRS
GenericException: IRSErrorException: VolumeDoesNotExist
2016-05-02 04:15:14,814 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c]
FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
2016-05-02 04:15:14,814 WARN
[org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
'ImportVmFromConfiguration' failed for user admin@internal.
Reasons:

VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST



jsonrpc.Executor/2::DEBUG::2016-05-02
13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'Volume.getInfo' in bridge with
{u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b',
u'volumeID': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',
u'storagedomainID': u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}

jsonrpc.Executor/2::DEBUG::2016-05-02
13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
jsonrpc.Executor/2::ERROR::2016-05-02
13:45:13,914::task::866::Storage.TaskManager.Task::(_setError) 
Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected
error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3162, in
getVolumeInfo
volUUID=volUUID).getInfo()
  File "/usr/share/vdsm/storage/sd.py", line 457, in
produceVolume
volUUID)
  File "/usr/share/vdsm/storage/glusterVolume.py", line 16,
in __init__
volUUID)
  File "/usr/share/vdsm/storage/fileVolume.py", line 58, in
__init__
volume.Volume.__init__(self, repoPath, sdUUID, imgUUID,
volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
self.validate()
  File "/usr/share/vdsm/storage/volume.py", line 194, in validate
self.validateVolumePath()
  File "/usr/share/vdsm/storage/fileVolume.py", line 540, in validateVolumePath
raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist:
(u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)

When I look at the tree output - there's no
6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.


├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
 
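The tree above shows the three-files-per-volume layout of a file-based image directory: each volume contributes a data file named by its UUID plus .lease and .meta files, and 6f4da17a-05a2-4d77-8091-d2fca3bbea1c contributes none of them. The following is a minimal sketch of that presence check (the helper is hypothetical; the only assumption is the naming pattern visible in the tree output):

```python
import tempfile
from pathlib import Path

def volumes_on_disk(image_dir):
    """Volume UUIDs whose data file exists in a file-based image dir.

    Sketch only: in a file storage domain each volume shows up as three
    files (data, .lease, .meta); the data file is named by the volume UUID,
    so files without an extension are the volumes actually present.
    """
    return sorted(f.name for f in Path(image_dir).iterdir()
                  if f.is_file() and "." not in f.name)

# Toy reproduction of the image directory from the tree output above.
d = tempfile.mkdtemp()
for vol in ("34e46104-8fad-4510-a5bf-0730b97a6659",
            "766a15b9-57db-417d-bfa0-beadbbb84ad2"):
    for suffix in ("", ".lease", ".meta"):
        (Path(d) / (vol + suffix)).touch()

present = volumes_on_disk(d)
missing = "6f4da17a-05a2-4d77-8091-d2fca3bbea1c"
print(missing in present)  # False - the same state that raises VolumeDoesNotExist
```

Running this against the real image directory under the storage domain mount would confirm which volume IDs from the OVF actually exist on disk.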

Re: [ovirt-users] node unresponsive after reboot

2016-05-02 Thread Cam Mac
Hi Piotr,

Attached are the vdsm log, the engine log and the supervdsm log. I've
attached them as a .tgz.

I noticed it is complaining about configuring an interface in one of the
node logs. It shows as up in the engine web GUI though (and on the command
line).
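A few node-side checks can help separate an SELinux regression from a plain
vdsm or connectivity problem (a sketch, run as root on the node; the `vdsmd`
service name and TCP port 54321 are the oVirt defaults):

getenforce                      # SELinux mode actually in effect after reboot
systemctl status vdsmd          # is the vdsm daemon running at all?
ss -tlnp | grep 54321           # the engine connects to vdsm on TCP 54321
ausearch -m avc -ts recent      # any fresh SELinux denials (AVCs)?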

Thanks for the help.

-Cam

On Mon, May 2, 2016 at 1:38 PM, Piotr Kliczewski  wrote:

> Cam,
>
> Please provide engine and failing vdsm logs.
>
> Thanks,
> Piotr
>
> On Sun, May 1, 2016 at 4:05 PM, Cam Mac  wrote:
> > Hi,
> >
> > I have a two node + engine ovirt setup, and I was having problems
> > doing a live migration between nodes. I looked in the vdsm logs and
> > noticed selinux errors, so I checked the selinux config, and both the
> > ovirt-engine host and one of the nodes had selinux disabled. So I
> > thought I would enable it on these two hosts, as it is officially
> > supported anyway. I started with the node, and put it into maintenance
> > mode, which interestingly, migrated the VMs off to the other node
> > without issue. After modifying the selinux config, I then rebooted
> > that node, which came back up. I then tried to activate the node but
> > it fails and marks it as unresponsive.
> >
> > --8<--
> >
> > 2016-04-28 16:34:31,326 INFO
> > [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp
> > Reactor) [29acb18b] Connecting to
> > kvm-ldn-02/172.16.23.12
> > 2016-04-28 16:34:31,327 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> > (DefaultQuartzScheduler_Worker-32) [ac322cb] Command
> > 'GetCapabilitiesVDSCommand(HostName = kvm-ldn-02,
> > VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
> > hostId='b12c0b80-d64d-42fd-8a55-94f92b9ca3aa',
> > vds='Host[kvm-ldn-02,b12c0b80-d64d-42fd-8a55-94f92b9ca3aa]'})'
> > execution failed:
> > org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection
> > failed
> > 2016-04-28 16:34:31,327 ERROR
> > [org.ovirt.engine.core.vdsbroker.HostMonitoring]
> > (DefaultQuartzScheduler_Worker-32) [ac322cb] Failure to refresh Vds
> > runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:
> > Connection failed
> > 2016-04-28 16:34:31,327 ERROR
> > [org.ovirt.engine.core.vdsbroker.HostMonitoring]
> > (DefaultQuartzScheduler_Worker-32) [ac322cb] Exception:
> > org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> > org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection
> > failed
> > at
> >
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:157)
> > [vdsbroker.jar:]
> > at
> >
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:120)
> > [vdsbroker.jar:]
> > at
> >
> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
> > [vdsbroker.jar:]
> > at
> > org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
> > [dal.jar:]
> > at
> >
> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467)
> > [vdsbroker.jar:]
> > at
> >
> org.ovirt.engine.core.vdsbroker.VdsManager.refreshCapabilities(VdsManager.java:652)
> > [vdsbroker.jar:]
> > at
> >
> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:119)
> > [vdsbroker.jar:]
> > at
> >
> org.ovirt.engine.core.vdsbroker.HostMonitoring.refresh(HostMonitoring.java:84)
> > [vdsbroker.jar:]
> > at
> > org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:227)
> > [vdsbroker.jar:]
> > at sun.reflect.GeneratedMethodAccessor120.invoke(Unknown
> > Source) [:1.8.0_71]
> > at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > [rt.jar:1.8.0_71]
> > at java.lang.reflect.Method.invoke(Method.java:497)
> > [rt.jar:1.8.0_71]
> > at
> >
> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81)
> > [scheduler.jar:]
> > at
> > org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52)
> > [scheduler.jar:]
> > at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> > [quartz.jar:]
> > at
> >
> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
> > [quartz.jar:]
> > Caused by: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:
> > Connection failed
> > at
> >
> org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.connect(ReactorClient.java:157)
> > [vdsm-jsonrpc-java-client.jar:]
> > at
> >
> org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.getClient(JsonRpcClient.java:114)
> > [vdsm-jsonrpc-java-client.jar:]
> > at
> > org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.call(JsonRpcClient.java:73)
> > [vdsm-jsonrpc-java-client.jar:]
> > at
> >
> org.ovirt.engine.core.vdsbroker.jsonrpc.FutureMap.<init>(FutureMap.java:68)
> > [vdsbroker.jar:]
> >  

Re: [ovirt-users] Error deleting template

2016-05-02 Thread Idan Shaby
Hi Giulio,

Indeed it seems that there's an inconsistency between the storage and the
database.
Somehow the template's disk was removed from the storage. Your logs don't
tell how and when.

This is what needs to be done in order to get rid of the stale entries -
run in the database:

Delete from vm_device where device_id =
'6949d47a-0d38-468e-ad4e-670372841174';
Delete from images where image_group_id =
'6949d47a-0d38-468e-ad4e-670372841174';
Delete from base_disks where disk_id =
'6949d47a-0d38-468e-ad4e-670372841174';
Delete from vm_static where vm_guid =
'6ec76d42-98a6-4094-a76c-af1b639c5b30';
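Since these deletes touch four tables, it is safer to run them inside a single transaction so the affected row counts can be inspected before committing. The sketch below illustrates that pattern with an in-memory SQLite stand-in (the real engine database is PostgreSQL and the real tables have many more columns; the toy schema here is hypothetical):

```python
import sqlite3

# Toy stand-in for the four engine tables touched by the cleanup.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for tbl, col in [("vm_device", "device_id"), ("images", "image_group_id"),
                 ("base_disks", "disk_id"), ("vm_static", "vm_guid")]:
    cur.execute(f"CREATE TABLE {tbl} ({col} TEXT)")

disk_id = "6949d47a-0d38-468e-ad4e-670372841174"
vm_guid = "6ec76d42-98a6-4094-a76c-af1b639c5b30"
cur.execute("INSERT INTO vm_device VALUES (?)", (disk_id,))
cur.execute("INSERT INTO images VALUES (?)", (disk_id,))
cur.execute("INSERT INTO base_disks VALUES (?)", (disk_id,))
cur.execute("INSERT INTO vm_static VALUES (?)", (vm_guid,))

# The cleanup itself, wrapped in one transaction: if any statement
# fails, nothing is committed and the half-done state is rolled back.
with conn:
    cur.execute("DELETE FROM vm_device WHERE device_id = ?", (disk_id,))
    cur.execute("DELETE FROM images WHERE image_group_id = ?", (disk_id,))
    cur.execute("DELETE FROM base_disks WHERE disk_id = ?", (disk_id,))
    cur.execute("DELETE FROM vm_static WHERE vm_guid = ?", (vm_guid,))

remaining = sum(cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in ("vm_device", "images", "base_disks", "vm_static"))
print(remaining)  # 0 once all stale rows are gone
```

On the real engine database, the equivalent would be wrapping the four DELETE statements between BEGIN and COMMIT in psql and checking the reported row counts before committing.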


If it happens again, please file a BZ with the full logs of engine and
vdsm, so we can investigate this issue.
Feel free to ask anything if you have further questions.

Regards
Idan

On Mon, May 2, 2016 at 2:36 PM, Giulio Casella  wrote:

> Hi Idan,
> you can find attached some snippet of my logs (vdsm.log from SPM host and
> engine.log from manager).
> Anyway I think logs are clear: there's no template on disk, only on engine
> database.
>
> My setup is composed of 3 datacenters (with clusters of 4, 4, and 6 hosts
> respectively). I have about a hundred templates, and about 1500 virtual
> machines.
>
> The step to reproduce is quite simple: in admin portal right click on the
> "damaged" template, remove. Other templates deletion work fine.
>
> Thanks,
> Giulio
>
> Il 01/05/2016 08:37, Idan Shaby ha scritto:
>
>> Hi,
>>
>> Can you please attach the engine and vdsm logs?
>> Also, can you describe your setup and the steps that reproduced this
>> error?
>>
>>
>> Thanks,
>> Idan
>>
On Wed, Apr 27, 2016 at 1:47 PM, Giulio Casella wrote:
>>
>> Hi all,
>> I have a problem deleting a template from admin portal.
>> In file /var/log/vdsm/vdsm.log (on SPM hypervisor) I got:
>>
>> jsonrpc.Executor/4::ERROR::2016-04-27
>> 10:19:57,122::hsm::1518::Storage.HSM::(deleteImage) Empty or not
>> found image  in SD  [...]
>>
>> Looking in the (data) storage domain the disk with that UUID doesn't
>> exist.
>> It seems I reached an inconsistent state between engine database and
>> images on disk.
>>
>> Is there a (safe) way to rebuild a consistent situation? Maybe
>> deleting entries from database?
>>
>> My setup is based on:
>> manager RHEV 3.5.8-0.1.el6ev
>> hypervisors: RHEV Hypervisor - 7.2 - 20160328.0.el7ev
>>
>>
>> Thanx in advance,
>> Giulio
>>
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
> --
> Giulio Casella  giulio at di.unimi.it
> System and network manager
> Computer Science Dept. - University of Milano
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Errors while trying to join an external LDAP provider

2016-05-02 Thread Ondra Machacek

On 05/02/2016 03:02 PM, Alexis HAUSER wrote:




I am unsure I understand. What is missing in the interactive setup to
properly set up TLS?
You just enter the CA certificate path/URL/system and a Java keystore file is
created for you by the tool.



I'll try to generate a new file with the interactive setup and tell you if the 
result is different.


So, here is my problem when using the interactive setup:

[ INFO  ] Connecting to LDAP using 'ldaps://:636'
[WARNING] Cannot connect using 'ldaps://:636': {'info': "TLS error -8172:Peer's 
certificate issuer has been marked as not trusted by the user.", 'desc': "Can't contact 
LDAP server"}
[ ERROR ] Cannot connect using any of available options



Are you sure you've specified the correct CA?

Can you try running this command:
 LDAPTLS_CACERT=your_ldap_ca_cert.crt ldapsearch -H ldaps://@HOST@ -x 
-D '@USERDN@' -w '@USERPW@' -b '@BASEDN@'


If it fails, then most probably you have an incorrect CA certificate.
If it succeeds, please open a bug in Bugzilla with the logs of the setup tool if 
possible.
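Another independent check (a sketch; openssl is assumed to be available, and
the host and file names below are placeholders): fetch the server's
certificate and verify it directly against the CA file you gave the tool:

openssl s_client -connect ldap.example.com:636 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > server.crt
openssl verify -CAfile your_ldap_ca_cert.crt server.crt

"server.crt: OK" means the CA matches; an "unable to get local issuer
certificate" error reproduces the same -8172 trust failure seen above
(note that a missing intermediate certificate in the CA file also
triggers it).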

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Errors while trying to join an external LDAP provider

2016-05-02 Thread Alexis HAUSER


>>I am unsure I understand. What is missing in the interactive setup to 
>>properly set up TLS?
>>You just enter the CA certificate path/URL/system and a Java keystore file is 
>>created for you by the tool.

>I'll try to generate a new file with the interactive setup and tell you if the 
>result is different.

So, here is my problem when using the interactive setup: 

[ INFO  ] Connecting to LDAP using 'ldaps://:636'
[WARNING] Cannot connect using 'ldaps://:636': {'info': "TLS error 
-8172:Peer's certificate issuer has been marked as not trusted by the user.", 
'desc': "Can't contact LDAP server"}
[ ERROR ] Cannot connect using any of available options

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] node unresponsive after reboot

2016-05-02 Thread Piotr Kliczewski
Cam,

Please provide engine and failing vdsm logs.

Thanks,
Piotr

On Sun, May 1, 2016 at 4:05 PM, Cam Mac  wrote:
> Hi,
>
> I have a two node + engine ovirt setup, and I was having problems
> doing a live migration between nodes. I looked in the vdsm logs and
> noticed selinux errors, so I checked the selinux config, and both the
> ovirt-engine host and one of the nodes had selinux disabled. So I
> thought I would enable it on these two hosts, as it is officially
> supported anyway. I started with the node, and put it into maintenance
> mode, which interestingly, migrated the VMs off to the other node
> without issue. After modifying the selinux config, I then rebooted
> that node, which came back up. I then tried to activate the node but
> it fails and marks it as unresponsive.
>
> --8<--
>
> 2016-04-28 16:34:31,326 INFO
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp
> Reactor) [29acb18b] Connecting to
> kvm-ldn-02/172.16.23.12
> 2016-04-28 16:34:31,327 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> (DefaultQuartzScheduler_Worker-32) [ac322cb] Command
> 'GetCapabilitiesVDSCommand(HostName = kvm-ldn-02,
> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
> hostId='b12c0b80-d64d-42fd-8a55-94f92b9ca3aa',
> vds='Host[kvm-ldn-02,b12c0b80-d64d-42fd-8a55-94f92b9ca3aa]'})'
> execution failed:
> org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection
> failed
> 2016-04-28 16:34:31,327 ERROR
> [org.ovirt.engine.core.vdsbroker.HostMonitoring]
> (DefaultQuartzScheduler_Worker-32) [ac322cb] Failure to refresh Vds
> runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:
> Connection failed
> 2016-04-28 16:34:31,327 ERROR
> [org.ovirt.engine.core.vdsbroker.HostMonitoring]
> (DefaultQuartzScheduler_Worker-32) [ac322cb] Exception:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection
> failed
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:157)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:120)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
> [dal.jar:]
> at
> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.VdsManager.refreshCapabilities(VdsManager.java:652)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:119)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.HostMonitoring.refresh(HostMonitoring.java:84)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:227)
> [vdsbroker.jar:]
> at sun.reflect.GeneratedMethodAccessor120.invoke(Unknown
> Source) [:1.8.0_71]
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_71]
> at java.lang.reflect.Method.invoke(Method.java:497)
> [rt.jar:1.8.0_71]
> at
> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81)
> [scheduler.jar:]
> at
> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52)
> [scheduler.jar:]
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> [quartz.jar:]
> at
> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
> [quartz.jar:]
> Caused by: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:
> Connection failed
> at
> org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.connect(ReactorClient.java:157)
> [vdsm-jsonrpc-java-client.jar:]
> at
> org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.getClient(JsonRpcClient.java:114)
> [vdsm-jsonrpc-java-client.jar:]
> at
> org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.call(JsonRpcClient.java:73)
> [vdsm-jsonrpc-java-client.jar:]
> at
> org.ovirt.engine.core.vdsbroker.jsonrpc.FutureMap.<init>(FutureMap.java:68)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.getCapabilities(JsonRpcVdsServer.java:268)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand.executeVdsBrokerCommand(GetCapabilitiesVDSCommand.java:15)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110)
> [vdsbroker.jar:]
> ... 14 more
>
> --8<--
>
> Any ideas?
>
> Thanks for any help,
>
> Cam
>
> ___
> Users mailing list
> 

Re: [ovirt-users] Errors while trying to join an external LDAP provider

2016-05-02 Thread Alexis HAUSER


>> Yes this is actually the tool I used first, then I modified manually as on 
>> the documentation.
>>
>> The problem in this approach is the fact you need a .profile file to be able 
>> to set up a TLS connection between the LDAP and the engine. But this file 
>> is generated after the interactive setup. But the interactive setup doesn't 
>> allow you to set up things properly as the TLS isn't set up...

>I am unsure I understand. What is missing in the interactive setup to 
>properly set up TLS?
>You just enter the CA certificate path/URL/system and a Java keystore file is 
>created for you by the tool.

Interesting, so it's only an error in the Red Hat Documentation.

If you check on the administrative guide, the prerequisite for using the 
interactive tool is to have a TLS connection set up between LDAP and the engine 
:  
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Configuring_an_External_LDAP_Provider.html

But when you follow the link to set up this TLS connection, it makes you create 
the Java keystore and modify the "profile1.properties" file manually... which 
doesn't exist because the interactive setup hasn't been run yet...

I'll report this on their bugzilla.

I'll try to generate a new file with the interactive setup and tell you if the 
result is different.

>>
>>So I had to set up things with "insecure" mode and then edit it manually...
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Shared storage between DC

2016-05-02 Thread Idan Shaby
Hi Arsène,

The only storage domain that can be shared between datacenters is the ISO
domain, but that's not what you're looking for. You can't share a data domain
between two datacenters.

As for importing a storage domain which is active on one datacenter to
another datacenter, there are two cases:
- If the datacenters are in the same oVirt setup, that won't be possible,
and the operation will not be performed.
- If the datacenters are in two different oVirt setups, the import to the
second datacenter will succeed, but you will experience issues in the first
datacenter.
The reason for that is that the host in the second datacenter overrides the
storage domain metadata when importing it.

Therefore, you cannot import a storage domain to a second datacenter while
it is active on the first one, anyway.


Regards,
Idan

On Mon, May 2, 2016 at 10:44 AM, Arsène Gschwind 
wrote:

> Hi Roberto,
>
> Thanks for your information, but when I'm writing about DC I mean a DC
> defined in oVirt. So far I could see that when you define a storage domain
> in one oVirt DC it will not be available in another DC, and I don't know
> what would happen if I imported it on the second DC.
> In your setup, did you just define 1 DC and multiple clusters?
>
> Regards,
> Arsène
>
>
> On 05/01/2016 01:37 PM, NUNIN Roberto wrote:
>
> Hi
> I have in production a scenario similar to what you've
> described.
> The "enabling factor" is represented by a "storage virtualization" set of
> appliances that maintains a mirrored logical volume over FC physical volumes
> across two distinct datacenters, while giving simultaneous rw access to the
> cluster hypervisors, split between the datacenters, that run the VMs.
>
> So: the cluster is also spread across DCs; no need to import anything.
> Regards,
>
> *Roberto*
>
>
>
> Il giorno 01 mag 2016, alle ore 10:37, Arsène Gschwind <
> arsene.gschw...@unibas.ch> ha scritto:
>
> Hi,
>
> Is it possible to have a shared Storage domain between 2 Datacenter in
> oVirt?
> We do replicate a FC Volume between 2 datacenter using FC SAN storage
> technology and we have an oVirt cluster on each site defined in separate
> DCs. The idea behind this is to setup a DR site and also balance the load
> between each site.
> What happens if I do import a storage domain already active in one DC,
> will it break the Storage domain?
>
> Thanks for any information..
> Regards,
> Arsène
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Maor Lipchuk
On Mon, May 2, 2016 at 1:08 PM, Sahina Bose  wrote:

>
>
> On 05/02/2016 03:15 PM, Maor Lipchuk wrote:
>
>
>
> On Mon, May 2, 2016 at 12:29 PM, Sahina Bose  wrote:
>
>>
>>
>> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>>
>> Hi Sahina,
>>
>> The disks with snapshots should be part of the VMs; once you register
>> those VMs you should see those disks in the Disks sub tab.
>>
>>
>> Maor,
>>
>> I was unable to import the VM, which prompted my question - I assumed we
>> had to register the disks first. So maybe I need to troubleshoot why I
>> could not import VMs from the domain first.
>> It fails with an error "Image does not exist". Where does it look for
>> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>>
>
>> In engine.log
>>
>> 2016-05-02 04:15:14,812 ERROR
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
>> sBroker::getImageInfo::Failed getting image info
>> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
>> domainName='sahinasl
>> ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code:
>> 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0
>> 5a2-4d77-8091-d2fca3bbea1c',)
>> 2016-05-02 04:15:14,814 WARN
>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>> executeIrsBrokerCommand: getImageInfo on
>> '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image
>> doesn't exist: IRS
>> GenericException: IRSErrorException: VolumeDoesNotExist
>> 2016-05-02 04:15:14,814 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>> FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
>> 2016-05-02 04:15:14,814 WARN
>> [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
>> 'ImportVmFromConfiguration' failed for user admin@internal. Reasons:
>> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>>
>>
>>
>> jsonrpc.Executor/2::DEBUG::2016-05-02
>> 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
>> 'Volume.getInfo' in
>> bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
>> u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeI
>> D': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID':
>> u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>>
>> jsonrpc.Executor/2::DEBUG::2016-05-02
>> 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath)
>> validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
>> jsonrpc.Executor/2::ERROR::2016-05-02
>> 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
>> Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>> return fn(*args, **kargs)
>>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>> res = f(*args, **kwargs)
>>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
>> volUUID=volUUID).getInfo()
>>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
>> volUUID)
>>   File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
>> volUUID)
>>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
>> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
>> self.validate()
>>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
>> self.validateVolumePath()
>>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
>> validateVolumePath
>> raise se.VolumeDoesNotExist(self.volUUID)
>> VolumeDoesNotExist: Volume does not exist:
>> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>>
>> When I look at the tree output - there's no
>> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.
>>
>>
>> ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
>> │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>>
>
>
> Usually the "image does not exist" message appears when the VM's disk
> is managed in a different storage domain which has not been imported yet.
>
> A few questions:
> 1. Were there any other storage domains which are not present in the setup?
>
>
> In the original RHEV instance - there were 3 storage domains
> i) Hosted engine storage domain: engine
> ii) Master data 

Re: [ovirt-users] iSCSI Data Domain Down

2016-05-02 Thread Arman Khalatyan
Nice!
To automate that you can put it into /etc/rc.local:
chmod +x /etc/rc.local
Then put the dmsetup remove_all and targetcli restart commands there.
Your service will come back after a power failure.
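A sketch of what that rc.local automation might look like (the command and service names are assumptions for a CentOS 7 targetcli host, not a tested recipe; targetcli's saved configuration is restored by target.service there):

```sh
#!/bin/sh
# /etc/rc.local sketch: after an unclean boot, drop the device-mapper maps
# that the boot-time LVM scan auto-created over the exported disk, then
# re-export the saved targetcli configuration.
dmsetup remove_all
systemctl restart target
exit 0
```

As noted above, /etc/rc.local also needs to be executable (chmod +x) for this to run at boot.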
On 02.05.2016 12:39 a.m., "Clint Boggio" wrote:

> Thank you so much Arman. With use of that command, I was able to restore
> service.
>
> I really appreciate the help
>
> On May 1, 2016, at 2:58 PM, Arman Khalatyan  wrote:
>
> Hi, before to start target cli you should remove all lvm auto-imported
> volumes:
> dmsetup remove_all
> Then restart your targetcli.
On 01.05.2016 1:51 p.m., "Clint Boggio" wrote:
>
>> Greetings oVirt Family;
>>
>> Due to a catastrophic power failure, my datacenter lost power. I am using a
>> CentOS7 server to provide iSCSI services to my oVirt platform.
>>
>> When the power came back on and the iSCSI server booted back up, the
>> filters in lvm.conf were faulty and LVM assumed control over the LVs that
>> oVirt uses as the disks for the VMs. This tanked target.service because it
>> claims "device already in use", and my datacenter is down.
>>
>> I've tried several filter combinations in lvm.conf to no avail, and in my
>> search I've found no documentation on how to make LVM "forget" about the
>> volumes that it had assumed and release them.
>>
>> Do any of you know of a procedure to make lvm forget about and release
>> the volumes on /dev/sda ?
>>
>> OVirt 3.6.5 on CentOS 7
>> 4 Hypervisor nodes CentOS7
>> 1 Dedicated engine CentOS7
>> 1 iscsi SAN CentOS 7 exporting 10TB block device from a Dell Perc RAID
>> controller /dev/sda with targetcli.
>> 1 NFS server for ISO and Export Domains 5TB
>>
>> I'm out of ideas and any help would be greatly appreciated.
>>
>> I'm currently using dd to recover the VM disk drives over to the NFS
>> server in case this cannot be recovered.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
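For reference, a hedged sketch of the kind of lvm.conf filter this thread is about, assuming (as stated) that the exported block device is /dev/sda; global_filter rejects it so the host's LVM never activates the guest VGs living on it:

```conf
# /etc/lvm/lvm.conf fragment (sketch - verify device paths for your host)
devices {
    # reject anything on /dev/sda (the exported iSCSI backing disk),
    # accept every other device
    global_filter = [ "r|^/dev/sda.*|", "a|.*|" ]
}
```

After changing the filter, deactivating the already-claimed VGs (vgchange -an) and rebuilding the initramfs (dracut -f) may also be needed so the filter takes effect at early boot; treat both steps as assumptions to verify.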


[ovirt-users] cloned vm template id

2016-05-02 Thread Dobó László

Hello,

How can I get back the original template id after a VM is cloned with the
Python SDK?

print api.vms.get(id=vm_id).template.id
result: ---- (blank template)

However, under the VM's General tab in the web UI, the template name shows
correctly.



regards,
enax
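One hedged way to inspect what the engine actually reports (the REST path and the all-zeros UUID of the Blank template are assumptions based on the oVirt 3.x API; the canned XML below stands in for a live `curl -sk -u admin@internal:PASS https://ENGINE/api/vms/VM_ID` response):

```shell
# Extract the template id a VM reports from its REST representation.
# The XML here is a stand-in matching the "blank template" result above.
xml='<vm><template href="/api/templates/0000" id="00000000-0000-0000-0000-000000000000"/></vm>'
printf '%s\n' "$xml" | grep -o 'template[^>]*id="[^"]*"'
```

On a real engine, the same grep applied to the curl output shows whether the VM record still points at the original template or at the Blank one.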





Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Sahina Bose



On 05/02/2016 03:15 PM, Maor Lipchuk wrote:



On Mon, May 2, 2016 at 12:29 PM, Sahina Bose wrote:




On 05/01/2016 05:33 AM, Maor Lipchuk wrote:

Hi Sahina,

The disks with snapshots should be part of the VMs; once you register
those VMs you should see those disks in the Disks sub tab.


Maor,

I was unable to import the VM, which prompted my question - I assumed
we had to register the disks first. So maybe I need to troubleshoot why
I could not import VMs from the domain first.
It fails with an error "Image does not exist". Where does it look
for volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?


In engine.log

2016-05-02 04:15:14,812 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
sBroker::getImageInfo::Failed getting image info
imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
domainName='sahinasl
ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code:
'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0
5a2-4d77-8091-d2fca3bbea1c',)
2016-05-02 04:15:14,814 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c]
executeIrsBrokerCommand: getImageInfo on
'6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception -
assuming image doesn't exist: IRS
GenericException: IRSErrorException: VolumeDoesNotExist
2016-05-02 04:15:14,814 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c]
FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
2016-05-02 04:15:14,814 WARN
[org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
(ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
'ImportVmFromConfiguration' failed for user admin@internal.
Reasons:
VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST



jsonrpc.Executor/2::DEBUG::2016-05-02
13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'Volume.getInfo' in
bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeI
D': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID':
u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}

jsonrpc.Executor/2::DEBUG::2016-05-02
13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath) validate
path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
jsonrpc.Executor/2::ERROR::2016-05-02
13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
volUUID=volUUID).getInfo()
  File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
volUUID)
  File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in
__init__
volUUID)
  File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
self.validate()
  File "/usr/share/vdsm/storage/volume.py", line 194, in validate
self.validateVolumePath()
  File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
validateVolumePath
raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist:
(u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)

When I look at the tree output - there's no
6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.


├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
│   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
│   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
│   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta



Usually the "image does not exists" message is prompted once the VM's 
disk is managed in a different storage domain which were not imported yet.


Few questions:
1. Were there any other Storage Domain which are not present in the setup?


In the original RHEV instance - there were 3 storage domains
i) Hosted engine storage domain: engine
ii) Master data domain: vmstore

Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Maor Lipchuk
On Mon, May 2, 2016 at 12:29 PM, Sahina Bose  wrote:

>
>
> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>
> Hi Sahina,
>
> The disks with snapshots should be part of the VMs; once you register
> those VMs you should see those disks in the Disks sub tab.
>
>
> Maor,
>
> I was unable to import the VM, which prompted my question - I assumed we
> had to register the disks first. So maybe I need to troubleshoot why I
> could not import VMs from the domain first.
> It fails with an error "Image does not exist". Where does it look for
> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>

> In engine.log
>
> 2016-05-02 04:15:14,812 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
> sBroker::getImageInfo::Failed getting image info
> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
> domainName='sahinasl
> ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code:
> 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0
> 5a2-4d77-8091-d2fca3bbea1c',)
> 2016-05-02 04:15:14,814 WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c]
> executeIrsBrokerCommand: getImageInfo on
> '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image
> doesn't exist: IRS
> GenericException: IRSErrorException: VolumeDoesNotExist
> 2016-05-02 04:15:14,814 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c]
> FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
> 2016-05-02 04:15:14,814 WARN
> [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
> 'ImportVmFromConfiguration' failed for user admin@internal. Reasons:
> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>
>
>
> jsonrpc.Executor/2::DEBUG::2016-05-02
> 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'Volume.getInfo' in
> bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
> u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeI
> D': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID':
> u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>
> jsonrpc.Executor/2::DEBUG::2016-05-02
> 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath)
> validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
> jsonrpc.Executor/2::ERROR::2016-05-02
> 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
> Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
> volUUID=volUUID).getInfo()
>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
> volUUID)
>   File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
> volUUID)
>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
> self.validate()
>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
> self.validateVolumePath()
>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
> validateVolumePath
> raise se.VolumeDoesNotExist(self.volUUID)
> VolumeDoesNotExist: Volume does not exist:
> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>
> When I look at the tree output - there's no
> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.
>
>
> ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
> │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>


Usually the "image does not exists" message is prompted once the VM's disk
is managed in a different storage domain which were not imported yet.

Few questions:
1. Were there any other Storage Domain which are not present in the setup?
2. Can you look for the image id 6f4da17a-05a2-4d77-8091-d2fca3bbea1c on
your storage server (search all the rest of the storage domains)?
3. Were there any operations done on the VM before the recovery, such as
removing a disk, moving a disk, or creating a new disk?

Regards,
Maor
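A sketch for the "look for the image id on your storage server" step above: search every mounted storage domain for any file carrying the missing volume id. On a real host the mount root would typically be /rhev/data-center/mnt; it is parameterized here so the sketch stays generic:

```shell
# Look for the missing volume (data file, .lease, .meta) under all
# storage-domain mounts; adjust mount_root for your environment.
img=6f4da17a-05a2-4d77-8091-d2fca3bbea1c
mount_root="${MOUNT_ROOT:-/rhev/data-center/mnt}"
find "$mount_root" -name "${img}*" 2>/dev/null || true
```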


>
> Regarding floating disks (without snapshots), you can register them
> through 

Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Sahina Bose



On 05/01/2016 05:33 AM, Maor Lipchuk wrote:

Hi Sahina,

The disks with snapshots should be part of the VMs; once you register
those VMs you should see those disks in the Disks sub tab.


Maor,

I was unable to import the VM, which prompted my question - I assumed we
had to register the disks first. So maybe I need to troubleshoot why I
could not import VMs from the domain first.
It fails with an error "Image does not exist". Where does it look for 
volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?


In engine.log

2016-05-02 04:15:14,812 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] 
(ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
sBroker::getImageInfo::Failed getting image info 
imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on 
domainName='sahinasl
ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code: 
'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0

5a2-4d77-8091-d2fca3bbea1c',)
2016-05-02 04:15:14,814 WARN 
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] 
(ajp-/127.0.0.1:8702-1) [32f0b27c]
executeIrsBrokerCommand: getImageInfo on 
'6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming 
image doesn't exist: IRS

GenericException: IRSErrorException: VolumeDoesNotExist
2016-05-02 04:15:14,814 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] 
(ajp-/127.0.0.1:8702-1) [32f0b27c]

FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
2016-05-02 04:15:14,814 WARN 
[org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand] 
(ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action 
'ImportVmFromConfiguration' failed for user admin@internal. Reasons: 
VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST




jsonrpc.Executor/2::DEBUG::2016-05-02 
13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) 
Calling 'Volume.getInfo' in
bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2', 
u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeI
D': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID': 
u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}


jsonrpc.Executor/2::DEBUG::2016-05-02 
13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath) 
validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
jsonrpc.Executor/2::ERROR::2016-05-02 
13:45:13,914::task::866::Storage.TaskManager.Task::(_setError) 
Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
volUUID=volUUID).getInfo()
  File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
volUUID)
  File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
volUUID)
  File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
self.validate()
  File "/usr/share/vdsm/storage/volume.py", line 194, in validate
self.validateVolumePath()
  File "/usr/share/vdsm/storage/fileVolume.py", line 540, in 
validateVolumePath

raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: 
(u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)


When I look at the tree output - there's no 
6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.


├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
│   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
│   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
│   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
│   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
│   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta


Regarding floating disks (without snapshots), you can register them 
through REST.
If you are working on the master branch there should be a sub tab 
dedicated for those also.


Regards,
Maor

On Tue, Apr 26, 2016 at 1:44 PM, Sahina Bose wrote:


Hi all,

I have a gluster volume used as data storage domain which is
replicated to a slave gluster volume (say, slavevol) using
gluster's geo-replication feature.

Now, in a new oVirt instance, I use the import storage domain to
import the slave gluster volume. The "VM Import" tab correctly
lists the VMs that were present in my original gluster volume.
However the "Disks" tab is empty.

GET

https://new-ovitt/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Simone Tiraboschi
On Mon, May 2, 2016 at 11:06 AM, Yedidyah Bar David  wrote:
> On Mon, May 2, 2016 at 11:48 AM, Gianluca Cecchi
>  wrote:
>> On Mon, May 2, 2016 at 9:58 AM, Simone Tiraboschi wrote:
>>>
>>>
>>>
>>> hosted-engine-setup creates a fresh VM and injects a cloud-init script
>>> to configure it and run engine-setup there to configure the engine
>>> as needed.
>>> Since engine-setup is running on the engine VM, triggered by
>>> cloud-init, hosted-engine-setup has no way to really control its
>>> process status, so we simply gather its output with a timeout of 10
>>> minutes between each single output line.
>>> If nothing happens within 10 minutes (the value is easily
>>> customizable), hosted-engine-setup thinks that engine-setup is stuck.
>>
>>
>>
>> How can one customize the pre-set timeout?

To set 20 minutes you can pass this
OVEHOSTED_ENGINE/engineSetupTimeout=int:1200
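A sketch of how that might be applied when redeploying from a generated answer file (the key name is taken from the line above; the answer-file path is whatever hosted-engine-setup generated for you):

```shell
# Append a 20-minute engine-setup timeout to an answer file, check it took,
# then (on a real host) redeploy with it.
answers=answers.conf   # e.g. /var/lib/ovirt-hosted-engine-setup/answers/answers-*.conf
printf '%s\n' 'OVEHOSTED_ENGINE/engineSetupTimeout=int:1200' >> "$answers"
grep 'engineSetupTimeout' "$answers"
# hosted-engine --deploy --config-append="$answers"
```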


>> Could it be better to ask the user at the end of the timeout if he/she wants
>> to wait again, instead of failing directly?
>
> Perhaps, can you please open a bz?

+1

>>> So the issue we have to understand is why this simple command took
>>> more than 10 minutes in your env:
>>> 2016-04-30 17:56:57 DEBUG
>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
>>> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
>>> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
>>> 'password-reset', 'admin', '--password=env:pass', '--force',
>>> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
>>> cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
>>> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
>>> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
>>> 'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
>>> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
>>>
>>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
>>> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
>>> 'OTOPI_EXECDIR': '/'}
>>
>>
>>
>>
>> It seemed quite strange to me too (see below further info on this)
>>
>>>
>>> Can you please check the entropy value on your host?
>>>  cat /proc/sys/kernel/random/entropy_avail
>>>
>>
>> I have not at hand now the server. I'll check soon and report
>> Do you mean entropy of the physical server that will operate as hypervisor?

On the hypervisor

> That's a good question. Simone - do you know if we start the guest with
> virtio-rng?

AFAIK we are not.
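A quick diagnostic sketch for both questions above (the HostedEngine domain name is an assumption, and virsh may need libvirt credentials on an oVirt host):

```shell
# Print available entropy on the hypervisor; values in the low hundreds or
# below can stall tools that read /dev/random, such as key generation.
cat /proc/sys/kernel/random/entropy_avail
# If libvirt is available, check whether the engine VM has a virtio-rng device.
if command -v virsh >/dev/null 2>&1; then
    virsh dumpxml HostedEngine 2>/dev/null | grep -i rng || echo "no virtio-rng device"
fi
```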

> This is another case of [1], perhaps we should reopen it.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1319827
>
>>
>>
>>>
>>> > As a last question how to clean up things in case I have to start from
>>> > scratch.
>>>
>>> I'd recommend to redeploy from scratch instead of trying fixing it
>>> but, before that, we need to understand the root issue.
>>>
>>
>> So, trying to restart the setup with the generated answer file I got:
>> 1) if the VM was still powered on, an error about this condition
>> 2) if the VM was powered down, an error about the storage domain already in
>> place and restart not supported in this condition.
>>
>> I was able to continue with these steps:
>>
>> a) remove what is inside the partially set up self-hosted-engine storage domain
>> rm -rf /SHE_DOMAIN/*
>> cd SHE_DOMAIN
>> mklost+found
>>
>> b) reboot the hypervisor
>>
>> c) stop vdsmd
>>
>> d) start the setup again with the answer file
>> It seems all went well and this time strangely the step that took more than
>> 10 minutes before lasted less than 2 seconds
>>
>> I was then able to deploy storage and iso domains without problems and self
>> hosted engine domain correctly detected and imported too.
>> Created two CentOS VMs without problems (6.7 and 7.2).
>>
>> See below the full output of deploy command
>>
>>
>> [root@ovirt01 ~]# hosted-engine --deploy
>> --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf
>> [ INFO  ] Stage: Initializing
>> [ INFO  ] Generating a temporary VNC password.
>> [ INFO  ] Stage: Environment setup
>>   Configuration files:
>> ['/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf']
>>   Log file:
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160501014326-8frbxk.log
>>   Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
>> [ INFO  ] Hardware supports virtualization
>> [ INFO  ] Bridge ovirtmgmt already created
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>> [ INFO  ] Stage: Environment customization
>>
>>   --== STORAGE CONFIGURATION ==--
>>
>>   During customization use CTRL-D to abort.
>> [ INFO  ] Installing on first host
>>
>>   --== SYSTEM CONFIGURATION ==--
>>
>>
>>   --== NETWORK CONFIGURATION ==--
>>
>>
>>   --== VM CONFIGURATION ==--
>>
>> [ INFO  ] Checking OVF archive content (could take a few minutes depending
>> on 

Re: [ovirt-users] Errors while trying to join an external LDPA provider

2016-05-02 Thread Ondra Machacek

On 05/02/2016 09:35 AM, Alexis HAUSER wrote:



Should I report this on the bugzilla ?




You can, but I believe this is not a bug but some misconfiguration; I have
tried a completely similar setup many times and it worked.

By the way, did you use 'ovirt-engine-extension-aaa-ldap-setup'? If not, you
can install it:
 $ yum install ovirt-engine-extension-aaa-ldap-setup

Then just run:
 $ ovirt-engine-extension-aaa-ldap-setup

And follow the steps. This tool handles all the permission and typo issues
for you, which could be introduced by manually creating those properties
files.


Yes, this is actually the tool I used first; then I modified things manually as
in the documentation.

The problem with this approach is that you need a profile file to be able to
set up a TLS connection between the LDAP server and the engine. But this file
is generated after the interactive setup, and the interactive setup doesn't
allow you to set things up properly since TLS isn't set up yet...


I am unsure I understand. What is missing in the interactive setup to
properly set up TLS?
You just enter the CA certificate path/url/system and the Java keystore file
is created for you by the tool.




So I had to set things up in "insecure" mode and then edit the configuration manually...
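For reference, a hedged sketch of the kind of manually edited profile under discussion (the file name and keys are assumptions modeled on the examples shipped with ovirt-engine-extension-aaa-ldap; verify the key names against the package's README before use):

```properties
# /etc/ovirt-engine/aaa/example.properties (sketch)
include = <openldap.properties>
vars.server = ldap.example.com
pool.default.serverset.single.server = ${global:vars.server}
# startTLS against the LDAP server, trusting its CA via a Java keystore
pool.default.ssl.startTLS = true
pool.default.ssl.truststore.file = /etc/ovirt-engine/aaa/example.jks
pool.default.ssl.truststore.password = changeit
```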




Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Yedidyah Bar David
On Mon, May 2, 2016 at 11:48 AM, Gianluca Cecchi
 wrote:
> On Mon, May 2, 2016 at 9:58 AM, Simone Tiraboschi wrote:
>>
>>
>>
>> hosted-engine-setup creates a fresh VM and injects a cloud-init script
>> to configure it and run engine-setup there to configure the engine
>> as needed.
>> Since engine-setup is running on the engine VM, triggered by
>> cloud-init, hosted-engine-setup has no way to really control its
>> process status, so we simply gather its output with a timeout of 10
>> minutes between each single output line.
>> If nothing happens within 10 minutes (the value is easily
>> customizable), hosted-engine-setup thinks that engine-setup is stuck.
>
>
>
> How can one customize the pre-set timeout?
> Could it be better to ask the user at the end of the timeout if he/she wants
> to wait again, instead of failing directly?

Perhaps, can you please open a bz?

>
>
>>
>> So the issue we have to understand is why this simple command took
>> more than 10 minutes in your env:
>> 2016-04-30 17:56:57 DEBUG
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
>> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
>> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
>> 'password-reset', 'admin', '--password=env:pass', '--force',
>> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
>> cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
>> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
>> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
>> 'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
>> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
>>
>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
>> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
>> 'OTOPI_EXECDIR': '/'}
>
>
>
>
> It seemed quite strange to me too (see below further info on this)
>
>>
>> Can you please check the entropy value on your host?
>>  cat /proc/sys/kernel/random/entropy_avail
>>
>
> I have not at hand now the server. I'll check soon and report
> Do you mean entropy of the physical server that will operate as hypervisor?

That's a good question. Simone - do you know if we start the guest with
virtio-rng?

This is another case of [1], perhaps we should reopen it.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1319827

>
>
>>
>> > As a last question how to clean up things in case I have to start from
>> > scratch.
>>
>> I'd recommend to redeploy from scratch instead of trying fixing it
>> but, before that, we need to understand the root issue.
>>
>
> So, trying to restart the setup with the generated answer file I got:
> 1) if the VM was still powered on, an error about this condition
> 2) if the VM was powered down, an error about the storage domain already in
> place and restart not supported in this condition.
>
> I was able to continue with these steps:
>
> a) remove what is inside the partially set up self-hosted-engine storage domain
> rm -rf /SHE_DOMAIN/*
> cd SHE_DOMAIN
> mklost+found
>
> b) reboot the hypervisor
>
> c) stop vdsmd
>
> d) start the setup again with the answer file
> It seems all went well and this time strangely the step that took more than
> 10 minutes before lasted less than 2 seconds
>
> I was then able to deploy storage and iso domains without problems and self
> hosted engine domain correctly detected and imported too.
> Created two CentOS VMs without problems (6.7 and 7.2).
>
> See below the full output of deploy command
>
>
> [root@ovirt01 ~]# hosted-engine --deploy
> --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Configuration files:
> ['/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf']
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160501014326-8frbxk.log
>   Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Bridge ovirtmgmt already created
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== STORAGE CONFIGURATION ==--
>
>   During customization use CTRL-D to abort.
> [ INFO  ] Installing on first host
>
>   --== SYSTEM CONFIGURATION ==--
>
>
>   --== NETWORK CONFIGURATION ==--
>
>
>   --== VM CONFIGURATION ==--
>
> [ INFO  ] Checking OVF archive content (could take a few minutes depending
> on archive size)
> [ INFO  ] Checking OVF XML content (could take a few minutes depending on
> archive size)
> [WARNING] OVF does not contain a valid image description, using default.
>   Enter root password that will be used for the engine appliance
> (leave it empty to skip):
>   Confirm appliance root password:
> 

Re: [ovirt-users] Deleting templates

2016-05-02 Thread Idan Shaby
Hi Nicolas,

You are right. This warning is unnecessary and should be removed.
I've opened a BZ [1] to track it.


Thanks for sharing,
Idan

[1] Bug 1332095 - Incorrect warning in section 7.3.3 - Deleting a Template

On Thu, Apr 28, 2016 at 12:27 PM, Ollie Armstrong  wrote:

> On 28 April 2016 at 10:20, Nicolas Ecarnot  wrote:
> > IIRC, I should be able to delete a template if all my templated VMs are
> *cloned*, and I should not be able to delete a template if some of my
> templated VMs are "based on/thined/tpl-snapshoted/whatever" ?
>
> As far as my understanding goes, this is correct.
>
> In my environment at least, whenever a VM is created through the web
> UI it is cloned and I can delete the template.  By default, the API
> doesn't seem to clone the disk, but this can be specified.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
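The clone-versus-thin choice Ollie mentions can be made explicit when creating a VM through the REST API. As a sketch based on the oVirt 3.6 API (element names and values here are illustrative and should be checked against your version's documentation):

```xml
<!-- POST /api/vms -->
<vm>
  <name>myvm</name>
  <cluster><name>Default</name></cluster>
  <template><name>mytemplate</name></template>
  <!-- clone=true copies the template disks, so the template can later be
       deleted; omitting it yields a thin, template-dependent VM -->
  <disks>
    <clone>true</clone>
  </disks>
</vm>
```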


Re: [ovirt-users] [hosted-engine] cdrom-installed engine rejected the password

2016-05-02 Thread Simone Tiraboschi
On Fri, Apr 29, 2016 at 12:16 PM, Wee Sritippho  wrote:
> Hi,
>
> I installed oVirt hosted-engine using cdrom method. When I finished the
> engine vm installation and tried to proceed, I got this message:
>
> [ ERROR ] The engine API didnt accepted the administrator password you
> provided Please enter it again to retry.
>   Enter admin@internal user password that will be used for accessing
> the Administrator Portal:
>
> When I tried to login via the web GUI, I got this message:
>
> Cannot Login. User Password has expired, Please change your password.
>
> Could this occur because I used Thai locale to install CentOS in the engine
> vm?
>
> [root@engine ~]# date
> ศ. 29 เม.ย. 2559 17:04:44 ICT

Can you please provide engine-setup logs and engine.log from your engine VM?

If your VM really thinks it is the year 2559 (the Thai Buddhist calendar equivalent of 2016 CE), something strange could happen.

> [root@engine ~]# rpm -qa | grep ovirt
> ovirt-engine-wildfly-8.2.1-1.el7.x86_64
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.5.3-1.el7.centos.noarch
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-backend-3.6.5.3-1.el7.centos.noarch
> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-3.6.5.3-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.5.0-1.el7.centos.noarch
> ovirt-image-uploader-3.6.0-1.el7.centos.noarch
> ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
> ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
> ovirt-engine-setup-base-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-tools-backup-3.6.5.3-1.el7.centos.noarch
> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-setup-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-tools-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-restapi-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
> ovirt-engine-lib-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-websocket-proxy-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-userportal-3.6.5.3-1.el7.centos.noarch
> ovirt-engine-dbscripts-3.6.5.3-1.el7.centos.noarch
>
> --
> Wee
>
>


Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Simone Tiraboschi
On Sat, Apr 30, 2016 at 10:59 PM, Gianluca Cecchi
 wrote:
> Hello,
> trying to deploy a self hosted engine on an Intel NUC6i5SYB with CentOS 7.2
> using oVirt 3.6.5 and appliance (picked up rpm is
> ovirt-engine-appliance-3.6-20160420.1.el7.centos.noarch)
>
> Near the end of the command
> hosted-engine --deploy
>
> I get
> ...
>   |- [ INFO  ] Initializing PostgreSQL
>   |- [ INFO  ] Creating PostgreSQL 'engine' database
>   |- [ INFO  ] Configuring PostgreSQL
>   |- [ INFO  ] Creating/refreshing Engine database schema
>   |- [ INFO  ] Creating/refreshing Engine 'internal' domain database
> schema
> [ ERROR ] Engine setup got stuck on the appliance
> [ ERROR ] Failed to execute stage 'Closing up': Engine setup is stalled on
> the appliance since 600 seconds ago. Please check its log on the appliance.
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue, fix and redeploy
>
> On host log I indeed see the 10 minutes timeout:
>
> 2016-04-30 19:56:52 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND |- [ INFO  ]
> Creating/refreshing Engine 'internal' domain database schema
> 2016-04-30 20:06:53 ERROR
> otopi.plugins.ovirt_hosted_engine_setup.engine.health health._closeup:140
> Engine setup got stuck on the appliance
>
> On engine I don't see any particular problem but a ten minutes delay in its
> log:
>
> 2016-04-30 17:56:57 DEBUG otopi.context context.dumpEnvironment:514
> ENVIRONMENT DUMP - END
> 2016-04-30 17:56:57 DEBUG otopi.context context._executeMethod:142 Stage
> misc METHOD
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc.Plugin._setupAdminPassword
> 2016-04-30 17:56:57 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None', cwd='None',
> env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/', 'OVIRT_ENGINE_JAVA_HOME':
> u'/usr/lib/jvm/jre', 'PATH':
> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly', 'OTOPI_EXECDIR': '/'}
> 2016-04-30 18:07:06 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.executeRaw:878 execute-result: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z'), rc=0
>
> and its last lines are:
>
> 2016-04-30 18:07:06 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.execute:936 execute-output: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z') stdout:
> updating user admin...
> user updated successfully

hosted-engine-setup creates a fresh VM, injects a cloud-init script to
configure it, and runs engine-setup there to configure the engine as
needed.
Since engine-setup runs on the engine VM, triggered by cloud-init,
hosted-engine-setup has no way to really control its process status, so
we simply gather its output with a timeout of 10 minutes between each
output line.
If nothing happens within 10 minutes (the value is easily customizable),
hosted-engine-setup assumes that engine-setup is stuck.
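The stall detection described above can be sketched roughly as follows. This is an illustrative Python approximation, not the actual otopi/hosted-engine-setup code: a subprocess is declared stuck if no new output line arrives within the timeout.

```python
import queue
import subprocess
import threading


def run_with_line_timeout(cmd, line_timeout=600):
    """Run cmd, raising TimeoutError if no new output line appears
    within line_timeout seconds. A sketch of the stall check that
    hosted-engine-setup applies to the engine-setup output it gathers
    from the appliance (not the real implementation)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    lines = queue.Queue()

    def reader():
        # Forward each output line to the queue; push a sentinel at EOF
        # so the consumer knows the process closed its stdout.
        for line in proc.stdout:
            lines.put(line)
        lines.put(None)

    threading.Thread(target=reader, daemon=True).start()
    while True:
        try:
            line = lines.get(timeout=line_timeout)
        except queue.Empty:
            proc.kill()
            raise TimeoutError(
                "stalled: no output for %s seconds" % line_timeout)
        if line is None:
            return proc.wait()
        print(line, end="")
```

With this scheme a long-running but chatty engine-setup is fine; only a silent gap longer than the timeout trips the error, which is exactly why a single slow step like the aaa-jdbc password reset can fail the whole deployment.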

So the issue we have to understand is why this simple command took
more than 10 minutes in your environment:
2016-04-30 17:56:57 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
'--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
'password-reset', 'admin', '--password=env:pass', '--force',
'--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
'/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
'OTOPI_EXECDIR': '/'}

Can you please check 

Re: [ovirt-users] Errors while trying to join an external LDPA provider

2016-05-02 Thread Alexis HAUSER

>> Should I report this on the bugzilla ?
>>

>You can, but I believe this is not a bug but some misconfiguration; I've
>tried a completely similar setup many times and it worked.
>
>Btw, did you use 'ovirt-engine-extension-aaa-ldap-setup'? If not, you
>can install it:
>  $ yum install ovirt-engine-extension-aaa-ldap-setup
>
>Then just run:
>  $ ovirt-engine-extension-aaa-ldap-setup
>
>And follow the steps. This tool handles for you all the permission and
>typo issues that could be introduced by manually creating those
>properties files.

Yes this is actually the tool I used first, then I modified manually as on the 
documentation.

The problem in this approach is the fact you need a .profile file to be able to 
set up a TLS connection between the LDAP and the engine. But this file is 
generated after the interactive setup. But the interactive setup doesn't allow 
you to setup things properly as the TLS isn't set up...

So I had to setup things with "insecure" mode and then edit it manually...
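For reference, the TLS settings live in the generated profile under /etc/ovirt-engine/aaa/. A minimal sketch of the startTLS-related keys (key names taken from the aaa-ldap extension's documented profile format; the server name, truststore path, and password are placeholders to adapt):

```properties
include = <openldap.properties>

vars.server = ldap.example.com
pool.default.serverset.single.server = ${global:vars.server}

# Enable startTLS and point at a Java keystore holding the LDAP CA cert
pool.default.ssl.startTLS = true
pool.default.ssl.truststore.file = /etc/ovirt-engine/aaa/ca.jks
pool.default.ssl.truststore.password = changeit
```

Editing these by hand after an "insecure" interactive run, as described above, is the workaround until the setup tool can configure TLS directly.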