[ovirt-users] Re: VM pools broken in 4.3

2019-05-21 Thread Lucie Leistnerova

Hi Rik,

I also tried a USB-enabled pool and other combinations, and unfortunately 
I could not reproduce the problem.


Maybe Michal can say where to look further.

On 5/21/19 9:29 AM, Rik Theys wrote:


Hi,

I've now created a new pool without USB support. After creating the 
pool, I had to restart ovirt-engine as I could not start the VMs from 
the pool (it indicated a similar request was already running).


Once ovirt-engine was restarted, I logged into the VM portal and 
was able to launch a VM from the new pool. Once it had booted, I powered it 
down (from within the VM). When the VM portal UI indicated the VM was 
down, I clicked run again to launch a new instance from the pool. The 
same error as before came up: there is no VM available (which is 
incorrect, as the pool is larger than one VM and no VMs were running at 
that point).


The log shows the errors and warnings below (INFO lines stripped). The 
engine seems to try to release a lock which does not exist, or references 
it with the wrong id. Is there a way to trace which locks are currently 
being held? Are they stored persistently somewhere in a way that could be 
causing my issue?
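
(For anyone who wants to dig into the locks: the warning below comes from
InMemoryLockManager, which suggests these locks are held in engine memory
rather than persisted, so engine.log is the main trace. A rough Python sketch
for pairing acquisitions with releases per lock key; the "Lock Acquired" string
is taken from the excerpts in this thread, while the release message patterns
and the default log path are assumptions that may need adjusting:)

import re
from collections import Counter

LOG = '/var/log/ovirt-engine/engine.log'   # default path (assumption)

# Acquisition string as seen in the excerpts below; the release strings are guesses.
ACQUIRED = re.compile(r"Lock Acquired to object 'EngineLock:\{exclusiveLocks='\[([^\]]*)\]'")
RELEASED = re.compile(r"(?:Lock freed to object|lock released).*?\[([^\]]*)\]", re.IGNORECASE)
FAILED = re.compile(r"Trying to release exclusive lock which does not exist, lock key: '([^']*)'")

acquired, released, failed = Counter(), Counter(), Counter()
with open(LOG) as f:
    for line in f:
        for pattern, counter in ((ACQUIRED, acquired), (RELEASED, released), (FAILED, failed)):
            m = pattern.search(line)
            if m:
                counter[m.group(1)] += 1

for key, n in acquired.items():
    if released[key] < n:
        print('%s: acquired %d time(s), released %d time(s)' % (key, n, released[key]))
for key, n in failed.items():
    print("release of non-existent lock '%s' seen %d time(s)" % (key, n))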


Regards,

Rik

2019-05-21 09:16:39,342+02 ERROR 
[org.ovirt.engine.core.bll.GetPermissionsForObjectQuery] (default 
task-2) [5e16cd77-01c8-43ed-80d2-c85452732570] Query execution failed 
due to insufficient permissions.
2019-05-21 09:16:39,345+02 ERROR 
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] 
(default task-2) [] Operation Failed: query execution failed due to 
insufficient permissions.

[The libvirt domain XML for VM testpool-1 (6489cc72-8ca5-4943-901c-bbd405bdac68) 
followed here, but its markup was stripped by the list archive. The surviving 
fragments show, among other things, a Haswell-noTSX CPU model, ovirt-guest-agent 
and org.qemu.guest_agent.0 channels, a /dev/urandom RNG device, several 
PCI-addressed devices, cluster compatibility version 4.3, and a block disk at 
/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/8690340a-8d4d-4d04-a8ab-b18fe6cbb78b.]

2019-05-21 09:16:44,399+02 WARN 
[org.ovirt.engine.core.bll.lock.InMemoryLockManager] (default task-9) 
[8a9f8c3f-e441-4121-aa7f-5b2cf26da6bb] Trying to release exclusive 
lock which does not exist, lock key: 
'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6USER_VM_POOL'
[A second copy of the VM definition and its ovirt-vm metadata 
(xmlns:ovirt-vm="http://ovirt.org/vm/1.0") for testpool-1 followed here, again 
with the markup stripped. The surviving fragments reference cluster version 4.3, 
the auto_resume resume behaviour, storage domain 
4194c70d-5b7e-441f-af6b-7d8754e89572, image 9651890f-9b0c-4857-abae-77b8b543a897 
with volumes 5f606a1e-3377-45b7-91d0-6398f7694c45 and 
8690340a-8d4d-4d04-a8ab-b18fe6cbb78b and their leases under 
/dev/4194c70d-5b7e-441f-af6b-7d8754e89572/leases, and an oVirt Node 7-6.1 host.]

[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi,

Things are going from bad to worse it seems.

I've created a new VM to be used as a template and installed it with
CentOS 7. I've created a template of this VM and created a new pool
based on this template.

When I try to boot one of the VM's from the pool, it fails and logs the
following error:

2019-05-17 14:48:01,709+0200 ERROR (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') The vm start process
failed (vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in
_startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2861, in
_run
    dom = self._connection.defineXML(self._domain.xml)
  File
"/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in
defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed',
conn=self)
libvirtError: XML error: requested USB port 3 not present on USB bus 0
2019-05-17 14:48:01,709+0200 INFO  (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') Changed state to Down: XML
error: requested USB port 3 not present on USB bus 0 (code=1) (vm:1675)
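
(That libvirt error generally means some device explicitly requests a USB port
number that the USB controller(s) in the domain XML do not provide, e.g. port 3
on a 2-port controller. Below is a small generic Python sketch, not vdsm code,
that checks a saved domain XML for such a mismatch; the per-model default port
counts are assumptions, and the XML vdsm tried to define is normally logged in
vdsm.log near the error:)

import sys
import xml.etree.ElementTree as ET

# Rough default port counts per controller model (assumptions; adjust as needed.
# An explicit 'ports' attribute, when present, overrides them anyway).
DEFAULT_PORTS = {'piix3-uhci': 2, 'ich9-uhci1': 2, 'ich9-uhci2': 2,
                 'ich9-uhci3': 2, 'ich9-ehci1': 6, 'nec-xhci': 4,
                 'qemu-xhci': 4, 'none': 0}

def check_usb_ports(domxml_path):
    root = ET.parse(domxml_path).getroot()
    # Available ports per USB bus (the controller index is the bus number).
    buses = {}
    for ctrl in root.findall("./devices/controller[@type='usb']"):
        index = int(ctrl.get('index', '0'))
        model = ctrl.get('model', 'piix3-uhci')
        ports = ctrl.get('ports')
        buses[index] = int(ports) if ports else DEFAULT_PORTS.get(model, 2)
    # Flag every device whose explicit USB address does not fit on its bus.
    for addr in root.findall(".//address[@type='usb']"):
        bus = int(addr.get('bus', '0'))
        port = addr.get('port', '0')
        top_port = int(port.split('.')[0])   # hub ports look like "2.1"
        avail = buses.get(bus)
        if avail is None or top_port > avail:
            print('USB port %s requested on bus %d, but that bus offers %s ports'
                  % (port, bus, avail))

if __name__ == '__main__':
    check_usb_ports(sys.argv[1])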

The strange thing is that this error was not present when I created the
initial master VM.

I get similar errors when I select Q35 type VM's instead of the default.

Did your test pool have VM's with USB enabled?

Regards,

Rik

On 5/17/19 10:48 AM, Rik Theys wrote:
>
> Hi Lucie,
>
> On 5/16/19 6:27 PM, Lucie Leistnerova wrote:
>>
>> Hi Rik,
>>
>> On 5/14/19 2:21 PM, Rik Theys wrote:
>>>
>>> Hi,
>>>
>>> It seems VM pools are completely broken since our upgrade to 4.3. Is
>>> anybody else also experiencing this issue?
>>>
>> I've tried to reproduce this issue. And I can use pool VMs as
>> expected, no problem. I've tested clean install and also upgrade from
>> 4.2.8.7.
>> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
>> ovirt-web-ui-1.5.2-1.el7ev.noarch 
> That is strange. I will try to create a new pool to verify if I also
> have the problem with the new pool. Currently we are having this issue
> with two different pools. Both pools were created in August or
> September of last year. I believe it was on 4.2 but could still have
> been 4.1.
>>>
>>> Only a single instance from a pool can be used. Afterwards the pool
>>> becomes unusable due to a lock not being released. Once ovirt-engine
>>> is restarted, another (single) VM from a pool can be used.
>>>
>> What users are running the VMs? What are the permissions?
>
> The users are taking VM's from the pools using the user portal. They
> are all member of a group that has the UserRole permission on the pools.
>
>> Is each VM run by a different user? Were some VMs already running
>> before the upgrade?
>
> A user can take at most 1 VM from each pool. So it's possible a user
> has two VM's running (but not from the same pool). It doesn't matter
> which user is taking a VM from the pool. Once a user has taken a VM
> from the pool, no other user can take a VM. If the user that was able
> to take a VM powers it down and tries to run a new VM, it will also fail.
>
> During the upgrade of the host, no VM's were running.
>
>> Please provide exact steps. 
>
> 1. ovirt-engine is restarted.
>
> 2. User A takes a VM from the pool and can work.
>
> 3. User B can not take a VM from that pool.
>
> 4. User A powers off the VM it was using. Once the VM is down, (s)he
> tries to take a new VM, which also fails now.
>
> It seems the VM pool is locked when the first user takes a VM and the
> lock is never released.
>
> In our case, there are no prestarted VM's. I can try to see if that
> makes a difference.
>
>
> Are there any more steps I can take to debug this issue regarding the
> locks?
>
> Regards,
>
> Rik
>
>>> I've added my findings to bug 1462236, but I'm no longer sure the
>>> issue is the same as the one initially reported.
>>>
>>> When the first VM of a pool is started:
>>>
>>> 2019-05-14 13:26:46,058+02 INFO  
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>>> IsVmDuringInitiatingVDSCommand( 
>>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>>  log id: 2fb4f7f5
>>> 2019-05-14 13:26:46,058+02 INFO  
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
>>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2fb4f7f5
>>> 2019-05-14 13:26:46,208+02 INFO  [org.ovirt.engine.core.bll.VmPoolHandler] 
>>> (default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to 
>>> object 
>>> 'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]', 
>>> shared

[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi Gianluca,

We are not using gluster, but FC storage.

All VM's from the pool are created from a template.

Regards,

Rik

On 5/16/19 6:48 PM, Gianluca Cecchi wrote:
> On Thu, May 16, 2019 at 6:32 PM Lucie Leistnerova wrote:
>
> Hi Rik,
>
> On 5/14/19 2:21 PM, Rik Theys wrote:
>>
>> Hi,
>>
>> It seems VM pools are completely broken since our upgrade to 4.3.
>> Is anybody else also experiencing this issue?
>>
> I've tried to reproduce this issue. And I can use pool VMs as
> expected, no problem. I've tested clean install and also upgrade
> from 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch
>>
>> Only a single instance from a pool can be used. Afterwards the
>> pool becomes unusable due to a lock not being released. Once
>> ovirt-engine is restarted, another (single) VM from a pool can be
>> used.
>>
> What users are running the VMs? What are the permissions?
> Is each VM run by a different user? Were some VMs already running
> before the upgrade?
> Please provide exact steps.
>>
>>
> Hi, just an idea... could it be related in any way with disks always
> created as preallocated problems reported by users using gluster as
> backend storage?
> What kind of storage domains are you using Rik?
>
> Gianluca 

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YU4VIWOFBN4MB4FDOHQCUIUSEHNW2TV/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi Lucie,

On 5/16/19 6:27 PM, Lucie Leistnerova wrote:
>
> Hi Rik,
>
> On 5/14/19 2:21 PM, Rik Theys wrote:
>>
>> Hi,
>>
>> It seems VM pools are completely broken since our upgrade to 4.3. Is
>> anybody else also experiencing this issue?
>>
> I've tried to reproduce this issue. And I can use pool VMs as
> expected, no problem. I've tested clean install and also upgrade from
> 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch 
That is strange. I will try to create a new pool to verify if I also
have the problem with the new pool. Currently we are having this issue
with two different pools. Both pools were created in August or September
of last year. I believe it was on 4.2 but could still have been 4.1.
>>
>> Only a single instance from a pool can be used. Afterwards the pool
>> becomes unusable due to a lock not being released. Once ovirt-engine
>> is restarted, another (single) VM from a pool can be used.
>>
> What users are running the VMs? What are the permissions?

The users are taking VM's from the pools using the user portal. They are
all member of a group that has the UserRole permission on the pools.
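
(Since the engine also logged "Query execution failed due to insufficient
permissions" for the portal user elsewhere in this thread, it may be worth
dumping what is actually assigned on the pool. A rough ovirt-sdk4 sketch with
placeholder connection details and pool name; I'm assuming the pool and
permissions sub-services follow the usual SDK locator naming,
pool_service()/permissions_service():)

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder URL
    username='admin@internal',
    password='secret',                                    # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    pools_service = connection.system_service().vm_pools_service()
    pool = pools_service.list(search='name=testpool')[0]  # placeholder pool name
    perms = pools_service.pool_service(pool.id).permissions_service().list()
    for perm in perms:
        role = connection.follow_link(perm.role)
        subject = perm.group or perm.user
        print(role.name, getattr(subject, 'name', None) or subject.id)
finally:
    connection.close()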

> Is each VM run by a different user? Were some VMs already running before
> the upgrade?

A user can take at most 1 VM from each pool. So it's possible a user has
two VM's running (but not from the same pool). It doesn't matter which
user is taking a VM from the pool. Once a user has taken a VM from the
pool, no other user can take a VM. If the user that was able to take a
VM powers it down and tries to run a new VM, it will also fail.

During the upgrade of the host, no VM's were running.

> Please provide exact steps. 

1. ovirt-engine is restarted.

2. User A takes a VM from the pool and can work.

3. User B can not take a VM from that pool.

4. User A powers off the VM it was using. Once the VM is down, (s)he
tries to take a new VM, which also fails now.

It seems the VM pool is locked when the first user takes a VM and the
lock is never released.
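
(If it helps to narrow things down, the allocation step can also be driven
outside the VM portal through the REST API. A rough ovirt-sdk4 sketch with
placeholder credentials and pool name; I'm assuming the /vmpools/{id}/allocatevm
action is exposed as allocate_vm() on the pool service:)

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder URL
    username='usera@example.com',                         # a pool user, not admin
    password='secret',                                    # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    pools_service = connection.system_service().vm_pools_service()
    # filter=True asks for user-level results when connecting as a non-admin user.
    pool = pools_service.list(search='name=testpool', filter=True)[0]
    # This should either hand the user a VM or fail the same way the portal does.
    pools_service.pool_service(pool.id).allocate_vm()
finally:
    connection.close()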

In our case, there are no prestarted VM's. I can try to see if that
makes a difference.


Are there any more steps I can take to debug this issue regarding the locks?

Regards,

Rik

>> I've added my findings to bug 1462236, but I'm no longer sure the
>> issue is the same as the one initially reported.
>>
>> When the first VM of a pool is started:
>>
>> 2019-05-14 13:26:46,058+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>> IsVmDuringInitiatingVDSCommand( 
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>  log id: 2fb4f7f5
>> 2019-05-14 13:26:46,058+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2fb4f7f5
>> 2019-05-14 13:26:46,208+02 INFO  [org.ovirt.engine.core.bll.VmPoolHandler] 
>> (default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to 
>> object 
>> 'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]', 
>> sharedLocks=''}'
>>
>> -> it has acquired a lock (lock1)
>>
>> 2019-05-14 13:26:46,247+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object 
>> 'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
>>  sharedLocks=''}'
>>
>> -> it has acquired another lock (lock2)
>>
>> 2019-05-14 13:26:46,352+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: 
>> AttachUserToVmFromPoolAndRunCommand internal: false. Entities affected :  
>> ID: 4c622213-e5f4-4032-8639-643174b698cc Type: VmPoolAction group 
>> VM_POOL_BASIC_OPERATIONS with role type USER
>> 2019-05-14 13:26:46,393+02 INFO  
>> [org.ovirt.engine.core.bll.AddPermissionCommand] (default task-6) 
>> [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: AddPermissionCommand 
>> internal: true. Entities affected :  ID: 
>> d8a99676-d520-425e-9974-1b1efe6da8a5 Type: VMAction group 
>> MANIPULATE_PERMISSIONS with role type USER
>> 2019-05-14 13:26:46,433+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Succeeded giving user 
>> 'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6' permission to Vm 
>> 'd8a99676-d520-425e-9974-1b1efe6da8a5'
>> 2019-05-14 13:26:46,608+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>> IsVmDuringInitiatingVDSCommand( 
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>  log id: 67acc561
>> 2019-05-14 13:26:46,608+02 INFO  
>> [org.ovirt.engine.cor

[ovirt-users] Re: VM pools broken in 4.3

2019-05-16 Thread Gianluca Cecchi
On Thu, May 16, 2019 at 6:32 PM Lucie Leistnerova 
wrote:

> Hi Rik,
> On 5/14/19 2:21 PM, Rik Theys wrote:
>
> Hi,
>
> It seems VM pools are completely broken since our upgrade to 4.3. Is
> anybody else also experiencing this issue?
>
> I've tried to reproduce this issue. And I can use pool VMs as expected, no
> problem. I've tested clean install and also upgrade from 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch
>
> Only a single instance from a pool can be used. Afterwards the pool
> becomes unusable due to a lock not being released. Once ovirt-engine is
> restarted, another (single) VM from a pool can be used.
>
> What users are running the VMs? What are the permissions?
> Is each VM run by a different user? Were some VMs already running before the
> upgrade?
> Please provide exact steps.
>
>
> Hi, just an idea... could it be related in any way with disks always
created as preallocated problems reported by users using gluster as backend
storage?
What kind of storage domains are you using Rik?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IBLEPP446ZONZG3C46OOHVU73CCF7LB/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-16 Thread Lucie Leistnerova

Hi Rik,

On 5/14/19 2:21 PM, Rik Theys wrote:


Hi,

It seems VM pools are completely broken since our upgrade to 4.3. Is 
anybody else also experiencing this issue?


I've tried to reproduce this issue. And I can use pool VMs as expected, 
no problem. I've tested clean install and also upgrade from 4.2.8.7.
Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with 
ovirt-web-ui-1.5.2-1.el7ev.noarch


Only a single instance from a pool can be used. Afterwards the pool 
becomes unusable due to a lock not being released. Once ovirt-engine 
is restarted, another (single) VM from a pool can be used.



What users are running the VMs? What are the permissions?
Is each VM run by a different user? Were some VMs already running before 
the upgrade?

Please provide exact steps.


I've added my findings to bug 1462236, but I'm no longer sure the 
issue is the same as the one initially reported.


When the first VM of a pool is started:

2019-05-14 13:26:46,058+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
IsVmDuringInitiatingVDSCommand( 
IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
 log id: 2fb4f7f5
2019-05-14 13:26:46,058+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
IsVmDuringInitiatingVDSCommand, return: false, log id: 2fb4f7f5
2019-05-14 13:26:46,208+02 INFO  [org.ovirt.engine.core.bll.VmPoolHandler] 
(default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]', 
sharedLocks=''}'

-> it has acquired a lock (lock1)

2019-05-14 13:26:46,247+02 INFO  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
 sharedLocks=''}'

-> it has acquired another lock (lock2)

2019-05-14 13:26:46,352+02 INFO  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: 
AttachUserToVmFromPoolAndRunCommand internal: false. Entities affected :  ID: 
4c622213-e5f4-4032-8639-643174b698cc Type: VmPoolAction group 
VM_POOL_BASIC_OPERATIONS with role type USER
2019-05-14 13:26:46,393+02 INFO  
[org.ovirt.engine.core.bll.AddPermissionCommand] (default task-6) 
[e3c5745c-e593-4aed-ba67-b173808140e8] Running command: AddPermissionCommand 
internal: true. Entities affected :  ID: d8a99676-d520-425e-9974-1b1efe6da8a5 
Type: VMAction group MANIPULATE_PERMISSIONS with role type USER
2019-05-14 13:26:46,433+02 INFO  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Succeeded giving user 
'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6' permission to Vm 
'd8a99676-d520-425e-9974-1b1efe6da8a5'
2019-05-14 13:26:46,608+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
IsVmDuringInitiatingVDSCommand( 
IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
 log id: 67acc561
2019-05-14 13:26:46,608+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
IsVmDuringInitiatingVDSCommand, return: false, log id: 67acc561
2019-05-14 13:26:46,719+02 INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running 
command: RunVmCommand internal: true. Entities affected :  ID: 
d8a99676-d520-425e-9974-1b1efe6da8a5 Type: VMAction group RUN_VM with role type 
USER
2019-05-14 13:26:46,791+02 INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
UpdateVmDynamicDataVDSCommand( 
UpdateVmDynamicDataVDSCommandParameters:{hostId='null', 
vmId='d8a99676-d520-425e-9974-1b1efe6da8a5', 
vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@6db8c94d'}), 
log id: 2c110e4
2019-05-14 13:26:46,795+02 INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
UpdateVmDynamicDataVDSCommand, return: , log id: 2c110e4
2019-05-14 13:26:46,804+02 INFO  
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-6) 
[e3c5745c-e593-4aed-ba67-b173808140e8] START, CreateVDSCommand( 
CreateVDSCommandParameters:{hostId='eec7ec2b-cae1-4bb9-b933-4dff47a70bdb', 
vmId='d8a99676-d520-425e-9974-1b1efe6da8a5', vm='VM [stud-c7-1]'}), log id: 
71d599f2
2019-05-14 13:26:46,809+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
CreateBrokerVDSCommand(HostName = stud