Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-07-06 Thread Charles Tassell

Hi Fred,

  Unfortunately we reinstalled the cluster a while ago, so I don't know
the exact version we were running at the time.  It would have been the
latest version in the 4.1 production repos, though, if that helps.



On 2017-07-02 11:32 AM, Fred Rolland wrote:

It seems you hit [1].
Can you tell me the exact version of the engine?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1395793

On Thu, May 11, 2017 at 10:29 PM, Charles Tassell wrote:


Sure, it's pretty big so I've put it online for download at
http://krissy.islandadmin.ca/public/engine.log.txt



On 2017-05-11 04:08 PM, Fred Rolland wrote:

The locking is on the engine side, and restarting VDSM will
not affect it.
Can you send the whole engine log?
Which exact version are you using?


On Thu, May 11, 2017 at 9:30 PM, Charles Tassell wrote:

Just as an update, I created a new VM and had the same issue:
the disk remains locked.  So I then added a new data store
(this one iSCSI, not NFS) and created a new VM on that.  Again,
the disk remains locked.  So the problem seems to be that any
action that sets out to modify a disk image on my cluster locks
the disk and keeps it locked permanently.

I tried restarting the vdsm daemon, but that didn't make a
difference.  I'm seeing this in my sanlock.log file though,
which worries me:


2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error
-202 delta_length 10 last_success 1738508

Here's the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio
timeout RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto
10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read
timeout 10 sec offset 0

/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio
timeout RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto
10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read
timeout 10 sec offset 0

/var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error
-202 delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio
collect RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000
result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio
collect RD 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000
result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio
collect RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000
result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long
write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace

b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource

b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576
for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace

b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 

Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-07-02 Thread Fred Rolland
It seems you hit [1].
Can you tell me the exact version of the engine?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1395793
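
(If it helps, the installed engine version can usually be checked on the
engine host with something like "rpm -q ovirt-engine", assuming an
RPM-based install, or from the About dialog in the Admin Portal.)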

On Thu, May 11, 2017 at 10:29 PM, Charles Tassell 
wrote:

> Sure, it's pretty big so I've put it online for download at
> http://krissy.islandadmin.ca/public/engine.log.txt
>
>
> On 2017-05-11 04:08 PM, Fred Rolland wrote:
>
> The locking is on the engine side, and restarting VDSM will not affect
> it.
> Can you send the whole engine log?
> Which exact version are you using?
>
>
> On Thu, May 11, 2017 at 9:30 PM, Charles Tassell 
> wrote:
>
>> Just as an update, I created a new VM and had the same issue: the disk
>> remains locked.  So I then added a new data store (this one iSCSI, not NFS)
>> and created a new VM on that.  Again, the disk remains locked.  So the
>> problem seems to be that any action that sets out to modify a disk image on my
>> cluster locks the disk and keeps it locked permanently.
>>
>> I tried restarting the vdsm daemon, but that didn't make a difference.
>> I'm seeing this in my sanlock.log file though, which worries me:
>>
>>
>> 2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
>> delta_length 10 last_success 1738508
>> 2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
>> delta_length 10 last_success 1738508
>>
>> Here's the last 20 lines:
>> 2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202
>> delta_length 10 last_success 1738508
>> 2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD
>> 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
>> 2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10
>> sec offset 0 /rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-
>> 188c-4e10-9893-588ceb81b354/dom_md/ids
>> 2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
>> delta_length 10 last_success 1738508
>> 2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD
>> 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
>> 2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10
>> sec offset 0 /var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/
>> 5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
>> 2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
>> delta_length 10 last_success 1738508
>> 2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD
>> 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
>> 2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD
>> 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
>> 2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD
>> 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
>> 2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource
>> 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/
>> mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-
>> 588ceb81b354/dom_md/leases:1048576 for 7,21,78395
>> 2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time
>> 10 sec
>> 2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource
>> 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/
>> mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-
>> 588ceb81b354/dom_md/leases:1048576 for 7,21,78395
>> 2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource
>> 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/
>> mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-
>> 588ceb81b354/dom_md/leases:1048576 for 7,21,112346
>> 2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace
>> b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-
>> 46e1-bd57-2cf2b2445087/ids:0
>> 2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881
>> 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
>> 2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource
>> b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-
>> 46e1-bd57-2cf2b2445087/leases:1048576 for 7,21,112346
>> 2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace
>> b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-
>> 46e1-bd57-2cf2b2445087/ids:0
>> 2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909
>> 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
>> 2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065
>> 4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
>>
>>
>> On 2017-05-11 10:09 AM, Charles Tassell wrote:
>>
>> Hi Freddy,
>>
>>   Sure, thanks for looking into this.  Here you go:
>>
>> 2017-05-10 11:35:30,249-03 INFO  
>> [org.ovirt.engine.core.bll.aaa.SessionDataContainer]
>> (DefaultQuartzScheduler8) [1c84abac] Not removing session
>> 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPb
>> jYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user
>> 'admin@internal-authz'.[snip]
>> On 2017-05-11 04:30 AM, Fred Rolland wrote:
>>
>> Hi,
>>
>> 

Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-05-11 Thread Charles Tassell
Sure, it's pretty big so I've put it online for download at 
http://krissy.islandadmin.ca/public/engine.log.txt


On 2017-05-11 04:08 PM, Fred Rolland wrote:
The locking is on the engine side, and restarting VDSM will not
affect it.

Can you send the whole engine log?
Which exact version are you using?


On Thu, May 11, 2017 at 9:30 PM, Charles Tassell wrote:


Just as an update, I created a new VM and had the same issue: the
disk remains locked.  So I then added a new data store (this one
iSCSI, not NFS) and created a new VM on that.  Again, the disk
remains locked.  So the problem seems to be that any action that
sets out to modify a disk image on my cluster locks the disk and keeps
it locked permanently.

I tried restarting the vdsm daemon, but that didn't make a
difference.  I'm seeing this in my sanlock.log file though, which
worries me:


2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
delta_length 10 last_success 1738508

Here's the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD
0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read
timeout 10 sec offset 0

/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD
0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read
timeout 10 sec offset 0

/var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD
0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0
match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD
0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0
match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD
0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0
match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long
write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource

20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace

b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource

b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576
for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace

b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065
4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0


On 2017-05-11 10:09 AM, Charles Tassell wrote:


Hi Freddy,

  Sure, thanks for looking into this.  Here you go:

2017-05-10 11:35:30,249-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler8) [1c84abac] Not removing session

'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
session has running commands for user 'admin@internal-authz'.[snip]

On 2017-05-11 04:30 AM, Fred Rolland wrote:

Hi,

Can you provide the engine log ?

Thanks,

Freddy

On Wed, May 10, 2017 at 5:57 PM, Charles Tassell


Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-05-11 Thread Fred Rolland
The locking is on the engine side, and restarting VDSM will not affect
it.
Can you send the whole engine log?
Which exact version are you using?


On Thu, May 11, 2017 at 9:30 PM, Charles Tassell 
wrote:

> Just as an update, I created a new VM and had the same issue: the disk
> remains locked.  So I then added a new data store (this one iSCSI, not NFS)
> and created a new VM on that.  Again, the disk remains locked.  So the
> problem seems to be that any action that sets out to modify a disk image on my
> cluster locks the disk and keeps it locked permanently.
>
> I tried restarting the vdsm daemon, but that didn't make a difference.
> I'm seeing this in my sanlock.log file though, which worries me:
>
>
> 2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
> delta_length 10 last_success 1738508
> 2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
> delta_length 10 last_success 1738508
>
> Here's the last 20 lines:
> 2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202
> delta_length 10 last_success 1738508
> 2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD
> 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
> 2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10
> sec offset 0 /rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-
> 188c-4e10-9893-588ceb81b354/dom_md/ids
> 2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
> delta_length 10 last_success 1738508
> 2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD
> 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
> 2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10
> sec offset 0 /var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-
> 3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-
> c1653257cf04
> 2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
> delta_length 10 last_success 1738508
> 2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD
> 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
> 2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD
> 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
> 2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD
> 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
> 2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource
> 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-
> center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-
> 4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
> 2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time
> 10 sec
> 2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource
> 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-
> center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-
> 4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
> 2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource
> 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-
> center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-
> 4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,112346
> 2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace
> b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-
> 1924-46e1-bd57-2cf2b2445087/ids:0
> 2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881
> 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
> 2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource
> b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576
> for 7,21,112346
> 2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace
> b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-
> 1924-46e1-bd57-2cf2b2445087/ids:0
> 2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909
> 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
> 2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065
> 4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
>
>
> On 2017-05-11 10:09 AM, Charles Tassell wrote:
>
> Hi Freddy,
>
>   Sure, thanks for looking into this.  Here you go:
>
> 2017-05-10 11:35:30,249-03 INFO  
> [org.ovirt.engine.core.bll.aaa.SessionDataContainer]
> (DefaultQuartzScheduler8) [1c84abac] Not removing session '
> vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
> session has running commands for user 'admin@internal-authz'.[snip]
> On 2017-05-11 04:30 AM, Fred Rolland wrote:
>
> Hi,
>
> Can you provide the engine log ?
>
> Thanks,
>
> Freddy
>
> On Wed, May 10, 2017 at 5:57 PM, Charles Tassell 
> wrote:
>
>> Hi Everyone,
>>
>>   I'm having some issues with my oVirt 4.1 (fully updated to latest
>> release as of yesterday) cluster.  When I clone a VM the disks of both the
>> original and the clone stay in the locked state, and the only way I can
>> resolve it is to go into the database on the engine and run "update images
>> set imagestatus=1 

Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-05-11 Thread Charles Tassell
Just as an update, I created a new VM and had the same issue: the disk
remains locked.  So I then added a new data store (this one iSCSI, not
NFS) and created a new VM on that.  Again, the disk remains locked.  So
the problem seems to be that any action that sets out to modify a disk image
on my cluster locks the disk and keeps it locked permanently.


I tried restarting the vdsm daemon, but that didn't make a difference.  
I'm seeing this in my sanlock.log file though, which worries me:



2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 
delta_length 10 last_success 1738508


Here's the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202 
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD 
0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10 
sec offset 0 
/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD 
0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10 
sec offset 0 
/var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 
delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD 
0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD 
0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD 
0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource 
20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 
for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time 
10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource 
20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 
for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource 
20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 
for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace 
b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881 
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource 
b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576 
for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace 
b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909 
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065 
4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
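
(Those renewal errors line up with the delta_renew read timeouts on the
dom_md/ids file above, so the storage itself may be slow to respond.  As a
rough check, assuming the NFS domain is still mounted at the same path, a
direct read of that file can be timed with something like:

time dd if=/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids of=/dev/null bs=1M iflag=direct

That is only a sketch of a latency check, not an oVirt-specific tool.)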



On 2017-05-11 10:09 AM, Charles Tassell wrote:


Hi Freddy,

  Sure, thanks for looking into this.  Here you go:

2017-05-10 11:35:30,249-03 INFO 
[org.ovirt.engine.core.bll.aaa.SessionDataContainer] 
(DefaultQuartzScheduler8) [1c84abac] Not removing session 
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', 
session has running commands for user 'admin@internal-authz'.[snip]


On 2017-05-11 04:30 AM, Fred Rolland wrote:

Hi,

Can you provide the engine log ?

Thanks,

Freddy

On Wed, May 10, 2017 at 5:57 PM, Charles Tassell wrote:


Hi Everyone,

  I'm having some issues with my oVirt 4.1 (fully updated to
latest release as of yesterday) cluster.  When I clone a VM the
disks of both the original and the clone stay in the locked
state, and the only way I can resolve it is to go into the
database on the engine and run "update images set imagestatus=1
where imagestatus=2;"
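
(For reference, that statement is run against the engine database on the
engine host.  A minimal sketch, assuming the default database name
"engine" and local access as the postgres user, would be:

su - postgres -c 'psql engine -c "UPDATE images SET imagestatus = 1 WHERE imagestatus = 2;"'

oVirt also ships an unlock_entity.sh helper under
/usr/share/ovirt-engine/setup/dbutils/ that is meant for clearing stuck
locks, if it is present on this version; clearing locks by hand is the
workaround, not a fix.)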

  I'm using NFS4 as a datastore and the disks seem to copy fine
(file sizes match and everything), but the locking worries me.
To clone the VM I just shut the source VM down and then right
click on it and select "Clone".

  I've attached the full VDSM log from my last attempt, but here

Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-05-11 Thread Charles Tassell

Hi Freddy,

  Sure, thanks for looking into this.  Here you go:

2017-05-10 11:35:30,249-03 INFO 
[org.ovirt.engine.core.bll.aaa.SessionDataContainer] 
(DefaultQuartzScheduler8) [1c84abac] Not removing session 
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', 
session has running commands for user 'admin@internal-authz'.
2017-05-10 11:35:30,249-03 INFO 
[org.ovirt.engine.core.bll.aaa.SessionDataContainer] 
(DefaultQuartzScheduler8) [1c84abac] Not removing session 
'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==', 
session has running commands for user 'admin@internal-authz'.
2017-05-10 11:35:37,793-03 INFO 
[org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default 
task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=]', 
sharedLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=]'}'
2017-05-10 11:35:37,843-03 INFO 
[org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default 
task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Running command: 
UpdateVmDiskCommand internal: false. Entities affected : ID: 
d73206ed-89ba-48a9-82ff-c107c1af60f0 Type: DiskAction group 
EDIT_DISK_PROPERTIES with role type USER
2017-05-10 11:35:37,846-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] START, 
SetVolumeDescriptionVDSCommand( 
SetVolumeDescriptionVDSCommandParameters:{runAsync='true', 
storagePoolId='0001-0001-0001-0001-0311', 
ignoreFailoverLimit='false', 
storageDomainId='20423d5e-188c-4e10-9893-588ceb81b354', 
imageGroupId='d73206ed-89ba-48a9-82ff-c107c1af60f0', 
imageId='bba1813d-9e91-4e37-bed7-40beedda8051'}), log id: 31e5d891
2017-05-10 11:35:37,846-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] -- 
executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
2017-05-10 11:35:37,846-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ 
spUUID=0001-0001-0001-0001-0311
2017-05-10 11:35:37,846-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ 
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
2017-05-10 11:35:37,846-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ 
imageGroupGUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
2017-05-10 11:35:37,846-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ 
volUUID=bba1813d-9e91-4e37-bed7-40beedda8051
2017-05-10 11:35:37,846-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ 
description={"DiskAlias":"Ubu16_Disk1","DiskDescription":""}
2017-05-10 11:35:37,937-03 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] FINISH, 
SetVolumeDescriptionVDSCommand, log id: 31e5d891
2017-05-10 11:35:37,981-03 INFO 
[org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default 
task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Lock freed to object 
'EngineLock:{exclusiveLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=]', 
sharedLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=]'}'
2017-05-10 11:35:37,988-03 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] EVENT_ID: 
USER_UPDATE_VM_DISK(88), Correlation ID: 
2c63ae40-b59d-4a80-b348-6c9b9fa176b8, Call Stack: null, Custom Event ID: 
-1, Message: VM Ubu16 Xenial_Disk1 disk was updated by admin@internal-authz.
2017-05-10 11:36:19,931-03 INFO 
[org.ovirt.engine.core.bll.CloneVmCommand] (default task-15) 
[6c38dbf9-6c8f-4390-979f-4031b5b46c98] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[chUbu16Clone=]', 
sharedLocks='[d73206ed-89ba-48a9-82ff-c107c1af60f0=, 
6d51dd95-ba25-4ae9-8bee-303f90e76e62=]'}'
2017-05-10 11:36:19,981-03 INFO 
[org.ovirt.engine.core.bll.CloneVmCommand] (default task-15) 
[6c38dbf9-6c8f-4390-979f-4031b5b46c98] Running command: CloneVmCommand 
internal: false. Entities affected :  ID: 
6d51dd95-ba25-4ae9-8bee-303f90e76e62 Type: VMAction group CREATE_VM with 
role type 

Re: [ovirt-users] Cloning VM on NFS Leads to Locked Disks

2017-05-11 Thread Fred Rolland
Hi,

Can you provide the engine log ?

Thanks,

Freddy

On Wed, May 10, 2017 at 5:57 PM, Charles Tassell 
wrote:

> Hi Everyone,
>
>   I'm having some issues with my oVirt 4.1 (fully updated to latest
> release as of yesterday) cluster.  When I clone a VM the disks of both the
> original and the clone stay in the locked state, and the only way I can
> resolve it is to go into the database on the engine and run "update images
> set imagestatus=1 where imagestatus=2;"
>
>   I'm using NFS4 as a datastore and the disks seem to copy fine (file
> sizes match and everything), but the locking worries me.  To clone the VM I
> just shut the source VM down and then right click on it and select "Clone"
>
>   I've attached the full VDSM log from my last attempt, but here is the
> excerpt of the lines just referencing the two disks
> (d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VMs disk and
> 670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
>
> 2017-05-10 11:36:20,120-0300 INFO  (jsonrpc/2) [dispatcher] Run and
> protect: copyImage(sdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354',
> spUUID=u'0001-0001-0001-0001-0311', vmUUID='',
> srcImgUUID=u'd73206ed-89ba-48a9-82ff-c107c1af60f0',
> srcVolUUID=u'bba1813d-9e91-4e37-bed7-40beedda8051',
> dstImgUUID=u'670a7b20-fecd-45c6-af5c-3c7b98258224',
> dstVolUUID=u'013d8855-4e49-4984-8266-6a5e9437dff7', description=u'',
> dstSdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354', volType=8,
> volFormat=5, preallocate=2, postZero=u'false', force=u'false',
> discard=False) (logUtils:51)
> 2017-05-10 11:36:20,152-0300 INFO  (jsonrpc/2) [storage.Image] image
> d73206ed-89ba-48a9-82ff-c107c1af60f0 in domain
> 20423d5e-188c-4e10-9893-588ceb81b354 has vollist
> [u'bba1813d-9e91-4e37-bed7-40beedda8051'] (image:319)
> 2017-05-10 11:36:20,169-0300 INFO  (jsonrpc/2) [storage.Image]
> sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 
> imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
> chain=[] (image:249)
> 2017-05-10 11:36:20,292-0300 INFO  (tasks/0) [storage.Image]
> sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 
> imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
> chain=[] (image:249)
> 2017-05-10 11:36:20,295-0300 INFO  (tasks/0) [storage.Image]
> sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 vmUUID=
> srcImgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
> srcVolUUID=bba1813d-9e91-4e37-bed7-40beedda8051
> dstImgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224
> dstVolUUID=013d8855-4e49-4984-8266-6a5e9437dff7
> dstSdUUID=20423d5e-188c-4e10-9893-588ceb81b354 volType=8 volFormat=RAW
> preallocate=SPARSE force=False postZero=False discard=False (image:765)
> 2017-05-10 11:36:20,305-0300 INFO  (tasks/0) [storage.Image] copy source
> 20423d5e-188c-4e10-9893-588ceb81b354:d73206ed-89ba-48a9-
> 82ff-c107c1af60f0:bba1813d-9e91-4e37-bed7-40beedda8051 vol size 41943040
> destination 20423d5e-188c-4e10-9893-588ceb81b354:670a7b20-fecd-45c6-
> af5c-3c7b98258224:013d8855-4e49-4984-8266-6a5e9437dff7 apparentsize
> 41943040 (image:815)
> 2017-05-10 11:36:20,306-0300 INFO  (tasks/0) [storage.Image] image
> 670a7b20-fecd-45c6-af5c-3c7b98258224 in domain
> 20423d5e-188c-4e10-9893-588ceb81b354 has vollist [] (image:319)
> 2017-05-10 11:36:20,306-0300 INFO  (tasks/0) [storage.Image] Create
> placeholder /rhev/data-center/0001-0001-0001-0001-0311/20423
> d5e-188c-4e10-9893-588ceb81b354/images/670a7b20-fecd-45c6-af5c-3c7b98258224
> for image's volumes (image:149)
> 2017-05-10 11:36:20,392-0300 INFO  (tasks/0) [storage.Volume] Request to
> create RAW volume /rhev/data-center/0001-000
> 1-0001-0001-0311/20423d5e-188c-4e10-9893-588ceb81b35
> 4/images/670a7b20-fecd-45c6-af5c-3c7b98258224/013d8855-4e49-4984-8266-6a5e9437dff7
> with size = 20480 sectors (fileVolume:439)
> 2017-05-10 11:37:58,453-0300 INFO  (tasks/0) [storage.VolumeManifest]
> sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 
> imgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224
> volUUID = 013d8855-4e49-4984-8266-6a5e9437dff7 legality = LEGAL
> (volume:393)
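>
> (Those lines can be isolated from the full log with something along the
> lines of:
> grep -E 'd73206ed-89ba-48a9-82ff-c107c1af60f0|670a7b20-fecd-45c6-af5c-3c7b98258224' /var/log/vdsm/vdsm.log
> assuming the default VDSM log location on the host that ran the copy.)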
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users