Hi,

Thanks. I've checked the vdsm logs on all my hosts, but the only entries I can find when grepping for Volume.getInfo look like this:


2018-05-17 10:14:54,892+0100 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Volume.getInfo succeeded in 0.30 seconds (__init__:539)

I cannot find a line like yours... is there any other way to obtain those parameters? This is iSCSI-based storage, FWIW (both the source and the destination of the move).
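
One thing I could try, assuming vdsm reads its logging
configuration from /etc/vdsm/logger.conf as on a stock install, is
raising the log level to DEBUG and repeating the grep, e.g.:

# back up the config, then flip level=INFO to level=DEBUG
cp /etc/vdsm/logger.conf /etc/vdsm/logger.conf.bak
sed -i 's/^level=INFO/level=DEBUG/' /etc/vdsm/logger.conf
systemctl restart vdsmd
grep 'Volume.getInfo' /var/log/vdsm/vdsm.log

I'd rather not restart vdsmd on production hosts if there's a
simpler way, though.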

Thanks.

On 2018-05-17 10:01, Benny Zlotnik wrote:
In the vdsm log you will find the Volume.getInfo return value,
which looks like this:

2018-05-17 11:55:03,257+0300 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer] Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '5c4d2216-2eb3-4e24-b254-d5f83fde4dbe', 'voltype': 'INTERNAL', 'description': '{"DiskAlias":"vm_Disk1","DiskDescription":""}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 3, 'image': 'b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc', 'ctime': '1526543244', 'disktype': 'DATA', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid': u'7190913d-320c-4fc9-a5b3-c55b26aa30f4', 'truesize': '0', 'type': 'SPARSE', 'lease': {'path': u'/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease', 'owners': [1], 'version': 8L, 'offset': 0}} (__init__:355)

The lease path in my case is: 
/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease
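
If you want to sanity-check the lease before releasing it, sanlock
can dump it straight from storage (illustrative invocation;
substitute your own lease path):

sanlock direct dump /rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease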

Then you can look in /var/log/sanlock.log

2018-05-17 11:35:18 243132 [14847]: s2:r9 resource 5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0 for 2,9,5049

Then you can use this command to release the lease; the pid in this case is 5049:

sanlock client release -r RESOURCE -p pid
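
For example, plugging in the resource string and pid from the
sanlock.log line above (the resource is the whole
lockspace:resource:path:offset string after "resource"), the call
would be:

sanlock client release -r 5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0 -p 5049

Run it on the host that currently holds the lease.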

On Thu, May 17, 2018 at 11:52 AM, Benny Zlotnik <bzlot...@redhat.com>
wrote:

I believe you've hit this
bug: https://bugzilla.redhat.com/show_bug.cgi?id=1565040

You can try to release the lease manually using the sanlock client
command (there's an example in the comments on the bug).
Once the lease is free, the job will fail and the disk can be unlocked.

On Thu, May 17, 2018 at 11:05 AM, <nico...@devels.es> wrote:

Hi,

We're running oVirt 4.1.9 (I know it's not the recommended
version, but we can't upgrade yet) and recently we had an issue
with a Storage Domain while a VM's disk was being moved. The
Storage Domain went down for a few minutes, then came back up.

However, the disk has been stuck in a 'Migrating: 10%' state
ever since (see ss-2.png).

I ran the 'unlock_entity.sh' script to try to unlock the disk,
with these parameters:

 # PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -u engine -v b4013aba-a936-4a54-bb14-670d3a8b7c38

The disk's state changed to 'OK', but the UI still shows the
disk as migrating (see ss-1.png).

Calling the script with -t all doesn't make a difference either.

Currently, the disk is unmanageable: it cannot be deactivated,
moved, or copied, as the engine reports that a copy operation is
already running.

Could someone suggest a way to unlock this disk? I don't mind
modifying a value directly in the database; I just need the copy
operation cancelled.
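
For instance, as a sketch of the kind of query I have in mind
(assuming the engine schema keeps the volume status in
images.imagestatus, with 1 = OK and 2 = LOCKED; I'd inspect
before writing anything):

# look up the volume status for the stuck disk
sudo -u postgres psql engine -c "SELECT image_guid, imagestatus FROM images WHERE image_group_id = 'b4013aba-a936-4a54-bb14-670d3a8b7c38';"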

Thanks.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
