[ovirt-users] Re: Cleanup illegal snapshot

2019-05-15 Thread Ala Hino
Hi Markus,

A few errors are expected. Do you still see the snapshot in the GUI?
Can you please send the engine logs as well?

Thanks,
Ala

On Sun, Oct 9, 2016 at 8:33 PM, Markus Stockhausen 
wrote:

> Hi Ala,
>
> that did not help. VDSM log tells me that the delta qcow2 file is missing:
>
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
> volUUID=volUUID).getInfo()
>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
> volUUID)
>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
> self.validate()
>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
> self.validateVolumePath()
>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
> validateVolumePath
> raise se.VolumeDoesNotExist(self.volUUID)
> VolumeDoesNotExist: Volume does not exist: (u'c277351d-e2b1-4057-aafb-
> 55d4b607ebae',)
> ...
> Thread-196::ERROR::2016-10-09 19:31:07,037::utils::739::root::(wrapper)
> Unhandled exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 736, in
> wrapper
> return f(*a, **kw)
>   File "/usr/share/vdsm/virt/vm.py", line 5264, in run
> self.update_base_size()
>   File "/usr/share/vdsm/virt/vm.py", line 5257, in update_base_size
> self.drive.imageID, topVolUUID)
>   File "/usr/share/vdsm/virt/vm.py", line 5191, in _getVolumeInfo
> (domainID, volumeID))
> StorageUnavailableError: Unable to get volume info for domain
> 47202573-6e83-42fd-a274-d11f05eca2dd volume c277351d-e2b1-4057-aafb-
> 55d4b607ebae
>
> Do you have any idea?
>
> Markus
> 
>
> *From:* Ala Hino [ah...@redhat.com]
> *Sent:* Thursday, October 6, 2016 12:29
> *To:* Markus Stockhausen
>
> *Subject:* Re: [ovirt-users] Cleanup illegal snapshot
>
> Indeed, retry the live merge. There is no harm in retrying a live merge. As
> mentioned, if the image was already deleted on the storage side, retrying the
> live merge should clean up the engine side.
>
> On Thu, Oct 6, 2016 at 1:06 PM, Markus Stockhausen <
> stockhau...@collogia.de> wrote:
>
>> Hi,
>>
>> we are on oVirt 4.0.4. As explained, the situation is as follows:
>>
>> - On disk we have the base image and the delta qcow2 file
>> - Qemu runs only on the base image
>> - The snapshot in Qemu is tagged as illegal
>>
>> So you say: "Just retry a live merge and everything will clean up."
>> Did I get it right?
>>
>> Markus
>>
>> ---
>>
>> *From:* Ala Hino [ah...@redhat.com]
>> *Sent:* Thursday, October 6, 2016 11:21
>> *To:* Markus Stockhausen
>> *Cc:* Ovirt Users; Nir Soffer; Adam Litke
>>
>> *Subject:* Re: [ovirt-users] Cleanup illegal snapshot
>>
>> Hi Markus,
>>
>> What's the version that you are using?
>> In oVirt 3.6.6, illegal snapshots could be removed by retrying to live
>> merge them again. Assuming the previous live merge of the snapshot
>> successfully completed but the engine failed to get the result, the second
>> live merge should do the necessary cleanups at the engine side. See
>> https://bugzilla.redhat.com/1323629
>>
>> Hope this helps,
>> Ala
>>
>> On Thu, Oct 6, 2016 at 11:53 AM, Markus Stockhausen <
>> stockhau...@collogia.de> wrote:
>>
>>> Hi Ala,
>>>
>>> > From: Adam Litke [ali...@redhat.com]
>>> > Sent: Friday, September 30, 2016 15:54
>>> > To: Markus Stockhausen
>>> > Cc: Ovirt Users; Ala Hino; Nir Soffer
>>> > Subject: Re: [ovirt-users] Cleanup illegal snapshot
>>> >
>>> > On 30/09/16 05:47 +, Markus Stockhausen wrote:
>>> > >Hi,
>>> > >
>>> > >if a OVirt snapshot is illegal we might have 2 situations.
>>> > >
>>> > >1) qemu is still using it - lsof shows qemu access to the base raw
>>> and the
>>> > >delta qcow2 file. -> E.g. a previous live merge failed. In the past we
>>> > >successfully solved that situation by setting the status of the delta
>>> ima

[ovirt-users] Re: Cleanup illegal snapshot

2019-05-15 Thread Ala Hino
Hi Markus,

What's the version that you are using?
In oVirt 3.6.6, illegal snapshots could be removed by retrying the live
merge. Assuming the previous live merge of the snapshot
successfully completed but the engine failed to get the result, the second
live merge should do the necessary cleanups at the engine side. See
https://bugzilla.redhat.com/1323629

Hope this helps,
Ala

On Thu, Oct 6, 2016 at 11:53 AM, Markus Stockhausen  wrote:

> Hi Ala,
>
> > From: Adam Litke [ali...@redhat.com]
> > Sent: Friday, September 30, 2016 15:54
> > To: Markus Stockhausen
> > Cc: Ovirt Users; Ala Hino; Nir Soffer
> > Subject: Re: [ovirt-users] Cleanup illegal snapshot
> >
> > On 30/09/16 05:47 +, Markus Stockhausen wrote:
> > >Hi,
> > >
> > >if an oVirt snapshot is illegal we might have 2 situations.
> > >
> > >1) qemu is still using it - lsof shows qemu access to the base raw and the
> > >delta qcow2 file. -> E.g. a previous live merge failed. In the past we
> > >successfully solved that situation by setting the status of the delta image
> > >in the database to OK.
> > >
> > >2) qemu is no longer using it. lsof shows qemu access only to the base
> > >raw file -> E.g. a previous live merge succeeded in qemu but oVirt did
> > >not recognize it.
> > >
> > >How to clean up the 2nd situation?
> >
> > It seems that you will have to first clean up the engine database to
> > remove references to the snapshot that no longer exists.  Then you
> > will need to remove the unused qcow2 volume.
> >
> > Unfortunately I cannot provide safe instructions for modifying the
> > database but maybe Ala Hino (added to CC:) will be able to help with
> > that.
>
> Do you have some tip for me?
>
> >
> > Once you have fixed the DB you should be able to delete the volume
> > using a vdsm verb on the SPM host:
> >
> > # vdsClient -s 0 deleteVolume
>



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O2Z7RUP4GNZ3CY3FIVE3AQZCQ3J4NNOZ/


[ovirt-users] Re: Cannot acquire Lock .... snapshot error

2018-06-27 Thread Ala Hino
The listed leases are broken; there should be no effect on the VM.

On Wed, Jun 27, 2018, 4:19 PM Enrico Becchetti 
wrote:

> Hi all,
> after update vdsm I run this command:
>
> *[root@infm-vm04 ~]# vdsm-tool -v check-volume-leases*
> *WARNING: Make sure there are no running storage operations.*
>
> *Do you want to check volume leases? [yes,NO] yes*
>
> *Checking active storage domains. This can take several minutes, please
> wait.*
>
> After that I saw some volumes with issue:
>
> *The following volume leases need repair:*
>
> *- domain: 47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5*
>
> *  - image: 4b2a6552-847c-43f4-a180-b037d0b93a30*
> *- volume: 6eb8caf0-b120-40e5-86b5-405f15d1245a*
> *  - image: 871196b2-9d8b-422f-9a3e-be54e100dc5c*
> *- volume: 861ff7dd-a01c-47f7-8a01-95bc766c2607*
> *  - image: 267b8b8c-da09-44ef-81b3-065dfa2e7085*
> *- volume: d5f3158a-87ac-4c02-84ba-fcb86b8688a0*
> *  - image: c5611862-6504-445e-a6c8-f1e1a95b5df7*
> *- volume: e156ac2e-09ac-4e1e-a139-17fa374a96d4*
> *  - image: e765c9c4-2ef9-4c8f-a573-8cd2b0a0a3a2*
> *- volume: 47d81cbe-598a-402a-9795-1d046c45b7b1*
> *  - image: ab88a08c-910d-44dd-bdf8-8242001ba527*
> *- volume: 86c72239-525f-4b0b-9aa6-763fc71340bc*
> *  - image: 5e8c4620-b6a5-4fc6-a5bb-f209173d186c*
> *- volume: 0f52831b-ec35-4140-9a8c-fa24c2647f17*
> *  - image: a67809fc-b830-4ea3-af66-b5b9285b4924*
> *- volume: 26c6bdd7-1382-4e8e-addc-dcd3898b317f*
>
> *Do you want to repair the leases? [yes,NO] *
>
> What happens if I try to repair them? Is there any impact on
> my running VM?
>
> Thanks a lot !!!
> Best Regards
> Enrico
>
>
>
>
> Il 26/06/2018 15:32, Ala Hino ha scritto:
>
> You are running vdsm-4.20.17, and the tool introduced in vdsm-4.20.24.
> You will have to upgrade vdsm to be able to use the tool.
>
> On Tue, Jun 26, 2018 at 4:29 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>> Hi,
>> I run this command from my SPM , Centos 7.4.1708:
>>
>> [root@infn-vm05 vdsm]# rpm -qa | grep -i vdsm
>> vdsm-hook-vmfex-dev-4.20.17-1.el7.centos.noarch
>> vdsm-python-4.20.17-1.el7.centos.noarch
>> vdsm-hook-fcoe-4.20.17-1.el7.centos.noarch
>> vdsm-common-4.20.17-1.el7.centos.noarch
>> vdsm-jsonrpc-4.20.17-1.el7.centos.noarch
>> vdsm-hook-ethtool-options-4.20.17-1.el7.centos.noarch
>> vdsm-hook-openstacknet-4.20.17-1.el7.centos.noarch
>> vdsm-http-4.20.17-1.el7.centos.noarch
>> vdsm-client-4.20.17-1.el7.centos.noarch
>> vdsm-gluster-4.20.17-1.el7.centos.noarch
>> vdsm-hook-vfio-mdev-4.20.17-1.el7.centos.noarch
>> vdsm-api-4.20.17-1.el7.centos.noarch
>> vdsm-network-4.20.17-1.el7.centos.x86_64
>> vdsm-yajsonrpc-4.20.17-1.el7.centos.noarch
>> vdsm-4.20.17-1.el7.centos.x86_64
>> vdsm-hook-vhostmd-4.20.17-1.el7.centos.noarch
>>
>> Thanks !!!
>> Enrico
>>
>>
>> Il 26/06/2018 15:21, Ala Hino ha scritto:
>>
>> Hi Enrico,
>>
>> What's the vdsm version that you are using?
>>
>> The tool introduced in vdsm 4.20.24.
>>
>> On Tue, Jun 26, 2018 at 3:51 PM, Enrico Becchetti <
>> enrico.becche...@pg.infn.it> wrote:
>>
>>> Dear Ala,
>>> if you have a few minutes for me I'd like to ask you to read my issue.
>>> It's a strange problem because my vm works fine but I can't delete its
>>> snapshoot.
>>> Thanks a lot
>>> Best Regards
>>> Enrico
>>>
>>>
>>>  Forwarded Message 
>>> Subject: [ovirt-users] Re: Cannot acquire Lock  snapshot error
>>> Date: Mon, 25 Jun 2018 14:20:21 +0200
>>> From: Enrico Becchetti 
>>> 
>>> To: Nir Soffer  
>>> CC: users  
>>>
>>>
>>>  Dear Friends ,
>>> to fix my problem I've try vdsm-tool command but it's seem an error:
>>>
>>> [root@infn-vm05 vdsm]# vdsm-tool check-volume-lease
>>> Usage: /usr/bin/vdsm-tool [options]  [arguments]
>>> Valid options:
>>> ..
>>>
>>> as you can see there isn't check-volumes-option  and my ovirt engine is
>>> already at 4.2.
>>> Any other ideas ?
>>> Thanks a lot !
>>> Best Regards
>>> Enrico
>>>
>>>
>>>
>>> Il 22/06/2018 17:46, Nir Soffer ha scritto:
>>>
>>> On Fri, Jun 22, 2018 at 3:13 PM Enrico Becchetti <
>>> enrico.becche...@pg.infn.it> wrote:
>>>
>>>>  Dear All,
>>>> my ovirt 4.2.1.7-1.el7.centos has three hypervisors, lvm storage and
>>>> virtiual machine with
>>>> 

[ovirt-users] Re: Cannot acquire Lock .... snapshot error

2018-06-26 Thread Ala Hino
You are running vdsm-4.20.17, and the tool was introduced in vdsm-4.20.24.
You will have to upgrade vdsm to be able to use the tool.

On Tue, Jun 26, 2018 at 4:29 PM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

> Hi,
> I run this command from my SPM , Centos 7.4.1708:
>
> [root@infn-vm05 vdsm]# rpm -qa | grep -i vdsm
> vdsm-hook-vmfex-dev-4.20.17-1.el7.centos.noarch
> vdsm-python-4.20.17-1.el7.centos.noarch
> vdsm-hook-fcoe-4.20.17-1.el7.centos.noarch
> vdsm-common-4.20.17-1.el7.centos.noarch
> vdsm-jsonrpc-4.20.17-1.el7.centos.noarch
> vdsm-hook-ethtool-options-4.20.17-1.el7.centos.noarch
> vdsm-hook-openstacknet-4.20.17-1.el7.centos.noarch
> vdsm-http-4.20.17-1.el7.centos.noarch
> vdsm-client-4.20.17-1.el7.centos.noarch
> vdsm-gluster-4.20.17-1.el7.centos.noarch
> vdsm-hook-vfio-mdev-4.20.17-1.el7.centos.noarch
> vdsm-api-4.20.17-1.el7.centos.noarch
> vdsm-network-4.20.17-1.el7.centos.x86_64
> vdsm-yajsonrpc-4.20.17-1.el7.centos.noarch
> vdsm-4.20.17-1.el7.centos.x86_64
> vdsm-hook-vhostmd-4.20.17-1.el7.centos.noarch
>
> Thanks !!!
> Enrico
>
>
> Il 26/06/2018 15:21, Ala Hino ha scritto:
>
> Hi Enrico,
>
> What's the vdsm version that you are using?
>
> The tool introduced in vdsm 4.20.24.
>
> On Tue, Jun 26, 2018 at 3:51 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>> Dear Ala,
>> if you have a few minutes for me I'd like to ask you to read my issue.
>> It's a strange problem because my vm works fine but I can't delete its
>> snapshoot.
>> Thanks a lot
>> Best Regards
>> Enrico
>>
>>
>>  Forwarded Message 
>> Subject: [ovirt-users] Re: Cannot acquire Lock  snapshot error
>> Date: Mon, 25 Jun 2018 14:20:21 +0200
>> From: Enrico Becchetti 
>> 
>> To: Nir Soffer  
>> CC: users  
>>
>>
>>  Dear Friends ,
>> to fix my problem I've try vdsm-tool command but it's seem an error:
>>
>> [root@infn-vm05 vdsm]# vdsm-tool check-volume-lease
>> Usage: /usr/bin/vdsm-tool [options]  [arguments]
>> Valid options:
>> ..
>>
>> as you can see there isn't check-volumes-option  and my ovirt engine is
>> already at 4.2.
>> Any other ideas ?
>> Thanks a lot !
>> Best Regards
>> Enrico
>>
>>
>>
>> Il 22/06/2018 17:46, Nir Soffer ha scritto:
>>
>> On Fri, Jun 22, 2018 at 3:13 PM Enrico Becchetti <
>> enrico.becche...@pg.infn.it> wrote:
>>
>>>  Dear All,
>>> my ovirt 4.2.1.7-1.el7.centos has three hypervisors, lvm storage and
>>> virtiual machine with
>>> ovirt-engine. All works fine but with one vm when I try to remove its
>>> snapshot I have
>>> this error:
>>>
>>> 2018-06-22 07:35:48,155+0200 INFO  (jsonrpc/5) [vdsm.api] START
>>> prepareMerge(spUUID=u'18d57688-6ed4-43b8-bd7c-0665b55950b7',
>>> subchainInfo={u'img_id': u'c5611862-6504-445e-a6c8-f1e1a95b5df7',
>>> u'sd_id': u'47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5', u'top_id':
>>> u'0e6f7512-871d-4645-b9c6-320ba7e3bee7', u'base_id':
>>> u'e156ac2e-09ac-4e1e-a139-17fa374a96d4'}) from=:::10.0.0.46,53304,
>>> flow_id=07011450-2296-4a13-a9ed-5d5d2b91be98,
>>> task_id=87f95d85-cc3d-4f29-9883-a4dbb3808f88 (api:46)
>>> 2018-06-22 07:35:48,406+0200 INFO  (tasks/3) [storage.merge] Preparing
>>> subchain >> img_id=c5611862-6504-445e-a6c8-f1e1a95b5df7,
>>> top_id=0e6f7512-871d-4645-b9c6-320ba7e3bee7,
>>> base_id=e156ac2e-09ac-4e1e-a139-17fa374a96d4 base_generation=None at
>>> 0x7fcf84ae2510> for merge (merge:177)
>>> 2018-06-22 07:35:48,614+0200 INFO  (tasks/3) [storage.SANLock] Acquiring
>>> Lease(name='e156ac2e-09ac-4e1e-a139-17fa374a96d4',
>>> path='/dev/47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5/leases',
>>> offset=115343360) for host id 1 (clusterlock:377)
>>> 2018-06-22 07:35:48,634+0200 ERROR (tasks/3) [storage.guarded] Error
>>> acquiring lock >> ns=04_lease_47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5,
>>> name=e156ac2e-09ac-4e1e-a139-17fa374a96d4, mode=exclusive at
>>> 0x7fcfe09ddf90> (guarded:96)
>>> AcquireLockFailure: Cannot obtain lock: 
>>> "id=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5,
>>> rc=-227, out=Cannot acquire 
>>> Lease(name='e156ac2e-09ac-4e1e-a139-17fa374a96d4',
>>> path='/dev/47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5/leases',
>>> offset=115343360), err=(-227, 'Sanlock resource not acquired', 'Lease
>>> resource name is incorrect')"
>>> 2018-06-22 07:35:56,881+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH
>>> getAllTasks

[ovirt-users] Re: Cannot acquire Lock .... snapshot error

2018-06-26 Thread Ala Hino
Hi Enrico,

What's the vdsm version that you are using?

The tool was introduced in vdsm 4.20.24.

On Tue, Jun 26, 2018 at 3:51 PM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

> Dear Ala,
> if you have a few minutes for me I'd like to ask you to read my issue.
> It's a strange problem because my VM works fine but I can't delete its
> snapshot.
> Thanks a lot
> Best Regards
> Enrico
>
>
>  Forwarded Message 
> Subject: [ovirt-users] Re: Cannot acquire Lock  snapshot error
> Date: Mon, 25 Jun 2018 14:20:21 +0200
> From: Enrico Becchetti 
> 
> To: Nir Soffer  
> CC: users  
>
>
>  Dear Friends,
> to fix my problem I've tried the vdsm-tool command but it seems to give an error:
>
> [root@infn-vm05 vdsm]# vdsm-tool check-volume-lease
> Usage: /usr/bin/vdsm-tool [options]  [arguments]
> Valid options:
> ..
>
> as you can see there isn't a check-volume-leases option, and my oVirt engine is
> already at 4.2.
> Any other ideas ?
> Thanks a lot !
> Best Regards
> Enrico
>
>
>
> Il 22/06/2018 17:46, Nir Soffer ha scritto:
>
> On Fri, Jun 22, 2018 at 3:13 PM Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>>  Dear All,
>> my oVirt 4.2.1.7-1.el7.centos has three hypervisors, LVM storage and a
>> virtual machine with
>> ovirt-engine. All works fine, but with one VM, when I try to remove its
>> snapshot I get
>> this error:
>>
>> 2018-06-22 07:35:48,155+0200 INFO  (jsonrpc/5) [vdsm.api] START
>> prepareMerge(spUUID=u'18d57688-6ed4-43b8-bd7c-0665b55950b7',
>> subchainInfo={u'img_id': u'c5611862-6504-445e-a6c8-f1e1a95b5df7',
>> u'sd_id': u'47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5', u'top_id':
>> u'0e6f7512-871d-4645-b9c6-320ba7e3bee7', u'base_id':
>> u'e156ac2e-09ac-4e1e-a139-17fa374a96d4'}) from=:::10.0.0.46,53304,
>> flow_id=07011450-2296-4a13-a9ed-5d5d2b91be98, 
>> task_id=87f95d85-cc3d-4f29-9883-a4dbb3808f88
>> (api:46)
>> 2018-06-22 07:35:48,406+0200 INFO  (tasks/3) [storage.merge] Preparing
>> subchain > img_id=c5611862-6504-445e-a6c8-f1e1a95b5df7, 
>> top_id=0e6f7512-871d-4645-b9c6-320ba7e3bee7,
>> base_id=e156ac2e-09ac-4e1e-a139-17fa374a96d4 base_generation=None at
>> 0x7fcf84ae2510> for merge (merge:177)
>> 2018-06-22 07:35:48,614+0200 INFO  (tasks/3) [storage.SANLock] Acquiring
>> Lease(name='e156ac2e-09ac-4e1e-a139-17fa374a96d4',
>> path='/dev/47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5/leases',
>> offset=115343360) for host id 1 (clusterlock:377)
>> 2018-06-22 07:35:48,634+0200 ERROR (tasks/3) [storage.guarded] Error
>> acquiring lock > name=e156ac2e-09ac-4e1e-a139-17fa374a96d4, mode=exclusive at
>> 0x7fcfe09ddf90> (guarded:96)
>> AcquireLockFailure: Cannot obtain lock: 
>> "id=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5,
>> rc=-227, out=Cannot acquire 
>> Lease(name='e156ac2e-09ac-4e1e-a139-17fa374a96d4',
>> path='/dev/47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5/leases',
>> offset=115343360), err=(-227, 'Sanlock resource not acquired', 'Lease
>> resource name is incorrect')"
>> 2018-06-22 07:35:56,881+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH
>> getAllTasksStatuses return={'allTasksStatus': 
>> {'87f95d85-cc3d-4f29-9883-a4dbb3808f88':
>> {'code': 651, 'message': 'Cannot obtain lock: 
>> "id=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5,
>> rc=-227, out=Cannot acquire 
>> Lease(name=\'e156ac2e-09ac-4e1e-a139-17fa374a96d4\',
>> path=\'/dev/47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5/leases\',
>> offset=115343360), err=(-227, \'Sanlock resource not acquired\', \'Lease
>> resource name is incorrect\')"', 'taskState': 'finished', 'taskResult':
>> 'cleanSuccess', 'taskID': '87f95d85-cc3d-4f29-9883-a4dbb3808f88'}}}
>> from=:::10.0.0.46,53136, task_id=d0e2f4e3-90cb-43c6-aa08-98d1f7efb1bd
>> (api:52)
>>
>
> The issue is corrupted lease for this volume:
>
> AcquireLockFailure: Cannot obtain lock: 
> "id=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5,
> rc=-227, out=Cannot acquire Lease(name='e156ac2e-09ac-4e1e-a139-17fa374a96d4',
> path='/dev/47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5/leases',
> offset=115343360), err=(-227, 'Sanlock resource not acquired', 'Lease
> resource name is incorrect')"
>
> The root cause is faulty merge code in oVirt < 4.1, which created volume
> leases with an incorrect name. These corrupted leases were not detected until
> you upgraded to oVirt >= 4.1, because we started to use volume leases for
> storage operations.
>
> The fix is to run
>
> vdsm-tool check-volume-leases
>
> This will check and repair corrupted leases.
>
> Adding Ala to add more info if needed.
>
> Nir
>
>
> --
> ___
>
> Enrico BecchettiServizio di Calcolo e Reti
>
> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
> Via Pascoli,c/o Dipartimento di Fisica  06123 Perugia (ITALY)
> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it
> __
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 

[ovirt-users] Re: Failed to delete snapshot '' for VM 'vmname'.

2018-06-26 Thread Ala Hino
Hi,

Can you please send us the logs of the engine and the host(s)?
Does the original VM work, with only the log message bothering you?

On Tue, Jun 26, 2018 at 11:25 AM,  wrote:

> I tried cloning the VM and deleting the original so maybe it would also
> delete the related tasks... but it didn't, instead of:
>
>Failed to delete snapshot '' for VM 'vmname'.
>
> It now shows:
>
>Failed to delete snapshot '' for VM ''.
>
> Any tip on how to solve this, please?
>
> Thanks
>
> El 2018-06-25 13:28, nico...@devels.es escribió:
>
>> Yes, it returned 0 rows:
>>
>> engine=#  select command_parameters from command_entities where
>> command_params_class =
>> 'org.ovirt.engine.core.common.action.RemoveSnapshotParameters' and
>> status = 'ACTIVE';
>>  command_parameters
>> 
>> (0 rows)
>>
>> El 2018-06-25 11:55, Ala Hino escribió:
>>
>>> There is a correlationId field, I marked it below.
>>> Can you please run the following statement and send the output?
>>>
>>> select command_parameters from command_entities where
>>> command_params_class =
>>> 'org.ovirt.engine.core.common.action.RemoveSnapshotParameters' and
>>> status = 'ACTIVE'
>>>
>>> On Mon, Jun 25, 2018 at 1:21 PM,  wrote:
>>>
>>> There's no such field in the output. There's a similar one called
>>>> commandId (maybe because this is 4.1.9?).
>>>>
>>>> An output has this format:
>>>>
>>>>  {
>>>>    "@class" : "org.ovirt.engine.core.common.action.RemoveDiskSnapshotsParameters",
>>>>    "commandId" : [ "org.ovirt.engine.core.compat.Guid", {
>>>>      "uuid" : "a13f9eb4-1b0b-4ea3-924c-bd519af7853b"
>>>>    } ],
>>>>    "parametersCurrentUser" : {
>>>>      "@class" : "org.ovirt.engine.core.common.businessentities.aaa.DbUser",
>>>>      "id" : [ "org.ovirt.engine.core.compat.Guid", {
>>>>        "uuid" : "5e0f9455-e9b5-4445-adb0-0058fc604bef"
>>>>      } ],
>>>>      "externalId" : "fdfc627c-d875-11e0-90f0-83df133b58cc",
>>>>      "domain" : "internal",
>>>>      "namespace" : "*",
>>>>      "loginName" : "admin",
>>>>      "firstName" : "admin",
>>>>      "lastName" : "",
>>>>      "department" : "",
>>>>      "email" : "",
>>>>      "note" : "",
>>>>      "groupNames" : [ "java.util.ArrayList", [ ] ],
>>>>      "groupIds" : [ "java.util.ArrayList", [ ] ],
>>>>      "admin" : true,
>>>>      "group" : false
>>>>    },
>>>>    "compensationEnabled" : false,
>>>>    "parentCommand" : "Unknown",
>>>>    "commandType" : "Unknown",
>>>>    "multipleAction" : true,
>>>>    "entityInfo" : null,
>>>>    "taskG

[ovirt-users] Re: snapshot going to locked state and stays with it

2018-06-25 Thread Ala Hino
When creating a snapshot, we lock the snapshot until the operation is
completed.
The lock is performed in order to prevent executing operations that may
affect the snapshot creation.

I requested the log in order to see what went wrong and why the lock wasn't
released.

On Mon, Jun 25, 2018 at 2:02 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:

> I am not able to get the logs. oVirt version: 4.2.1
>
> But technically, could you let me know the reason for the snapshot's locked
> state?
>
> Thanks,
> Hari
>
> On Mon, Jun 25, 2018 at 1:37 PM, Ala Hino  wrote:
>
>> Hi Hari,
>>
>> Could you please send us the log files of the engine and the hosts?
>> In addition, what is the version that you using?
>>
>> On Mon, Jun 25, 2018 at 10:47 AM, Hari Prasanth Loganathan <
>> hariprasant...@msystechnologies.com> wrote:
>>
>>> Hi Team,
>>>
>>> I took a snapshot using oVirt and it stays in the LOCKED state for half
>>> an hour.
>>>
>>> Date: Jun 25, 2018, 1:06:58 PM | Status: *LOCKED* | Memory: false |
>>> Description: Immediate2018625-13655 | Defined Memory: 1024MB |
>>> Physical Memory Guaranteed: 1024MB | Number of CPU Cores: 1 (1:1:1)
>>>
>>>
>>> 1) What could be the reason for this LOCKED state?
>>> 2) How can I recover from it?
>>>
>>> Thanks,
>>> Hari
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>> y/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archiv
>>> es/list/users@ovirt.org/message/Z7USXN56POKOMIBZJPMDZNL5NGWFZY75/
>>>
>>>
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WTG35OAKFOZ4VWNCJEEJF5EMNT6I35DJ/


[ovirt-users] Re: Failed to delete snapshot '' for VM 'vmname'.

2018-06-25 Thread Ala Hino
> 0"
>   } ],
>   "forceDelete" : false,
>   "storageDomainId" : [ "org.ovirt.engine.core.compat.Guid", {
>     "uuid" : "fc2a49fc-fc53-4040-9d61-9dc1e47f51f4"
>   } ],
>   "isInternal" : false,
>   "quotaId" : null,
>   "diskProfileId" : null,
>   "imageId" : [ "org.ovirt.engine.core.compat.Guid", {
>     "uuid" : "----"
>   } ],
>   "destinationImageId" : [ "org.ovirt.engine.core.compat.Guid", {
>     "uuid" : "----"
>   } ],
>   "diskAlias" : "ubuntu-1604-xilinx_Disk1",
>   "description" : null,
>   "oldLastModifiedValue" : null,
>   "vmSnapshotId" : null,
>   "imageGroupID" : [ "org.ovirt.engine.core.compat.Guid", {
>     "uuid" : "5e7df75c-57f1-4cee-a178-94a9946482a6"
>   } ],
>   "importEntity" : false,
>   "leaveLocked" : false,
>   "wipeAfterDelete" : false,
>   "imageIds" : [ [ "org.ovirt.engine.core.compat.Guid", {
>     "uuid" : "10f8adad-7b21-4d39-9583-00bbcc42266a"
>   } ] ],
>   "imageIdsSorted" : true,
>   "childImageIds" : null,
>   "snapshotNames" : [ "java.util.LinkedList", [ "Auto-Snapshot para clonado" ] ],
>   "liveMerge" : false,
>   "containerId" : [ "org.ovirt.engine.core.compat.Guid", {
>     "uuid" : "b0f24a9f-b1ae-4357-b06a-06dbb8f440e6"
>   } ],
>   "sessionId" : "Yl+fI6EgRGm1N2WRLK/Kd+nbf0UhVlwsSozQnMCnPIFhayyaTATOlhSiDwmkCUHCunNIHcIvi3pM+9wFUe7m5Q==",
>   "shouldBeLogged" : true,
>   "executionReason" : "REGULAR_FLOW",
>   "transactionScopeOption" : "Required"
> }
>
> El 2018-06-25 10:33, Ala Hino escribió:
>
>> I am looking for something similar to the following (in the value of
>> command_parameters):
>>
>>   "snapshotId" : [ "org.ovirt.engine.core.compat.Guid", {
>> "uuid" : "d01a50fe-503d-4081-90d3-a71e464c5c6c"
>>   }
>>
>> Do you see this correlationId 91430fc5-284d-4c26-8d8d-b7bf4053a7e4 in
>> any command_parameters?
>>
>> On Mon, Jun 25, 2018 at 12:28 PM,  wrote:
>>
>> Hi Ala,
>>>
>>> All vmSnapshotId fields are null.
>>>
>>> Could it be that this could be solved by running what there's on
>>> comment 19 in [1]?
>>>
>>>   [1]: https://bugzilla.redhat.com/show_bug.cgi?id=145#c19 [1]
>>>
>>> El 2018-06-25 10:20, Ala Hino escribió:
>>> I understand.
>>>
>>> Back to command entities.
>>> For each active command_parameter_class
>>> org.ovirt.engine.core.common.action.RemoveSnapshotParameters, you
>>> will
>>> find the command params in command_parameters column. One of these
>>> params, is the snapshotId. You need to check whether there is a
>>> snapshot with that Id in snapshots table.
>>>
>>> On Mon, Jun 25, 2018 at 11:48 AM,  wrote:
&

[ovirt-users] Re: Failed to delete snapshot '' for VM 'vmname'.

2018-06-25 Thread Ala Hino
I am looking for something similar to the following (in the value of
command_parameters):

  "snapshotId" : [ "org.ovirt.engine.core.compat.Guid", {
"uuid" : "d01a50fe-503d-4081-90d3-a71e464c5c6c"
  }

Do you see this correlationId 91430fc5-284d-4c26-8d8d-b7bf4053a7e4 in any
command_parameters?
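
For example, a query along these lines could locate it (only a sketch -- it assumes
the correlation ID appears verbatim inside the stored parameters):

select command_id, command_params_class, status
from command_entities
where command_parameters like '%91430fc5-284d-4c26-8d8d-b7bf4053a7e4%';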

On Mon, Jun 25, 2018 at 12:28 PM,  wrote:

> Hi Ala,
>
> All vmSnapshotId fields are null.
>
> Could it be that this could be solved by running what there's on comment
> 19 in [1]?
>
>   [1]: https://bugzilla.redhat.com/show_bug.cgi?id=145#c19
>
> El 2018-06-25 10:20, Ala Hino escribió:
>
>> I understand.
>>
>> Back to command entities.
>> For each active command_parameter_class
>> org.ovirt.engine.core.common.action.RemoveSnapshotParameters, you will
>> find the command params in command_parameters column. One of these
>> params, is the snapshotId. You need to check whether there is a
>> snapshot with that Id in snapshots table.
>>
>> On Mon, Jun 25, 2018 at 11:48 AM,  wrote:
>>
>> Currently there are no merge related commands running, if you mean
>>> that we're aware of some merge should be happening (it shouldn't). I
>>> restarted the engine but the event is still showing up every 10
>>> seconds.
>>>
>>> El 2018-06-25 09:39, Ala Hino escribió:
>>> Do you have any running merge related commands now?
>>> Is it possible for you to restart the engine? I want to see if
>>> restarting the engine, while the merge commands already manually
>>> repaired, may cause that log message to stop appearing.
>>>
>>> On Mon, Jun 25, 2018 at 11:06 AM,  wrote:
>>>
>>> So if I have this entry, for example, what should I do next?
>>>
>>>  cc7764c5-dbc3-4886-bdd6-dabcc756cf6a |  235 | cc7764c5-dbc3-4886-bdd6-dabcc756cf6a | {...} |
>>> org.ovirt.engine.core.common.action.RemoveDiskSnapshotsParameters | 2018-06-15 09:39:16.846795+01 |
>>> ACTIVE | t | f | {...} | org.ovirt.engine.core.common.action.VdcReturnValueBase | t |
>>> 5e0f9455-e9b5-4445-adb0-0058fc604bef | ---- | { } | 699165 | {...}
>>>
>>> (the two JSON columns are line-wrapped by psql; the readable fragments include
>>> "@class" : "org.ovirt.engine.core.common.action.RemoveDiskSnapshotsParameters",
>>> "commandId" uuid "cc7764c5-dbc3-4886-bdd6-dabcc756cf6a",
>>> "jobId" uuid "2f1ee2a3-fa2f-48b1-924b-9bfa6497f3b3",
>>> "validationMessages" : [ "VAR__ACTION__REMOVE", "VAR__TYPE__DISK__SNAPSHOT", ... ],
>>> "valid" : true, "succeeded" : true, "isSyncronious" : true, "stepId" : null, ...)

[ovirt-users] Re: Failed to delete snapshot '' for VM 'vmname'.

2018-06-25 Thread Ala Hino
I understand.

Back to command entities.
For each ACTIVE command whose command_params_class is
org.ovirt.engine.core.common.action.RemoveSnapshotParameters, you will find
the command params in the command_parameters column. One of these params is
the snapshotId. You need to check whether there is a snapshot with that ID
in the snapshots table.
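
For example (only a sketch -- the snapshots table and its snapshot_id, status and
description columns are assumptions about the engine schema, and the UUID in the
second query is just a placeholder):

-- 1. list the ACTIVE remove-snapshot commands and note the "snapshotId" uuid
--    inside command_parameters
select command_id, command_parameters
from command_entities
where command_params_class = 'org.ovirt.engine.core.common.action.RemoveSnapshotParameters'
and status = 'ACTIVE';

-- 2. check whether a snapshot with that id still exists
select snapshot_id, status, description
from snapshots
where snapshot_id = 'd01a50fe-503d-4081-90d3-a71e464c5c6c';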

On Mon, Jun 25, 2018 at 11:48 AM,  wrote:

> Currently there are no merge-related commands running, if you mean whether
> we're aware of some merge that should be happening (it shouldn't be). I restarted
> the engine but the event is still showing up every 10 seconds.
>
> El 2018-06-25 09:39, Ala Hino escribió:
>
>> Do you have any running merge related commands now?
>> Is it possible for you to restart the engine? I want to see if
>> restarting the engine, while the merge commands already manually
>> repaired, may cause that log message to stop appearing.
>>
>> On Mon, Jun 25, 2018 at 11:06 AM,  wrote:
>>
>> So if I have this entry, for example, what should I do next?
>>>
>>>  cc7764c5-dbc3-4886-bdd6-dabcc756cf6a |  235 | cc7764c5-dbc3-4886-bdd6-dabcc756cf6a | {...} |
>>> org.ovirt.engine.core.common.action.RemoveDiskSnapshotsParameters | 2018-06-15 09:39:16.846795+01 |
>>> ACTIVE | t | f | {...} | org.ovirt.engine.core.common.action.VdcReturnValueBase | t |
>>> 5e0f9455-e9b5-4445-adb0-0058fc604bef | ---- | { } | 699165 | {...}
>>>
>>> (the two JSON columns are line-wrapped by psql; the readable fragments include
>>> "@class" : "org.ovirt.engine.core.common.action.RemoveDiskSnapshotsParameters",
>>> "commandId" uuid "cc7764c5-dbc3-4886-bdd6-dabcc756cf6a",
>>> "jobId" uuid "2f1ee2a3-fa2f-48b1-924b-9bfa6497f3b3",
>>> "validationMessages" : [ "VAR__ACTION__REMOVE", "VAR__TYPE__DISK__SNAPSHOT", ... ],
>>> "valid" : true, "succeeded" : true, "isSyncronious" : true, "stepId" : null,
>>> "parametersCurrentUser" : { ... }, "description" : "", ...)

[ovirt-users] Re: Failed to delete snapshot '' for VM 'vmname'.

2018-06-25 Thread Ala Hino
Do you have any running merge-related commands now?
Is it possible for you to restart the engine? I want to see if restarting
the engine, now that the merge commands have already been manually repaired,
makes that log message stop appearing.

On Mon, Jun 25, 2018 at 11:06 AM,  wrote:

> So if I have this entry, for example, what should I do next?
>
>  cc7764c5-dbc3-4886-bdd6-dabcc756cf6a |  235 | cc7764c5-dbc3-4886-bdd6-dabcc756cf6a | {...} |
> org.ovirt.engine.core.common.action.RemoveDiskSnapshotsParameters | 2018-06-15 09:39:16.846795+01 |
> ACTIVE | t | f | {...} | org.ovirt.engine.core.common.action.VdcReturnValueBase | t |
> 5e0f9455-e9b5-4445-adb0-0058fc604bef | ---- | { } | 699165 | {...}
>
> (the two JSON columns are line-wrapped by psql; the readable fragments include
> "@class" : "org.ovirt.engine.core.common.action.RemoveDiskSnapshotsParameters",
> "commandId" uuid "cc7764c5-dbc3-4886-bdd6-dabcc756cf6a",
> "jobId" uuid "2f1ee2a3-fa2f-48b1-924b-9bfa6497f3b3",
> "validationMessages" : [ "VAR__ACTION__REMOVE", "VAR__TYPE__DISK__SNAPSHOT", ... ],
> "valid" : true, "succeeded" : true, "isSyncronious" : true, "stepId" : null,
> "executionMethod" : "AsJob", "monitored" : true, "taskPlaceHolderIdList" : [ ],
> "vdsmTaskIdList" : [ ], "parametersCurrentUser" : { "@class" :
> "org.ovirt.engine.core.common.businessentities.aaa.DbUser", ... }, "description" : "", ...)

[ovirt-users] Re: snapshot going to locked state and stays with it

2018-06-25 Thread Ala Hino
Hi Hari,

Could you please send us the log files of the engine and the hosts?
In addition, what is the version that you are using?

On Mon, Jun 25, 2018 at 10:47 AM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:

> Hi Team,
>
> I took a snapshot using oVirt and it stays in the LOCKED state for half an
> hour.
>
> Date: Jun 25, 2018, 1:06:58 PM | Status: *LOCKED* | Memory: false |
> Description: Immediate2018625-13655 | Defined Memory: 1024MB |
> Physical Memory Guaranteed: 1024MB | Number of CPU Cores: 1 (1:1:1)
>
>
> 1) What could be the reason for this LOCKED state?
> 2) How can I recover from it?
>
> Thanks,
> Hari
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/Z7USXN56POKOMIBZJPMDZNL5NGWFZY75/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I2OZ6HKEIC6O7NMTYXX57OZVMNQEEXJF/


[ovirt-users] Re: Failed to delete snapshot '' for VM 'vmname'.

2018-06-25 Thread Ala Hino
No, you cannot remove the events from this table.
You will have to check the ACTIVE commands, and find those that are correlated
with the snapshot merge commands that failed and were manually fixed.
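
For example, something like this could list the candidates (only a sketch -- the
created_at column name is an assumption):

-- commands still marked ACTIVE, to be matched against the snapshot merges
-- that were repaired by hand
select command_id, command_params_class, created_at
from command_entities
where status = 'ACTIVE'
order by created_at;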

On Mon, Jun 25, 2018 at 10:46 AM,  wrote:

> engine=# SELECT status FROM command_entities;
>status
> 
>  FAILED
>  ENDED_SUCCESSFULLY
>  FAILED
>  FAILED
>  UNKNOWN
>  UNKNOWN
>  ENDED_WITH_FAILURE
>  ENDED_WITH_FAILURE
>  ENDED_WITH_FAILURE
>  ENDED_WITH_FAILURE
>  ACTIVE
>  ACTIVE
>  SUCCEEDED
>  SUCCEEDED
>  ACTIVE
>  ACTIVE
>  ACTIVE
>  SUCCEEDED
>  SUCCEEDED
>  SUCCEEDED
>  ACTIVE
>  SUCCEEDED
> (22 rows)
>
> Is it safe to just remove the events in this table?
>
> Thanks
>
>
> El 2018-06-25 08:43, Ala Hino escribió:
>
>> I'd look into command_entites table in the database.
>> You will probably see several merge related commands that are not in
>> END_(SUCCESSFULLY/FAILED) status.
>>
>> On Mon, Jun 25, 2018 at 10:30 AM,  wrote:
>>
>> Hi,
>>>
>>> We're running oVirt 4.1.9, recently we had an issue with snapshots
>>> so we had to fix them manually. The issue is mostly solved but now
>>> we're seeing a lot of events like this one:
>>>
>>>2018-06-25 07:58:06,637+01 ERROR
>>>
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>
>>> (DefaultQuartzScheduler6) [91430fc5-284d-4c26-8d8d-b7bf4053a7e4]
>>> EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation
>>> ID: 91430fc5-284d-4c26-8d8d-b7bf4053a7e4, Job ID:
>>> da8120a4-9c6d-4379-ad67-a3808db1fd46, Call Stack: null, Custom ID:
>>> null, Custom Event ID: -1, Message: Failed to delete snapshot
>>> '' for VM 'vmname'.
>>>
>>> Which generates the following event in the manager:
>>>
>>>Failed to delete snapshot '' for VM 'vmname'.
>>>
>>> This event is being generated every 10 seconds, so it's kind of
>>> annoying.
>>>
>>> Any way to remove it manually? It doesn't matter if it entails
>>> touching the DB directly.
>>>
>>> Thanks.
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ [1]
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/ [2]
>>> List Archives:
>>>
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/messag
>> e/QYCTXTBGPB25LZYVXGKROG6SSKOCODMQ/
>>
>>> [3]
>>>
>>
>>
>>
>> Links:
>> --
>> [1] https://www.ovirt.org/site/privacy-policy/
>> [2] https://www.ovirt.org/community/about/community-guidelines/
>> [3]
>> https://lists.ovirt.org/archives/list/users@ovirt.org/messag
>> e/QYCTXTBGPB25LZYVXGKROG6SSKOCODMQ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TEOCNI3FLOH3FC2CWVYNBQ6OQ3DE3N6T/


[ovirt-users] Re: Failed to delete snapshot '' for VM 'vmname'.

2018-06-25 Thread Ala Hino
I'd look into the command_entities table in the database.
You will probably see several merge-related commands that are not in an
ENDED_(SUCCESSFULLY/WITH_FAILURE) status.
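
For example (just a sketch -- the LIKE filter is only a loose way to narrow the
output to snapshot-related commands):

-- snapshot removal commands that never reached an ended status
select command_id, command_params_class, status
from command_entities
where command_params_class like '%Snapshot%'
and status not in ('ENDED_SUCCESSFULLY', 'ENDED_WITH_FAILURE');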

On Mon, Jun 25, 2018 at 10:30 AM,  wrote:

> Hi,
>
> We're running oVirt 4.1.9, recently we had an issue with snapshots so we
> had to fix them manually. The issue is mostly solved but now we're seeing a
> lot of events like this one:
>
>2018-06-25 07:58:06,637+01 ERROR [org.ovirt.engine.core.dal.dbb
> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6)
> [91430fc5-284d-4c26-8d8d-b7bf4053a7e4] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID:
> 91430fc5-284d-4c26-8d8d-b7bf4053a7e4, Job ID:
> da8120a4-9c6d-4379-ad67-a3808db1fd46, Call Stack: null, Custom ID: null,
> Custom Event ID: -1, Message: Failed to delete snapshot '' for VM
> 'vmname'.
>
> Which generates the following event in the manager:
>
>Failed to delete snapshot '' for VM 'vmname'.
>
> This event is being generated every 10 seconds, so it's kind of annoying.
>
> Any way to remove it manually? It doesn't matter if it entails touching
> the DB directly.
>
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/communit
> y/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archiv
> es/list/users@ovirt.org/message/QYCTXTBGPB25LZYVXGKROG6SSKOCODMQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W32FDBOFOXGTQ3XMSGBGUVTAIKRNAGMO/


[ovirt-users] Re: General failure

2018-06-19 Thread Ala Hino
Hi,

Did you try to remove the same snapshot while the VM is down?

On Tue, Jun 19, 2018 at 10:44 AM,  wrote:

> Hi Benny,
>
> I used the tool to track one of the illegal volumes:
>
>image:e05874d2-fb8a-4fd2-94ff-2f4bc6438d47
>
>  [...]
>
>  - 887f486b-15cf-4083-9b35-8b7821a7841a
>status: ILLEGAL, voltype: LEAF, format: COW, legality:
> ILLEGAL, type: SPARSE
>
> So I tracked 887f486b-15cf-4083-9b35-8b7821a7841a in the logs and I saw:
>
> 2018-06-16 04:46:20,818+01 INFO  [org.ovirt.engine.core.vdsbrok
> er.vdsbroker.GetVolumeInfoVDSCommand] (pool-5-thread-3)
> [cfc392ec-dc9f-418d-8156-d05c8e7ab9f8] START,
> GetVolumeInfoVDSCommand(HostName = host.domain.es,
> GetVolumeInfoVDSCommandParameters:{expectedEngineErrors='[VolumeDoesNotExist]',
> runAsync='true', hostId='b2dfb945-d767-44aa-a547-2d1a4381f8e3',
> storagePoolId='75bf8f48-970f-42bc-8596-f8ab6efb2b63',
> storageDomainId='110ea376-d789-40a1-b9f6-6b40c31afe01',
> imageGroupId='e05874d2-fb8a-4fd2-94ff-2f4bc6438d47',
> imageId='887f486b-15cf-4083-9b35-8b7821a7841a'}), log id: 2a795424
>
> 2018-06-16 04:46:22,256+01 ERROR 
> [org.ovirt.engine.core.bll.DestroyImageCheckCommand]
> (pool-5-thread-3) [cfc392ec-dc9f-418d-8156-d05c8e7ab9f8] The following
> images were not removed: [887f486b-15cf-4083-9b35-8b7821a7841a]
>
> 2018-06-16 04:47:44,900+01 ERROR [org.ovirt.engine.core.bll.sna
> pshots.RemoveSnapshotSingleDiskLiveCommand] (DefaultQuartzScheduler10)
> [cfc392ec-dc9f-418d-8156-d05c8e7ab9f8] Snapshot
> '7b6f43ac-d3ad-47b2-8882-f5dccd74cf07' images
> '887f486b-15cf-4083-9b35-8b7821a7841a'..'538600a5-31ab-40af-b326-d56bfc92bb0b'
> merged, but volume removal failed. Some or all of the following volumes may
> be orphaned: [887f486b-15cf-4083-9b35-8b7821a7841a]. Please retry Live
> Merge on the snapshot to complete the operation.
>
> Can you provide some additional steps?
>
> Thank you!
>
>
> El 2018-06-18 18:27, Benny Zlotnik escribió:
>
>> We prevent starting VMs with illegal images[1]
>>
>> You can use "$ vdsm-tool dump-volume-chains"
>> to look for illegal images and then look in the engine log for the
>> reason they became illegal,
>>
>> if it's something like this, it usually means you can remove them:
>>
>> 63696:2018-06-15 09:41:58,134+01 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>> (DefaultQuartzScheduler2) [6fa97ea4-8f61-4a48-8e08-a8bb1b9de826]
>> Merging of snapshot 'e609d6cc-2025-4cf0-ad34-03519131cdd1' images
>> '1d01c6c8-b61e-42bc-a054-f04c3f792b10'..'ef6f732e-2a7a-4a14-
>> a10f-bcc88bdd805f'
>> failed. Images have been marked illegal and can no longer be previewed
>> or reverted to. Please retry Live Merge on the snapshot to complete
>> the operation.
>>
>> On Mon, Jun 18, 2018 at 5:46 PM,  wrote:
>>
>> Indeed, when the problem started I think the SPM was the host I
>>> added as VDSM log in the first e-mail. Currently it is the one I
>>> sent in the second mail.
>>>
>>> FWIW, if it helps to debug more fluently, we can provide VPN access
>>> to our infrastructure so you can access and see whatever you need
>>> (all hosts, DB, etc...).
>>>
>>> Right now the machines that keep running work, but once shut down
>>> they start showing the problem below...
>>>
>>> Thank you
>>>
>>> El 2018-06-18 15:20, Benny Zlotnik escribió:
>>>
>>> I'm having trouble following the errors, I think the SPM changed or
>>> the vdsm log from the right host might be missing.
>>>
>>> However, I believe what started the problems is this transaction
>>> timeout:
>>>
>>> 2018-06-15 14:20:51,378+01 ERROR
>>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>>> (org.ovirt.thread.pool-6-thread-29)
>>> [1db468cb-85fd-4189-b356-d31781461504] [within thread]: endAction
>>> for
>>> action type RemoveSnapshotSingleDisk threw an exception.:
>>> org.springframework.jdbc.CannotGetJdbcConnectionException: Could
>>> not
>>> get JDBC Connection; nested exception is java.sql.SQLException:
>>> javax.resource.ResourceException: IJ000460: Error checking for a
>>> transaction
>>>  at
>>>
>>> org.springframework.jdbc.datasource.DataSourceUtils.getConne
>> ction(DataSourceUtils.java:80)
>>
>>> [spring-jdbc.jar:4.2.4.RELEASE]
>>>  at
>>>
>>> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTempl
>> ate.java:615)
>>
>>> [spring-jdbc.jar:4.2.4.RELEASE]
>>>  at
>>>
>>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:680)
>>
>>> [spring-jdbc.jar:4.2.4.RELEASE]
>>>  at
>>>
>>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:712)
>>
>>> [spring-jdbc.jar:4.2.4.RELEASE]
>>>  at
>>>
>>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:762)
>>
>>> [spring-jdbc.jar:4.2.4.RELEASE]
>>>  at
>>>
>>> org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$P
>> ostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDi
>> alect.java:152)
>>
>>> [dal.jar:]
>>>
>>> This looks like a bug
>>>
>>> Regardless, I am not sure restoring a backup would help since you
>>> 

[ovirt-users] Re: How to delete leftover of a failed live storage migration disk

2018-05-21 Thread Ala Hino
You can try to find the volume on the storage:

find /rhev/data-center/5af30d59-004c-02f2-01c9-00b8/679c0725-75fb-4af7-bff1-7c447c5d789c/images/ -name d2a89b5e-7d62-4695-96d8-b762ce52b379

The result will be:

/rhev/data-center/5af30d59-004c-02f2-01c9-00b8/679c0725-75fb-4af7-bff1-7c447c5d789c/images/*imgUUID*/d2a89b5e-7d62-4695-96d8-b762ce52b379

imgUUID is what you are looking for.
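
If you want to print just the imgUUID, something along these lines should work (a
sketch only, using the UUIDs from this thread; the data-center component of the
path is wildcarded because the pool UUID above is abbreviated):

# the parent directory name of the volume file is the imgUUID
VOL=d2a89b5e-7d62-4695-96d8-b762ce52b379
find /rhev/data-center/*/679c0725-75fb-4af7-bff1-7c447c5d789c/images/ -name "$VOL" \
  | xargs -r -n1 dirname | xargs -r -n1 basename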


On Fri, May 11, 2018 at 1:22 PM, Gianluca Cecchi 
wrote:

> Hello,
> I had an error during live storage migration of a disk.
> The destination image was created but the process was not completed,
> because of a bug in the original version of sw.
> Then I updated sw but if I try to run again the move of the same disk to
> the same destination storage domain I get
>
> VDSM command HSMGetAllTasksStatusesVDS failed: Cannot create Logical
> Volume: ('679c0725-75fb-4af7-bff1-7c447c5d789c', 'd2a89b5e-7d62-4695-96d8-
> b762ce52b379')
>
> On destination storage domain, that is empty, from web admin gui I see
> only the 2 OVF_STORE disks.
> From OS point of view using lvs I see the leftover LV that oVirt complains
> not able to create (I suppose because already existent due to the former
> error)
>
> # lvs 679c0725-75fb-4af7-bff1-7c447c5d789c/d2a89b5e-7d62-
> 4695-96d8-b762ce52b379
>   LV   VG
>  Attr   LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   d2a89b5e-7d62-4695-96d8-b762ce52b379 679c0725-75fb-4af7-bff1-7c447c5d789c
> -wi--- 55.00g
>
> I know that I should use the "vdsClient -s 0 deleteVolume " command from
> the SPM host.
>
> the syntax should be
> # vdsClient -s 0 deleteVolume --help
> Error using command: list index out of range
>
> deleteVolume
>,...,  []
> Deletes an volume if its a leaf. Else returns error
>
> I have difficulties to do the exact mapping of the various elements.
> Is it right what below?
>
> sdUUID --> VG name
>
> spUUID I can retrieve using:
>
> # vdsClient -s 0 getStorageDomainInfo 679c0725-75fb-4af7-bff1-7c447c5d789c
> uuid = 679c0725-75fb-4af7-bff1-7c447c5d789c
> type = ISCSI
> vguuid = nkoZA2-nQOu-oeXX-Phpa-moqh-FWuR-AFAh4B
> metadataDevice = 36589cfc006dd999f5618bf759d3f
> state = OK
> version = 4
> role = Master
> vgMetadataDevice = 36589cfc006dd999f5618bf759d3f
> class = Data
> pool = ['5af30d59-004c-02f2-01c9-00b8']
> name = ISCSI_400G
>
> so spUUID is the pool --> 5af30d59-004c-02f2-01c9-00b8 in my case
> ?
>
> for imgUUID I don't know a command to retrieve.
> in my case the target storage domain (ISCSI_400G) in this moment is the
> master one and I can see it under  /rhev/data-center/mnt/blockSD/
> and so I find
>
> # ll /rhev/data-center/mnt/blockSD/679c0725-75fb-4af7-bff1-
> 7c447c5d789c/images/
> total 4
> drwxr-xr-x. 2 vdsm kvm 4096 May 10 15:39 530b3e7f-4ce4-4051-9cac-
> 1112f5f9e8b5
>
> So it seems to me in my case imgUUID is 530b3e7f-4ce4-4051-9cac-
> 1112f5f9e8b5
>
> But even if it is right in my particular case, how can I get in general?
>
> volUUID ? Is it the LV name corresponding, so in my case
> d2a89b5e-7d62-4695-96d8-b762ce52b379 ?
>
> The result of the vdsClient command should be the removal of the LV as well?
>
> Thanks in advance for any insight or link to details...
>
>
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: delete snapshot error

2018-05-10 Thread Ala Hino
Hi,

Could you please share the full logs of the ovirt manager (engine) and vdsm?
The VM cannot be started probably because there is a volume with illegal
state on the storage.

On Thu, May 10, 2018 at 5:59 AM, 董青龙  wrote:

> Hi all,
> I am using oVirt 4.1. I failed to delete a snapshot of a VM. The
> state of the snapshot stayed locked and I could not start the VM. Can anyone
> help? Thanks!
>
> PS: some logs:
> VDSM command ReconcileVolumeChainVDS failed: Could not acquire
> resource. Probably resource factory threw an exception.: ()
> VDSM host command GetVGInfoVDS failed: Volume Group does not
> exist: (u'vg_uuid: nL3wgg-uctH-1lGd-Vyl1-f1P2-fk95-tH5tlj',)
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


Re: [ovirt-users] problem to create snapshot

2018-05-06 Thread Ala Hino
[Please always CC ovirt-users so other engineers can provide help]

It seems that the storage domain is corrupted.
Can you please run the following command and send the output?

vdsm-client StorageDomain getInfo storagedomainID=

You may need to move the storage to maintenance and re-initialize it.

On Thu, May 3, 2018 at 10:10 PM, Marcelo Leandro <marcelol...@gmail.com>
wrote:

> Hello,
>
> Thank you for reply:
>
> oVirt Version -  4.1.9
> Vdsm Versoin - 4.20.23
>
> attached logs,
>
> Very Thanks.
>
> Marcelo Leandro
>
> 2018-05-03 15:59 GMT-03:00 Ala Hino <ah...@redhat.com>:
>
>> Can you please share more info?
>> - The version you are using
>> - Full log of vdsm and the engine
>>
>> Is the VM running or down while creating the snapshot?
>>
>> On Thu, May 3, 2018 at 8:32 PM, Marcelo Leandro <marcelol...@gmail.com>
>> wrote:
>>
>>> Anyone help me?
>>>
>>> 2018-05-02 17:55 GMT-03:00 Marcelo Leandro <marcelol...@gmail.com>:
>>>
>>>> Hello ,
>>>>
>>>> I am getting an error when trying to do a snapshot:
>>>>
>>>> Error msg in SPM log.
>>>>
>>>> 2018-05-02 17:46:11,235-0300 WARN  (tasks/2) [storage.ResourceManager]
>>>> Resource factory failed to create resource '01_img_6e5cce71-3438-4045-9d5
>>>> 4-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6'. Canceling
>>>> request. (resourceManager:543)
>>>> Traceback (most recent call last):
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
>>>> line 539, in registerResource
>>>> obj = namespaceObj.factory.createResource(name, lockType)
>>>>   File 
>>>> "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py",
>>>> line 193, in createResource
>>>> lockType)
>>>>   File 
>>>> "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py",
>>>> line 122, in __getResourceCandidatesList
>>>> imgUUID=resourceName)
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line
>>>> 213, in getChain
>>>> if srcVol.isLeaf():
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>>>> 1430, in isLeaf
>>>> return self._manifest.isLeaf()
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>>>> 138, in isLeaf
>>>> return self.getVolType() == sc.type2name(sc.LEAF_VOL)
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>>>> 134, in getVolType
>>>> self.voltype = self.getMetaParam(sc.VOLTYPE)
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>>>> 118, in getMetaParam
>>>> meta = self.getMetadata()
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
>>>> line 112, in getMetadata
>>>> md = VolumeMetadata.from_lines(lines)
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py",
>>>> line 103, in from_lines
>>>> "Missing metadata key: %s: found: %s" % (e, md))
>>>> MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing
>>>> metadata key: 'DOMAIN': found: {'NONE': '#
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> #'}",)
>>>> 2018-05-02 17:46:11,286-0300 WARN  (tasks/2)
>>>> [storage.ResourceManager.Request] (ResName='01_img_6e5cce71-3438
>>>> -4045-9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6',
>>>> ReqID='a3cd9388-977b-45b9-9aa0-e431aeff8750') Tried to cancel a
>>>> processed request (resourceManager:187)
>>>> 2018-05-02 17:46:11,286-0300 ERROR (tasks/2) [storage.TaskManager.Task]
>>>> (Task='ba0766ca-08a1-4d65-a4e9-1e0171939037') Unexpected error
>

Re: [ovirt-users] problem to create snapshot

2018-05-03 Thread Ala Hino
Can you please share more info?
- The version you are using
- Full log of vdsm and the engine

Is the VM running or down while creating the snapshot?

On Thu, May 3, 2018 at 8:32 PM, Marcelo Leandro 
wrote:

> Anyone help me?
>
> 2018-05-02 17:55 GMT-03:00 Marcelo Leandro :
>
>> Hello ,
>>
>> I am getting an error when trying to do a snapshot:
>>
>> Error msg in SPM log.
>>
>> 2018-05-02 17:46:11,235-0300 WARN  (tasks/2) [storage.ResourceManager]
>> Resource factory failed to create resource '01_img_6e5cce71-3438-4045-9d5
>> 4-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6'. Canceling request.
>> (resourceManager:543)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
>> line 539, in registerResource
>> obj = namespaceObj.factory.createResource(name, lockType)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py",
>> line 193, in createResource
>> lockType)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py",
>> line 122, in __getResourceCandidatesList
>> imgUUID=resourceName)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line
>> 213, in getChain
>> if srcVol.isLeaf():
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>> 1430, in isLeaf
>> return self._manifest.isLeaf()
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>> 138, in isLeaf
>> return self.getVolType() == sc.type2name(sc.LEAF_VOL)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>> 134, in getVolType
>> self.voltype = self.getMetaParam(sc.VOLTYPE)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
>> 118, in getMetaParam
>> meta = self.getMetadata()
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
>> line 112, in getMetadata
>> md = VolumeMetadata.from_lines(lines)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py",
>> line 103, in from_lines
>> "Missing metadata key: %s: found: %s" % (e, md))
>> MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing
>> metadata key: 'DOMAIN': found: {'NONE': '#
>> #'}",)
>> 2018-05-02 17:46:11,286-0300 WARN  (tasks/2)
>> [storage.ResourceManager.Request] (ResName='01_img_6e5cce71-3438
>> -4045-9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6',
>> ReqID='a3cd9388-977b-45b9-9aa0-e431aeff8750') Tried to cancel a
>> processed request (resourceManager:187)
>> 2018-05-02 17:46:11,286-0300 ERROR (tasks/2) [storage.TaskManager.Task]
>> (Task='ba0766ca-08a1-4d65-a4e9-1e0171939037') Unexpected error (task:875)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
>> 882, in _run
>> return fn(*args, **kargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
>> 336, in run
>> return self.cmd(*self.argslist, **self.argsdict)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
>> line 79, in wrapper
>> return method(self, *args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1938,
>> in createVolume
>> with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE):
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
>> line 1025, in acquireResource
>> return _manager.acquireResource(namespace, name, lockType,
>> timeout=timeout)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
>> line 475, in acquireResource
>> raise se.ResourceAcqusitionFailed()
>> ResourceAcqusitionFailed: Could not acquire resource. Probably resource
>> factory threw an exception.: ()
>>
>>
>> Anyone help?
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] no available host to migrate on - for some of my VM's

2018-04-24 Thread Ala Hino
Can you please share the manager (engine) log?
Engine log is under: /var/log/ovirt-engine/engine.log

On Tue, Apr 24, 2018 at 3:32 PM, Nico De Ranter  wrote:

>
> Hi,
>
> I've installed a small test setup with 2 ovirt 4.2.2.6 nodes.  I've
> created a number of VM's which are happily running distributed over both
> hosts.  Now I want to try migrating them between the hosts. However for
> some reason I can only migrate some of the VM's.  When I click on 'migrate'
> for some of the VM's I get the message 'No available host to migrate on'.
>
> I tried comparing the settings via the GUI for a VM that can be migrated
> and one that cannot be migrated, but I do not see any difference.
> Note: all VM's are running Debian.
>
> Any ideas what could be the reason?  Is there any other location or log
> file where I can get more details?
>
> Thanks in advance
>
> Nico
>
> --
>
> Nico De Ranter
>
> Operations Engineer
>
> T. +32 16 38 72 10
>
>
> 
>
> 
>
>
> eSATURNUS
> Romeinse straat 12
> 3001 Leuven – Belgium
>
> T. +32 16 40 12 82
> F. +32 16 40 84 77
> www.esaturnus.com
>
> 
>
> *For Service & Support *
>
> Support Line: +32 16 387210 or via email : supp...@esaturnus.com
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python-SDK4: Knowing snapshot status?

2018-04-09 Thread Ala Hino
Hi,

After issuing the remove operation, you can fetch the VM snapshots and
check the status of the snapshot.
Here is an example of how to fetch VM snapshots:

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/list_vm_snapshots.py

You'd want to wait until the status is OK.
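
For example, something along these lines (an untested sketch; the engine URL, credentials and VM name are placeholders, and if I recall correctly the field to poll is snapshot_status rather than status):

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()

# Create the snapshot, then poll its snapshot_status until it is OK.
snap = snapshots_service.add(types.Snapshot(description='Nightly backup'))
snap_service = snapshots_service.snapshot_service(snap.id)
while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
    time.sleep(10)

# Only now start removing the older snapshots.
connection.close()

This avoids having to key off the 409 error code, as in the retry loop quoted below.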

On Mon, Apr 9, 2018 at 2:42 PM,  wrote:

> Hi,
>
> I'm running ovirt-engine-sdk-python 4.2.4 and I'm performing some
> snapshot-related tasks. I'd like to somehow control the status of the
> snapshot in order to know when I'll be able to run the next
> snapshot-related operation.
>
> For example, I'd like to create a new snapshot and then delete X oldest
> snapshots. After creating the snapshot I have to make sure the snapshot
> operation has concluded to run the deletion.
>
> However, I'm unable to find a native way to get the status of a snapshot.
>
> In [1]: snap = conn.follow_link(vm.snapshots)[3]   # This returns one
> snapshot
>
> In [2]: snap.status
>
> In [3]: snap.status_detail
>
> So both status-related properties return None. I've managed to find a
> "poorman's" way by doing this:
>
> while True:
> try:
> snaps_service.service(snap.id).remove()
> except Error, e:
> if e.code == 409:
> sleep(30)
> continue
> else:
> break
>
> Which works but is quite "tricky".
>
> Is there a better way to do this?
>
> Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot or not?

2017-11-16 Thread Ala Hino
Hi Tibor,

I am not sure I completely understand the scenario.

You have a VM with two disks and then you create a snapshot including the
two disks?
Before creating the snapshot, did the VM recognize the two disks?

On Mon, Nov 13, 2017 at 10:36 PM, Demeter Tibor  wrote:

> Dear Users,
>
> I have a disk of a VM that has a snapshot. It is very interesting,
> because there are two other disks of that VM, but there are no snapshots of
> them.
> I found this while I was trying to migrate a storage domain between two
> datacenters.
> Because I didn't import that VM from the storage domain, I created another
> similar VM with exactly the same sized thin-provisioned disks, then renamed
> and copied my originals into place.
>
> The VM started successfully, but the disk that contains a snapshot was not
> recognized by the OS. I can see the whole disk as raw (disk id, format in
> oVirt, filenames of images, etc.). I think oVirt doesn't know that it is a
> snapshotted image and uses it as raw. Is that possible?
> I don't see any snapshot in Snapshots. I have also tried to list snapshots
> with qemu-img info and qemu-img snapshot -l, but they do not show any
> snapshots in the image.
>
> Really, I don't know how this is possible.
>
> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
> 5974fd33-af4c-4e3b-aadb-bece6054eb6b
> image: 5974fd33-af4c-4e3b-aadb-bece6054eb6b
> file format: qcow2
> virtual size: 13T (13958643712000 bytes)
> disk size: 12T
> cluster_size: 65536
> backing file: ../8d815282-6957-41c0-bb3e-6c8f4a23a64b/723ad5aa-02f6-
> 4067-ac75-0ce0a761627f
> backing file format: raw
> Format specific information:
> compat: 0.10
>
> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
> 723ad5aa-02f6-4067-ac75-0ce0a761627f
> image: 723ad5aa-02f6-4067-ac75-0ce0a761627f
> file format: raw
> virtual size: 2.0T (2147483648000 bytes)
> disk size: 244G
>
> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# ll
> total 13096987560
> -rw-rw. 1 36 36 13149448896512 Nov 13 13:42 5974fd33-af4c-4e3b-aadb-
> bece6054eb6b
> -rw-rw. 1 36 36 1048576 Nov 13 19:34 5974fd33-af4c-4e3b-aadb-
> bece6054eb6b.lease
> -rw-r--r--. 1 36 36 262 Nov 13 19:54 5974fd33-af4c-4e3b-aadb-
> bece6054eb6b.meta
> -rw-rw. 1 36 36 2147483648000 Jul  8  2016 723ad5aa-02f6-4067-ac75-
> 0ce0a761627f
> -rw-rw. 1 36 36 1048576 Jul  7  2016 723ad5aa-02f6-4067-ac75-
> 0ce0a761627f.lease
> -rw-r--r--. 1 36 36 335 Nov 13 19:52 723ad5aa-02f6-4067-ac75-
> 0ce0a761627f.meta
>
> qemu-img snapshot -l 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>
> (nothing)
>
> Because it is a very big (13 TB) disk I can't migrate it to another image,
> because I don't have enough free space. So I would just like to use it in
> oVirt like in the past.
>
> I have a very old ovirt (3.5)
>
> How can I use this disk?
>
> Thanks in advance,
>
> Regards,
>
> Tibor
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Engine CI unstable

2017-09-26 Thread Ala Hino
Hi,

In the following builds, I see that the build passes on Fedora but is
unstable on RHEL. However, checking the CI console, I see that the build
succeeded.

Links to builds:

http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64/30692/
http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64/30720/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot removal time

2017-09-22 Thread Ala Hino
On Sep 22, 2017 3:54 PM, "Troels Arvin"  wrote:

Hello,

Ala wrote:
> What's the version of the manager (engine)?

4.1.1



> Could you please provide the link or the SPM and the host
> running the VM?

I don't understand that. I cannot provide intimate details about the
installation, nor a link to it.


Typo. I meant logs, not links.



--
Regards,
Troels

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot removal time

2017-09-22 Thread Ala Hino
Hello,

What's the version of the manager (engine)?

Could you please provide the link or the SPM and the host running the VM?

Thanks,
Ala

On Sep 22, 2017 1:19 PM, "Troels Arvin"  wrote:

> Hello,
>
> I have a RHV 4.1 virtualized guest-server with a number of rather large
> VirtIO virtual disks attached. The virtual disks are allocated from a
> fibre channel (block) storage domain. The hypervisor servers run RHEL 7.4.
>
> When I take a snapshot of the guest, then it takes a long time to remove
> the snapshots again, when the guest is powered off (a snapshot of a 2 TiB
> disk takes around 3 hours to remove). However, when the guest is running,
> then snapshot removal is very quick (perhaps around five minutes per
> snapshot). The involved disks have not been written much to while they
> had snapshots.
>
> I would expect the opposite: I.e., when the guest is turned off, then I
> would assume that oVirt can handle snapshot removal in a much more
> aggressive fashion than when performing a live snapshot removal?
>
> When performing offline snapshot removal, then on the hypervisor having
> the SPM role, I see the following in output from "ps xauw":
>
> vdsm 10255 8.3 0.0 389144 27196 ? S qemu-img convert -p -t none -T none -f qcow2 /rhev/data-center/mnt/blockSD/xxx/
> images/yyy/zzz -O raw /rhev/data-center/mnt/blockSD/xxx/images/yyy/
> zzz_MERGE
>
> I don't see the same kind of process running on a guest's hypervisor when
> online snapshot removal is in progress.
>
> I've read most of https://www.ovirt.org/develop/release-management/features/storage/remove-snapshot/
> My interpretation from that document is that I should expect to see "qemu-
> img commit" commands instead of "qemu-img convert" processes. Or?
>
> The RHV system involved is somewhat old, having been upgraded many times
> from 3.x through 4.1. Could it be that it carries around old left-overs
> which result in obsolete snapshot removal behavior?
>
> --
> Regards,
> Troels Arvin
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.

2017-07-16 Thread Ala Hino
Please note that bug 1461029 is about live merge
(the VM is up) while, at least according to the logs, the issue described
here is related to cold merge (the VM is down).

On Thu, Jul 13, 2017 at 7:22 PM, richard anthony falzini <
richardfalz...@gmail.com> wrote:

> Hi,
> I have the same problem with gluster.
> This is a bug that I opened: https://bugzilla.redhat.com/show_bug.cgi?id=1461029
> In the bug I used a single-disk VM, but I am starting to notice the problem with
> multiple-disk VMs.
>
>
> 2017-07-13 0:07 GMT+02:00 Devin Acosta :
>
>> We are running a fresh install of oVirt 4.1.3, using iSCSI; the VM in
>> question has multiple disks (4 to be exact). It snapshotted OK while on
>> iSCSI, however when I went to delete the single snapshot that existed, it
>> went into a Locked state and never came back. The deletion has been going for
>> well over an hour, and since the snapshot is less than 12 hours old I am not
>> convinced that it’s really doing anything.
>>
>> I have seen that doing some Googling indicates there might be some known
>> issues with iSCSI/Block Storage/Multiple Disk Snapshot issues.
>>
>> In the logs on the engine it shows:
>>
>> 2017-07-12 21:59:42,473Z INFO  [org.ovirt.engine.core.bll.Se
>> rialChildCommandsExecutionCallback] (DefaultQuartzScheduler2)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>> type:'PrepareMerge' to complete
>> 2017-07-12 21:59:52,480Z INFO  [org.ovirt.engine.core.bll.Co
>> ncurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler2)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
>> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
>> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
>> to complete
>> 2017-07-12 21:59:52,483Z INFO  [org.ovirt.engine.core.bll.Se
>> rialChildCommandsExecutionCallback] (DefaultQuartzScheduler2)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>> type:'PrepareMerge' to complete
>> 2017-07-12 22:00:02,490Z INFO  [org.ovirt.engine.core.bll.Co
>> ncurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler6)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
>> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
>> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
>> to complete
>> 2017-07-12 22:00:02,493Z INFO  [org.ovirt.engine.core.bll.Se
>> rialChildCommandsExecutionCallback] (DefaultQuartzScheduler6)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>> type:'PrepareMerge' to complete
>> 2017-07-12 22:00:12,498Z INFO  [org.ovirt.engine.core.bll.Co
>> ncurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler3)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
>> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
>> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
>> to complete
>> 2017-07-12 22:00:12,501Z INFO  [org.ovirt.engine.core.bll.Se
>> rialChildCommandsExecutionCallback] (DefaultQuartzScheduler3)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>> type:'PrepareMerge' to complete
>> 2017-07-12 22:00:22,508Z INFO  [org.ovirt.engine.core.bll.Co
>> ncurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler5)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
>> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
>> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
>> to complete
>> 2017-07-12 22:00:22,511Z INFO  [org.ovirt.engine.core.bll.Se
>> rialChildCommandsExecutionCallback] (DefaultQuartzScheduler5)
>> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>> type:'PrepareMerge' to complete
>>
>> This is what I seen on the SPM when I grep’d the Snapshot ID.
>>
>> 2017-07-12 14:22:18,773-0700 INFO  (jsonrpc/6) [vdsm.api] START
>> createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>> spUUID=u'0001-0001-0001-0001-0311',
>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
>> volFormat=4, preallocate=2, diskType=2, 
>> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845',
>> desc=u'', 

Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Ala Hino
On Tue, Jun 27, 2017 at 1:37 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
>
> On 27.06.2017 at 11:27, Gianluca Cecchi wrote:
> > Hello,
> > I have a storage domain that I have to empty, moving its disks to
> > another storage domain,
> >
> > Both source and target domains are iSCSI
> > What is the behavior in case of preallocated and thin provisioned disk?
> > Are they preserved with their initial configuration?
>
> yes, they stay within their initial configuration
>
> >
> > Suppose I have one 500Gb thin provisioned disk
> > Why can I indirectly see that the actual size is 300Gb only in Snapshots
> > tab --> Disks of its VM ?
>
> if you are using live storage migration, ovirt creates a qcow/lvm
> snapshot of the vm block device. but for whatever reason, it does NOT
> remove the snapshot after the migration has finished. you have to remove
> it yourself, otherwise disk usage will grow more and more.
>

I believe you are referring to the "Auto-generated" snapshot created during
live storage migration. This behavior is reported in
https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.
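
If you are still on a version that leaves the snapshot behind, a rough cleanup sketch with the v4 Python SDK could look like this (assumptions: the connection details and VM name are placeholders, and the leftover snapshot is identified by an "Auto-generated ..." description, so double-check the exact description in your setup first):

import time

import ovirtsdk4 as sdk

# Placeholder connection details -- adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()

for snap in snapshots_service.list():
    # Assumption: leftover LSM snapshots carry an "Auto-generated ..." description.
    if snap.description and snap.description.startswith('Auto-generated'):
        snapshots_service.snapshot_service(snap.id).remove()
        # Wait for the merge to finish before touching the next snapshot.
        while any(s.id == snap.id for s in snapshots_service.list()):
            time.sleep(15)

connection.close()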

>
> >
> > Thanks,
> > Gianluca
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.0.6 - live disk migration fails

2016-11-24 Thread Ala Hino
Thanks!
Please add Vdsm and Engine logs.

On Nov 24, 2016 4:03 PM, "Maton, Brett" <mat...@ltresources.co.uk> wrote:

> Sure when I get 5 minutes :)
>
> Which logs would you be interested in ?
>
> On 24 November 2016 at 13:33, Ala Hino <ah...@redhat.com> wrote:
>
>> Hi Brett,
>>
>> I apologize, but I confused this with a different issue in this area.
>> I would appreciate it if you could provide logs. If possible, it might actually
>> be simpler to open a bug and attach the logs there.
>>
>> Thanks!
>>
>> On Thu, Nov 24, 2016 at 12:13 PM, Maton, Brett <mat...@ltresources.co.uk>
>> wrote:
>>
>>> Ok thanks
>>>
>>> On 24 November 2016 at 10:00, Ala Hino <ah...@redhat.com> wrote:
>>>
>>>> It is a known issue and Maor Lipchuk (mlipchuk) is working on a fix.
>>>>
>>>> On Thu, Nov 24, 2016 at 11:57 AM, Maton, Brett <
>>>> mat...@ltresources.co.uk> wrote:
>>>>
>>>>> If I try to migrate a disk of a running VM to another storage domain
>>>>> it fails with the following message:
>>>>>
>>>>> Operation Cancelled
>>>>>
>>>>> Error while executing action: User is not logged in.
>>>>>
>>>>>
>>>>> Migrating disks of stopped VM's continues to work.
>>>>>
>>>>> Probably a bug ?
>>>>>
>>>>> ___
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.0.6 - live disk migration fails

2016-11-24 Thread Ala Hino
Hi Brett,

I apologize, but I confused this with a different issue in this area. I would
appreciate it if you could provide logs. If possible, it might actually be
simpler to open a bug and attach the logs there.

Thanks!

On Thu, Nov 24, 2016 at 12:13 PM, Maton, Brett <mat...@ltresources.co.uk>
wrote:

> Ok thanks
>
> On 24 November 2016 at 10:00, Ala Hino <ah...@redhat.com> wrote:
>
>> It is a known issue and Maor Lipchuk (mlipchuk) is working on a fix.
>>
>> On Thu, Nov 24, 2016 at 11:57 AM, Maton, Brett <mat...@ltresources.co.uk>
>> wrote:
>>
>>> If I try to migrate a disk of a running VM to another storage domain it
>>> fails with the following message:
>>>
>>> Operation Cancelled
>>>
>>> Error while executing action: User is not logged in.
>>>
>>>
>>> Migrating disks of stopped VM's continues to work.
>>>
>>> Probably a bug ?
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.0.6 - live disk migration fails

2016-11-24 Thread Ala Hino
It is a known issue and Maor Lipchuk (mlipchuk) is working on a fix.

On Thu, Nov 24, 2016 at 11:57 AM, Maton, Brett 
wrote:

> If I try to migrate a disk of a running VM to another storage domain it
> fails with the following message:
>
> Operation Cancelled
>
> Error while executing action: User is not logged in.
>
>
> Migrating disks of stopped VM's continues to work.
>
> Probably a bug ?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cleanup illegal snapshot

2016-10-09 Thread Ala Hino
Hi Markus,

Few errors are expected. Do you still see the snapshot in the GUI?
Can you please send engine logs as well.

Thanks,
Ala

On Sun, Oct 9, 2016 at 8:33 PM, Markus Stockhausen <stockhau...@collogia.de>
wrote:

> Hi Ala,
>
> that did not help. VDSM log tells me that the delta qcow2 file is missing:
>
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
> volUUID=volUUID).getInfo()
>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
> volUUID)
>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
> self.validate()
>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
> self.validateVolumePath()
>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
> validateVolumePath
> raise se.VolumeDoesNotExist(self.volUUID)
> VolumeDoesNotExist: Volume does not exist: (u'c277351d-e2b1-4057-aafb-
> 55d4b607ebae',)
> ...
> Thread-196::ERROR::2016-10-09 19:31:07,037::utils::739::root::(wrapper)
> Unhandled exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 736, in
> wrapper
> return f(*a, **kw)
>   File "/usr/share/vdsm/virt/vm.py", line 5264, in run
> self.update_base_size()
>   File "/usr/share/vdsm/virt/vm.py", line 5257, in update_base_size
> self.drive.imageID, topVolUUID)
>   File "/usr/share/vdsm/virt/vm.py", line 5191, in _getVolumeInfo
> (domainID, volumeID))
> StorageUnavailableError: Unable to get volume info for domain
> 47202573-6e83-42fd-a274-d11f05eca2dd volume c277351d-e2b1-4057-aafb-
> 55d4b607ebae
>
> Do you have any idea?
>
> Markus
> 
>
> *Von:* Ala Hino [ah...@redhat.com]
> *Gesendet:* Donnerstag, 6. Oktober 2016 12:29
> *An:* Markus Stockhausen
>
> *Betreff:* Re: [ovirt-users] Cleanup illegal snapshot
>
> Indeed, retry live merge. There is no harm in retrying live merge. As
> mentioned, if the image deleted at storage side, retrying live merge should
> clean the engine side.
>
> On Thu, Oct 6, 2016 at 1:06 PM, Markus Stockhausen <
> stockhau...@collogia.de> wrote:
>
>> Hi,
>>
>> we are on OVirt 4.0.4. As explained the situation is as follows:
>>
>> - On Disk we have the base image and the delta qcow2 file
>> - Qemu runs only on the base image
>> - The snapshot in Qemu is tagged as illegal
>>
>> So you say: "Just retry a live merge and everything will cleanup."
>> Did I get it right?
>>
>> Markus
>>
>> ---
>>
>> *Von:* Ala Hino [ah...@redhat.com]
>> *Gesendet:* Donnerstag, 6. Oktober 2016 11:21
>> *An:* Markus Stockhausen
>> *Cc:* Ovirt Users; Nir Soffer; Adam Litke
>>
>> *Betreff:* Re: [ovirt-users] Cleanup illegal snapshot
>>
>> Hi Markus,
>>
>> What's the version that you are using?
>> In oVirt 3.6.6, illegal snapshots could be removed by retrying to live
>> merge them again. Assuming the previous live merge of the snapshot
>> successfully completed but the engine failed to get the result, the second
>> live merge should do the necessary cleanups at the engine side. See
>> https://bugzilla.redhat.com/1323629
>>
>> Hope this helps,
>> Ala
>>
>> On Thu, Oct 6, 2016 at 11:53 AM, Markus Stockhausen <
>> stockhau...@collogia.de> wrote:
>>
>>> Hi Ala,
>>>
>>> > Von: Adam Litke [ali...@redhat.com]
>>> > Gesendet: Freitag, 30. September 2016 15:54
>>> > An: Markus Stockhausen
>>> > Cc: Ovirt Users; Ala Hino; Nir Soffer
>>> > Betreff: Re: [ovirt-users] Cleanup illegal snapshot
>>> >
>>> > On 30/09/16 05:47 +, Markus Stockhausen wrote:
>>> > >Hi,
>>> > >
>>> > >if a OVirt snapshot is illegal we might have 2 situations.
>>> > >
>>> > >1) qemu is still using it - lsof shows qemu access to the base raw
>>> and the
>>> > >delta qcow2 file. -> E.g. a previous live merge failed. In the past we
>>> > >successfully solved that situation by setting the status of

Re: [ovirt-users] Cleanup illegal snapshot

2016-10-06 Thread Ala Hino
Hi Markus,

What's the version that you are using?
In oVirt 3.6.6, illegal snapshots could be removed by retrying to live
merge them again. Assuming the previous live merge of the snapshot
successfully completed but the engine failed to get the result, the second
live merge should do the necessary cleanups at the engine side. See
https://bugzilla.redhat.com/1323629
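
For reference, retrying the merge is just a matter of issuing the snapshot removal again while the VM is running. A minimal sketch of that, assuming a 4.x engine and the v4 Python SDK (the connection details, VM name and snapshot ID are placeholders):

import time

import ovirtsdk4 as sdk

# Placeholders -- point these at your engine, VM and the illegal snapshot.
SNAPSHOT_ID = '00000000-0000-0000-0000-000000000000'

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()

# Re-issuing the removal triggers a new live merge; if the image is already
# gone on storage, the engine should just clean up its own records.
snapshots_service.snapshot_service(SNAPSHOT_ID).remove()
while any(s.id == SNAPSHOT_ID for s in snapshots_service.list()):
    time.sleep(15)

connection.close()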

Hope this helps,
Ala

On Thu, Oct 6, 2016 at 11:53 AM, Markus Stockhausen <stockhau...@collogia.de
> wrote:

> Hi Ala,
>
> > Von: Adam Litke [ali...@redhat.com]
> > Gesendet: Freitag, 30. September 2016 15:54
> > An: Markus Stockhausen
> > Cc: Ovirt Users; Ala Hino; Nir Soffer
> > Betreff: Re: [ovirt-users] Cleanup illegal snapshot
> >
> > On 30/09/16 05:47 +, Markus Stockhausen wrote:
> > >Hi,
> > >
> > >if a OVirt snapshot is illegal we might have 2 situations.
> > >
> > >1) qemu is still using it - lsof shows qemu access to the base raw and
> the
> > >delta qcow2 file. -> E.g. a previous live merge failed. In the past we
> > >successfully solved that situation by setting the status of the delta
> image
> > >in the database to OK.
> > >
> > >2) qemu is no longer using it. lsof shows qemu access only to the the
> base
> > >raw file -> E.g. a previous live merge succeded in qemu but Ovirt did
> not
> > >recognize.
> > >
> > >How to clean up the 2nd situation?
> >
> > It seems that you will have to first clean up the engine database to
> > remove references to the snapshot that no longer exists.  Then you
> > will need to remove the unused qcow2 volume.
> >
> > Unfortunately I cannot provide safe instructions for modifying the
> > database but maybe Ala Hino (added to CC:) will be able to help with
> > that.
>
> Do you have some tip for me?
>
> >
> > Once you have fixed the DB you should be able to delete the volume
> > using a vdsm verb on the SPM host:
> >
> > # vdsClient -s 0 deleteVolume
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot deletion failure

2016-09-28 Thread Ala Hino
Hi Marcelo,

This error indicates that the image you are trying to delete doesn't exist.
When do you get this error? When running Live Merge or Live Storage
Migration (LSM)?

Please note that we fixed an issue in the LSM area where the VM went down while
we tried to delete the auto-generated snapshot. See
https://bugzilla.redhat.com/1368203.

-Ala

On Wed, Sep 28, 2016 at 3:07 PM, Marcelo Leandro 
wrote:

> Hello, I have the same problem but i use the ovirt version
> 4.0.4.4-1.el7.centos .
>
> My logs.
>
>
> Engine.log
>
> 2016-09-28 08:18:00,947 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmJobsMonitoring]
> (DefaultQuartzScheduler1) [7013b545] VM Job 
> [4dd2b885-2452-4520-b20a-928edea50836]:
> In progress (no change)
> 2016-09-28 08:18:08,010 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (default task-54) [5a27e364] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[eb73a967-1908-46e9-9de2-9706bf29643a= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2016-09-28 08:18:09,169 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (default task-54) [5a27e364] Running command: RemoveSnapshotCommand
> internal: false. Entities affected :  ID: eb73a967-1908-46e9-9de2-9706bf29643a
> Type: VMAction group MANIPULATE_VM_SNAPSHOTS with role type USER
> 2016-09-28 08:18:09,185 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (default task-54) [5a27e364] Lock freed to object
> 'EngineLock:{exclusiveLocks='[eb73a967-1908-46e9-9de2-9706bf29643a= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2016-09-28 08:18:09,265 INFO  [org.ovirt.engine.core.bll.snapshots.
> RemoveSnapshotSingleDiskLiveCommand] (pool-7-thread-3) [354939e9] Running
> command: RemoveSnapshotSingleDiskLiveCommand internal: true. Entities
> affected :  ID: ---- Type: Storage
> 2016-09-28 08:18:09,302 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (default task-54) []
> Correlation ID: 5a27e364, Job ID: 661f8f55-30c6-4735-bb0a-fdcd3ac4004a,
> Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Backup the VM'
> deletion for VM 'SRV-ActPrint' was initiated by admin@internal.
> 2016-09-28 08:18:10,197 INFO  [org.ovirt.engine.core.bll.snapshots.
> RemoveSnapshotSingleDiskLiveCommand] (DefaultQuartzScheduler4) [354939e9]
> Executing Live Merge command step 'EXTEND'
> 2016-09-28 08:18:10,254 INFO  [org.ovirt.engine.core.bll.MergeExtendCommand]
> (pool-7-thread-7) [57c94fc3] Running command: MergeExtendCommand internal:
> true. Entities affected :  ID: 6e5cce71-3438-4045-9d54-607123e0557e Type:
> Storage
> 2016-09-28 08:18:10,255 INFO  [org.ovirt.engine.core.bll.MergeExtendCommand]
> (pool-7-thread-7) [57c94fc3] Refreshing volume 
> c08d86ed-46f1-44bc-9476-0cc2c6aed367
> on host f22d87b9-4449-4a71-8529-58095dd81b6f
> 2016-09-28 08:18:10,275 INFO  [org.ovirt.engine.core.bll.RefreshVolumeCommand]
> (pool-7-thread-7) [47625ba4] Running command: RefreshVolumeCommand
> internal: true.
> 2016-09-28 08:18:10,275 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.RefreshVolumeVDSCommand]
> (pool-7-thread-7) [47625ba4] START, RefreshVolumeVDSCommand(HostName =
> Host04, RefreshVolumeVDSCommandParameters:{runAsync='true',
> hostId='f22d87b9-4449-4a71-8529-58095dd81b6f',
> storagePoolId='77e24b20-9d21-4952-a089-3c5c592b4e6d',
> storageDomainId='6e5cce71-3438-4045-9d54-607123e0557e',
> imageGroupId='9fc0b2f6-d786-4a21-8f5c-b22b23df4aaa',
> imageId='c08d86ed-46f1-44bc-9476-0cc2c6aed367'}), log id: 77b9ded4
> 2016-09-28 08:18:11,245 INFO  [org.ovirt.engine.core.bll.
> ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler8)
> [354939e9] Command 'RemoveSnapshot' (id: 
> '18613dc9-d8c8-45c4-9fbe-a298e701ead5')
> waiting on child command id: 'fd866748-3211-4d48-9908-12eb6078a69e' 
> type:'RemoveSnapshotSingleDiskLive'
> to complete
> 2016-09-28 08:18:11,810 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.RefreshVolumeVDSCommand]
> (pool-7-thread-7) [47625ba4] FINISH, RefreshVolumeVDSCommand, log id:
> 77b9ded4
> 2016-09-28 08:18:11,810 INFO  [org.ovirt.engine.core.bll.RefreshVolumeCommand]
> (pool-7-thread-7) [47625ba4] Successfully refreshed volume
> 'c08d86ed-46f1-44bc-9476-0cc2c6aed367' on host 'f22d87b9-4449-4a71-8529-
> 58095dd81b6f'
> 2016-09-28 08:18:12,267 INFO  [org.ovirt.engine.core.bll.snapshots.
> RemoveSnapshotSingleDiskLiveCommand] (DefaultQuartzScheduler10)
> [354939e9] Waiting on Live Merge command step 'EXTEND' to finalize
> 2016-09-28 08:18:14,294 INFO  [org.ovirt.engine.core.bll.snapshots.
> RemoveSnapshotSingleDiskLiveCommand] (DefaultQuartzScheduler9) [354939e9]
> Executing Live Merge command step 'MERGE'
> 2016-09-28 08:18:14,347 INFO  [org.ovirt.engine.core.bll.MergeCommand]
> (pool-7-thread-2) [15ef379f] Running command: MergeCommand internal: true.
> Entities affected :  ID: 6e5cce71-3438-4045-9d54-607123e0557e Type:
> Storage
> 2016-09-28 

Re: [ovirt-users] Live migraton failed

2016-04-08 Thread Ala Hino
My bad, I was referring to live merge of the snapshot rather than live
migration of the VM.

On Fri, Apr 8, 2016 at 2:08 PM, Marcin Michta <marcin.mic...@codilime.com>
wrote:

> By merge you mean retry migration?
>
> - Marcin
>
>
> On 08.04.2016 12:40, Ala Hino wrote:
>
> Thank you.
>
> Before digging into the logs, I would like to check what happens when you
> retry the merge - can you please retry and send the logs?
>
> -Ala
>
> On Fri, Apr 8, 2016 at 1:14 PM, Marcin Michta <marcin.mic...@codilime.com>
> wrote:
>
>> Hi Ala,
>>
>> There are logs:
>>
>> engine.log:
>>
>> https://drive.google.com/file/d/0B3gTEK3v8F4bazZGLWZVdlUtVFU/view?pref=2=1
>>
>> vdsm-dest.log:
>>
>> https://drive.google.com/file/d/0B3gTEK3v8F4bRnhmNjhlQ1FNY2M/view?pref=2=1
>>
>> vdsm-source.log
>>
>> https://drive.google.com/file/d/0B3gTEK3v8F4bMmlQelZSejlBY2s/view?pref=2=1
>>
>>
>>
>> On 08.04.2016 10:30, Ala Hino wrote:
>>
>> Hi Marcin,
>>
>> Please attach engine log (/var/log/ovirt-engine/engine.log)
>> and vdsm log (/var/log/vdsm/vdsm.log).
>>
>> Thank you,
>> Ala
>>
>> On Fri, Apr 8, 2016 at 11:09 AM, Marcin Michta <
>> <marcin.mic...@codilime.com>marcin.mic...@codilime.com> wrote:
>>
>>> Hi all,
>>>
>>> I have problem with live migration which failed every time. But some
>>> time ago it worked.
>>> Which logs should I attach to get any help?
>>>
>>> oVirt 3.6.3.4 at CentOS 7.2
>>>
>>> Thanks in advance!
>>> Marcin
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>> --
>>
>>
>> --
>> Marcin Michta
>> Systems & Network Administrator
>>
>> [image: codilime_logo]
>> -
>> E: <marcin.mic...@codilime.com>marcin.mic...@codilime.com
>> -
>>
>> CodiLime Sp. z o.o. - Ltd. company with its registered office in Poland,
>> 01-167 Warsaw, ul. Zawiszy 14/97. Registered by The District Court for the
>> Capital City of Warsaw, XII Commercial Department of the National Court
>> Register. Entered into National Court Register under No. KRS 388871.
>> Tax identification number (NIP) 5272657478. Statistical number (REGON)
>> 142974628.
>>
>> -
>>
>> The information in this email is confidential and may be legally
>> privileged, it may contain information that is confidential in CodiLime Sp.
>> z o.o. It is intended solely for the addressee. Any access to this email by
>> third parties is unauthorized. If you are not the intended recipient of
>> this message, any disclosure, copying, distribution or any action
>> undertaken or neglected in reliance thereon is prohibited and may result in
>> your liability for damages.
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migraton failed

2016-04-08 Thread Ala Hino
Thank you.

Before digging into the logs, I would like to check what happens when you
retry the merge - can you please retry and send the logs?

-Ala

On Fri, Apr 8, 2016 at 1:14 PM, Marcin Michta <marcin.mic...@codilime.com>
wrote:

> Hi Ala,
>
> There are logs:
>
> engine.log:
>
> https://drive.google.com/file/d/0B3gTEK3v8F4bazZGLWZVdlUtVFU/view?pref=2=1
>
> vdsm-dest.log:
>
> https://drive.google.com/file/d/0B3gTEK3v8F4bRnhmNjhlQ1FNY2M/view?pref=2=1
>
> vdsm-source.log
>
> https://drive.google.com/file/d/0B3gTEK3v8F4bMmlQelZSejlBY2s/view?pref=2=1
>
>
>
> On 08.04.2016 10:30, Ala Hino wrote:
>
> Hi Marcin,
>
> Please attach engine log (/var/log/ovirt-engine/engine.log)
> and vdsm log (/var/log/vdsm/vdsm.log).
>
> Thank you,
> Ala
>
> On Fri, Apr 8, 2016 at 11:09 AM, Marcin Michta <marcin.mic...@codilime.com
> > wrote:
>
>> Hi all,
>>
>> I have problem with live migration which failed every time. But some time
>> ago it worked.
>> Which logs should I attach to get any help?
>>
>> oVirt 3.6.3.4 at CentOS 7.2
>>
>> Thanks in advance!
>> Marcin
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> --
>
>
> --
> Marcin Michta
> Systems & Network Administrator
>
> [image: codilime_logo]
> -
> E: <marcin.mic...@codilime.com>marcin.mic...@codilime.com
> -
>
> CodiLime Sp. z o.o. - Ltd. company with its registered office in Poland,
> 01-167 Warsaw, ul. Zawiszy 14/97. Registered by The District Court for the
> Capital City of Warsaw, XII Commercial Department of the National Court
> Register. Entered into National Court Register under No. KRS 388871.
> Tax identification number (NIP) 5272657478. Statistical number (REGON)
> 142974628.
>
> -
>
> The information in this email is confidential and may be legally
> privileged, it may contain information that is confidential in CodiLime Sp.
> z o.o. It is intended solely for the addressee. Any access to this email by
> third parties is unauthorized. If you are not the intended recipient of
> this message, any disclosure, copying, distribution or any action
> undertaken or neglected in reliance thereon is prohibited and may result in
> your liability for damages.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migraton failed

2016-04-08 Thread Ala Hino
Hi Marcin,

Please attach engine log (/var/log/ovirt-engine/engine.log)
and vdsm log (/var/log/vdsm/vdsm.log).

Thank you,
Ala

On Fri, Apr 8, 2016 at 11:09 AM, Marcin Michta 
wrote:

> Hi all,
>
> I have problem with live migration which failed every time. But some time
> ago it worked.
> Which logs should I attach to get any help?
>
> oVirt 3.6.3.4 at CentOS 7.2
>
> Thanks in advance!
> Marcin
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Ala Hino
Hi Marcelo,

Is it cold (the VM is down) or live (the VM is up) merge (snapshot
deletion)?
What version are you running?
Can you please share engine and vdsm logs?

Please note that at some point we try to verify that the image was removed by
running getVolumeInfo, hence the 'volume does not exist' error is expected. The
thing is, you say that the volume does exist.
Can you run the following command on the host:

vdsClient -s 0 getVolumeInfo

Thank you,
Ala


On Sat, Mar 12, 2016 at 3:35 PM, Marcelo Leandro 
wrote:

> I see the log error:
> Mar 12, 2016 10:33:40 AM
> VDSM Host04 command failed: Volume does not exist:
> (u'948d0453-1992-4a3c-81db-21248853a88a',)
>
> but the volume exists:
> 948d0453-1992-4a3c-81db-21248853a88a
>
> 2016-03-12 10:10 GMT-03:00 Marcelo Leandro :
> > Good morning
> >
> > I have a doubt: when I do a snapshot, a new LV is generated; however,
> > when I delete this snapshot the LV is not removed. Is that right?
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> > 3fba372c-4c39-4843-be9e-b358b196331d
> b47f58e0-d576-49be-b8aa-f30581a0373a
> > 5097df27-c676-4ee7-af89-ecdaed2c77be
> c598bb22-a386-4908-bfa1-7c44bd764c96
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
> > total 0
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
> > 3fba372c-4c39-4843-be9e-b358b196331d ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
> > 5097df27-c676-4ee7-af89-ecdaed2c77be ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12
> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> > lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30
> > b47f58e0-d576-49be-b8aa-f30581a0373a ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01
> > c598bb22-a386-4908-bfa1-7c44bd764c96 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
> >
> >
> >
> > disks snapshot:
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 3fba372c-4c39-4843-be9e-b358b196331d
> > image: 3fba372c-4c39-4843-be9e-b358b196331d
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> >
> > disk base:
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > b47f58e0-d576-49be-b8aa-f30581a0373a
> > image: b47f58e0-d576-49be-b8aa-f30581a0373a
> > file format: raw
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> >
> >
> > Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disabling migration when there is a problem with storage

2016-02-03 Thread Ala Hino
Hi Mark,

You can disable VM migration:
When editing a VM, in the 'Host' tab there is a 'Migration option' drop-down
where you can choose 'Do not allow migration' or 'Allow manual migration only'.
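
The same setting can also be scripted. A sketch with the v4 Python SDK (so for 4.x engines rather than the 3.5 setup below; the connection details and VM name are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

# USER_MIGRATABLE maps to "Allow manual migration only";
# PINNED maps to "Do not allow migration".
vms_service.vm_service(vm.id).update(
    types.Vm(
        placement_policy=types.VmPlacementPolicy(
            affinity=types.VmAffinity.USER_MIGRATABLE,
        ),
    ),
)

connection.close()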

Hope this helps.

Regards,
Ala

- Original Message -
> From: "Mark Steele" 
> To: users@ovirt.org
> Sent: Wednesday, February 3, 2016 4:39:43 PM
> Subject: [ovirt-users] Disabling migration when there is a problem with   
> storage
> 
> Hello,
> 
> I'd like to know if there is a way to configure ovirt ( oVirt Engine Version:
> 3.5.0.1-1.el6) to NOT migrate VM's when there is a problem with attached
> storage.
> 
> Scenario - our storage array that hosts all our VM disks failed. When that
> happened, ovirt attempted to migrate running VM's from one host to another.
> Unfortunately this left many of the VM's in a 'migrating to' state that
> could only be resolved by restarting the HV's
> 
> I'd prefer to have ovirt simply hang the VM's until disk IO returns.
> 
> Is this possible?
> 
> ***
> Mark Steele
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook:
> https://www.facebook.com/telvue
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] virtual Disk

2015-12-22 Thread Ala Hino
Hello,

Are you referring to the VM disk size?
If so (and I am referring here to the UI):
I just tried both options and found that even though it is possible to change
the size field, the size isn't changed. Probably this field should be grayed
out and only display the current size.
However, the "Extend size by(GB)" field does extend the disk size.

- Original Message -
> From: "Taste-Of-IT" 
> To: users@ovirt.org
> Sent: Monday, December 21, 2015 11:06:49 PM
> Subject: [ovirt-users] virtual Disk
> 
> Hello,
> I am testing oVirt 3.6 as a Self-Hosted Engine and created a virtual machine.
> Now I want to change the size of the disk, and found both the possibility to
> change the size of the disk and a field to grow the disk. In the oVirt
> manual it is described to change the value of the grow field. My
> question is: what is the difference and what are the results of each? E.g.
> what happens if I only change the disk size from 8 to 10? Is it the same
> as changing the grow size from 0 to 2?
> 
> Thanks for a technical explanation.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users