Re: [ovirt-users] oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.

2017-07-13 Thread Benny Zlotnik
Hi,

Can you please attach full engine and vdsm logs?
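If it helps, they are collected from fixed default locations (a sketch, assuming a standard installation):

  # on the engine machine
  tar czf engine-logs.tar.gz /var/log/ovirt-engine/engine.log*

  # on each host involved in the snapshot removal
  tar czf vdsm-logs-$(hostname -s).tar.gz /var/log/vdsm/vdsm.log*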

On Thu, Jul 13, 2017 at 1:07 AM, Devin Acosta  wrote:
> We are running a fresh install of oVirt 4.1.3, using iSCSI; the VM in
> question has multiple disks (4 to be exact). It snapshotted OK while on
> iSCSI, however when I went to delete the single snapshot that existed it went
> into a Locked state and never came back. The deletion has been running for well
> over an hour, and since the snapshot is less than 12 hours old I am not
> convinced it is really doing anything.
>
> I have seen that some Googling indicates there might be known
> issues with iSCSI/block storage and multiple-disk snapshots.
>
> In the logs on the engine it shows:
>
> 2017-07-12 21:59:42,473Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 21:59:52,480Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 21:59:52,483Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:02,490Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 22:00:02,493Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:12,498Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 22:00:12,501Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:22,508Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 22:00:22,511Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
>
> This is what I saw on the SPM when I grepped the snapshot ID.
>
> 2017-07-12 14:22:18,773-0700 INFO  (jsonrpc/6) [vdsm.api] START
> createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
> spUUID=u'0001-0001-0001-0001-0311',
> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
> volFormat=4, preallocate=2, diskType=2,
> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', desc=u'',
> srcImgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
> srcVolUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', initialSize=None)
> from=:::10.4.64.7,60016, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
> (api:46)
> 2017-07-12 14:22:19,095-0700 WARN  (tasks/6) [root] File:
> /rhev/data-center/0001-0001-0001-0001-0311/0c02a758-4295-4199-97de-b041744b3b15/images/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
> already removed (utils:120)
> 2017-07-12 14:22:19,096-0700 INFO  (tasks/6) [storage.Volume] Request to
> create snapshot
> 6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac139

Re: [ovirt-users] oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.

2017-07-16 Thread Benny Zlotnik
[Adding ovirt-users]

On Sun, Jul 16, 2017 at 12:58 PM, Benny Zlotnik  wrote:
> We can see a lot of related errors in the engine log but we are unable
> to correlate them to the vdsm log. Do you have more hosts? If yes, please
> attach their logs as well.
> And just to be sure: you were attempting to perform a cold merge?
>
> On Fri, Jul 14, 2017 at 7:32 PM, Devin Acosta  wrote:
>>
>> You can get my logs from:
>>
>> https://files.linuxstack.cloud/s/NjoyMF11I38rJpH
>>
>> They were a little too big to attach to this e-mail. I would like to know if
>> this is similar to the bug that Richard indicated is a possibility.
>>
>> --
>>
>> Devin Acosta
>> Red Hat Certified Architect, LinuxStack
>> 602-354-1220 || de...@linuxguru.co
>>
>> On July 14, 2017 at 9:18:08 AM, Devin Acosta (de...@pabstatencio.com) wrote:
>>
>> I have attached the logs.
>>
>>
>>
>> --
>>
>> Devin Acosta
>> Red Hat Certified Architect, LinuxStack
>> 602-354-1220 || de...@linuxguru.co
>>
>> On July 13, 2017 at 9:22:03 AM, richard anthony falzini
>> (richardfalz...@gmail.com) wrote:
>>
>> Hi,
>> I have the same problem with Gluster.
>> This is a bug that I opened:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1461029
>> In the bug I used a single-disk VM, but I started to notice the problem with
>> multiple-disk VMs.
>>
>>
>> 2017-07-13 0:07 GMT+02:00 Devin Acosta :
>>>
>>> [snip: original message and engine log already quoted in full above]

Re: [ovirt-users] problem while moving/copying disks: vdsm low level image copy failed

2017-07-30 Thread Benny Zlotnik
Hi,

Can you please provide the versions of vdsm, qemu and libvirt?
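To narrow down whether a read error comes from the gluster mount or from qemu-img itself, the failing convert can also be re-run by hand with the same cache flags vdsm uses (a sketch; /path/to/source/image is a placeholder for the image path shown in the traceback below, and -T none reproduces the direct-I/O read path that a plain dd does not exercise):

  # read the image the same way qemu-img did (O_DIRECT on the source)
  dd if=/path/to/source/image of=/dev/null bs=1M iflag=direct

  # or re-run the convert to a throwaway target with enough free space
  qemu-img convert -p -t none -T none -f raw /path/to/source/image -O raw /tmp/probe.raw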

On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson  wrote:

> Hello,
>
> We get this error message while moving or copying some of the disks on
> our main cluster running 4.1.2 on centos7
>
> This is shown in the engine:
> VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low level Image
> copy failed
>
> I can copy it inside the host, and I can use dd to copy it. I haven't tried
> running qemu-img manually yet.
>
>
> This is from vdsm.log on the host:
> 2017-07-28 09:07:22,741+0200 ERROR (tasks/6) [root] Job u'c82d4c53-
> 3eb4-405e-a2d5-c4c77519360e' failed (jobs:217)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 154, in
> run
> self._run()
>   File "/usr/share/vdsm/storage/sdm/api/copy_data.py", line 88, in _run
> self._operation.wait_for_completion()
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 329, in
> wait_for_completion
> self.poll(timeout)
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 324, in
> poll
> self.error)
> QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15',
> '/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3',
> '/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
> 'raw', u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-
> 43
> 5b-b90d-50bfbf2e8de7/images/750f4184-b852-4b00-94fc-
> 476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2', '-O', 'raw',
> u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-
> 4587-bb7c-dd00d52010d5/images/750f4184-b852-4b00-94fc-
> 476f3f5b93c7/3fe43487-3302-4b34-865
> a-07c5c6aedbf2'], ecode=1, stdout=, stderr=qemu-img: error while
> reading sector 12197886: No data available
> , message=None
>
>
> The storage domains are all based on gluster. The storage domains that
> we see this on is configured as dispersed volumes.
>
> I found a way to "fix" the problem: run dd if=/dev/vda of=/dev/null bs=1M
> inside the virtual guest. After that we can copy an image or use storage
> live migration.
>
> Is this a gluster problem or a vdsm problem? Or could it be something
> with qemu-img?
>
> /Johan
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] problem while moving/copying disks: vdsm low level image copy failed

2017-07-31 Thread Benny Zlotnik
Forgot to add: there is a bug for this issue[1].
Please add your gluster mount and brick logs to the bug entry.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1458846
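In case it helps locating them: the mount (client) logs are on the hypervisors and the brick logs on the gluster servers (a sketch, assuming the default glusterfs log locations):

  # on the oVirt hosts: one log per mounted gluster storage domain
  ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log

  # on the gluster servers: one log per brick
  ls /var/log/glusterfs/bricks/*.log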

On Sun, Jul 30, 2017 at 3:02 PM, Johan Bernhardsson  wrote:

> OS Version:
> RHEL - 7 - 3.1611.el7.centos
> OS Description:
> CentOS Linux 7 (Core)
> Kernel Version:
> 3.10.0 - 514.16.1.el7.x86_64
> KVM Version:
> 2.6.0 - 28.el7_3.9.1
> LIBVIRT Version:
> libvirt-2.0.0-10.el7_3.9
> VDSM Version:
> vdsm-4.19.15-1.el7.centos
> SPICE Version:
> 0.12.4 - 20.el7_3
> GlusterFS Version:
> glusterfs-3.8.11-1.el7
> CEPH Version:
> librbd1-0.94.5-1.el7
>
> qemu-img version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.9.1), Copyright (c)
> 2004-2008 Fabrice Bellard
>
> This is what i have on the hosts.
>
> /Johan
>
> On Sun, 2017-07-30 at 13:56 +0300, Benny Zlotnik wrote:
>
> [snip: earlier messages quoted in full above]
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] disk attachment to VM

2017-09-05 Thread Benny Zlotnik
Hi,

Look at [1]; however, there are caveats, so be sure to pay close attention to
the warning section.

[1] - https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/localdisk/README
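For completeness, the hook ships as a separate vdsm package; a rough installation sketch (assuming the package name vdsm-hook-localdisk; the README above describes the required LVM setup and the VM custom property):

  yum install -y vdsm-hook-localdisk
  systemctl restart vdsmd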

On Tue, Sep 5, 2017 at 4:52 PM, Benny Zlotnik  wrote:

> [snip: same reply quoted again]
>
>
> On Tue, Sep 5, 2017 at 4:40 PM, Erekle Magradze <
> erekle.magra...@recogizer.de> wrote:
>
>> Hey Guys,
>> Is there a way to attach an SSD directly to the oVirt VM?
>> Thanks in advance
>> Cheers
>> Erekle
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot delete the snapshot

2017-09-05 Thread Benny Zlotnik
Accidentally replied without cc-ing the list

On Sun, Sep 3, 2017 at 12:21 PM, Benny Zlotnik  wrote:

> Hi,
>
> Could you provide full engine and vdsm logs?
>
> On Sat, Sep 2, 2017 at 4:23 PM, wai chun hung 
> wrote:
>
>> Dear all,
>> This is the first time I have asked a question here. Thank you for taking
>> the time to read it.
>>
>> Yesterday, I wanted to delete a snapshot in the oVirt Engine Web
>> Administration portal. However, the deletion is taking very long and is still
>> in progress.
>>
>> Due to the "snapshot error", I cannot start the virtual machine, which I
>> had shut down earlier.
>>
>> Does anyone have an idea how to fix it?
>> The following information may be useful for debugging.
>>
>> On web administration:
>> Removing Snapshot <snapshot id>  Sep 1, 2017 6:06:58 PM  N/A
>>   Validating  Sep 1, 2017 6:06:58 PM  until Sep 1, 2017 6:06:58 PM
>>   Executing  Sep 1, 2017 6:06:58 PM  N/A
>>
>> In the engine log:
>> 2017-09-02 20:57:10,700+08 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>> (DefaultQuartzScheduler6) [70471886-a5cb-404c-80db-b8469092aa8e] Command 'RemoveSnapshot' (id:
>> '78ee1e62-26ea-403d-b518-9e3d3be995c2') waiting on child command id:
>> '5f2e8197-704f-453c-bea8-74186e5ca95c' type:'RemoveSnapshotSingleDiskLive'
>> to complete
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot create snapshot_illegal disk

2017-09-06 Thread Benny Zlotnik
Hi Terry,

The disk in the snapshot appears to be in an illegal state. How long has it
been like this? Do you have logs from when it happened?

On Tue, Sep 5, 2017 at 8:52 PM, Terry hey  wrote:

> Dear all,
> Thank you for taking the time to read this post.
> On the same host there are four virtual machines. There is only one virtual
> machine for which I cannot create a snapshot for backup purposes. I have no idea
> what is happening or how to solve it. Would you help me and give me some
> suggestions?
>
> Please kindly check the following links. I think this information,
> including two pictures of the oVirt web manager, engine.log and vdsm.log, will
> be useful for finding the issue.
>
> *engine.log*
> https://drive.google.com/open?id=0B8dxamAkVEYdeFBOdlFUbnY1cFU
> *vdsm.log*
> https://drive.google.com/open?id=0B8dxamAkVEYddFh6dzc0VkVRMk0
> *Two pictures of the oVirt web manager:*
> *Before created the snapshot:*
> https://drive.google.com/open?id=0B8dxamAkVEYda2d0SzVvc3ZMTFU
> *After created the snapshot:*
> https://drive.google.com/open?id=0B8dxamAkVEYdV3lUVUJ0Zy1LeGc
>
> I am really looking forward to your reply. Thank you again for your time
> and assistance.
>
> Regards,
> Terry
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re: Low disk space on Storage

2019-11-12 Thread Benny Zlotnik
This was fixed in 4.3.6; I suggest upgrading.
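The usual minor-version upgrade flow on the engine machine is roughly (a sketch; take a backup with engine-backup first):

  engine-upgrade-check          # check whether an engine upgrade is available
  yum update ovirt\*setup\*     # update the setup packages
  engine-setup                  # run the actual engine upgrade
  yum update                    # then update the remaining packages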

On Tue, Nov 12, 2019 at 12:45 PM  wrote:
>
> Hi,
>
> I'm running oVirt version 4.3.4.3-1.el7.
> My filesystem disk has 30 GB of free space.
> I cannot start a VM due to a storage I/O error.
> When trying to move the disk to another storage domain I get this error:
> Error while executing action: Cannot move Virtual Disk. Low disk space on 
> Storage Domain DATA4.
>
> The sum of the pre-allocated disks equals the total size of the storage domain.
>
> Any idea what I can do to move a disk to another storage domain?
>
> Many thanks
>
> --
> 
> Jose Ferradeira
> http://www.logicworks.pt
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQFPGUPB43I7OO7FXEPLG4XSG5X2INLJ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q4TYQ3CBVTHITX7PSVHM6QBYIEGZKT6E/


[ovirt-users] Re: Current status of Ceph support in oVirt (2019)?

2019-11-24 Thread Benny Zlotnik
The current plan is to integrate Ceph via the cinderlib integration[1]
(currently in tech preview). Because we still have no packaging
ready, some manual installation steps are required, but there is
no need to install and configure OpenStack/Cinder.


>1. Does this require you to install OpenStack, or will a vanilla Ceph 
>installation work?
a vanilla installation will work with cinderlib

>2. Is it possible to deploy Ceph on the same nodes that run oVirt? (i.e. is a 
>3-node oVirt + Ceph cluster possible?)
I haven't tried it, but it should be possible.

>3. Is there any monitoring/management of Ceph from within oVirt? (Guessing no?)
No, cinderlib is storage agnostic

>4. Are all the normal VM features working yet, or is this planned?
Most features (starting/stopping/snapshots/live migration) are
working, but not all are fully tested (specifically snapshots).

>5. Is making Ceph a first-class citizen (like Gluster) on oVirt on the roadmap?
Not at the moment; maybe once the cinderlib integration matures and we
have more feedback and users for the feature.

[1] 
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
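As a quick sanity check before wiring a vanilla Ceph cluster into the tech-preview integration, it is worth confirming the hosts can reach the cluster at all (a sketch; the pool name ovirt-volumes is just an example):

  ceph -s                     # cluster health as seen from an oVirt host
  rbd -p ovirt-volumes ls     # list RBD images in the pool you plan to use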


On Sun, Nov 24, 2019 at 12:41 PM  wrote:
>
> Hi,
>
> I currently have a 3-node HA cluster running Proxmox (with integrated Ceph). 
> oVirt looks pretty neat, however, and I'm excited to check it out.
>
> One of the things I love about Proxmox is the integrated Ceph support.
>
> I saw on the mailing lists that there is some talk of Ceph support earlier, 
> but it was via OpenStack/Cinder. What exactly does this mean?
>
> 1. Does this require you to install OpenStack, or will a vanilla Ceph 
> installation work?
> 2. Is it possible to deploy Ceph on the same nodes that run oVirt? (i.e. is a 
> 3-node oVirt + Ceph cluster possible?)
> 3. Is there any monitoring/management of Ceph from within oVirt? (Guessing 
> no?)
> 4. Are all the normal VM features working yet, or is this planned?
> 5. Is making Ceph a first-class citizen (like Gluster) on oVirt on the 
> roadmap?
>
> Thanks,
> Victor
>
> https://www.reddit.com/r/ovirt/comments/ci38zp/ceph_rbd_support_in_ovirt_for_storing_vm_disks/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QMPXXQQOMKEQJIJVXRUYKTSHQBRZPBQ6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QR5FBGAEHOSRV7AEL5HUFSR4JZW2P3I6/


[ovirt-users] Re: oVirt Admin Portal unaccessible via chrome (firefox works)

2019-11-24 Thread Benny Zlotnik
Works fine for me. Anything interesting in the browser console?

On Sat, Nov 23, 2019 at 7:04 PM Strahil Nikolov  wrote:
>
> Hello Community,
>
> Chrome keeps loading indefinitely on my openSUSE 15.1 (and on my Android
> phone), while Firefox has no issues.
> Can someone test accessing the oVirt Admin Portal via Chrome on x86_64 Linux?
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S6ET7C74PFOCKIFXPXB4PQDA6LHMDEC4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L7T4R6HYQ4IZQUZUKND3RX4QN4I2HDAD/


[ovirt-users] Re: Current status of Ceph support in oVirt (2019)?

2019-12-03 Thread Benny Zlotnik
> We are using Ceph with oVirt (via standalone Cinder) extensively in a
> production environment.
> I tested oVirt cinderlib integration in our dev environment, gave some
> feedback here on the list and am currently waiting for the future
> development. IMHO cinderlib in oVirt is currently not fit for production
> use, I think this matches your assessment.

> What can we do to help advance Ceph integration in oVirt?
> What are the plans for oVirt 4.4?
> Will standalone Cinder storage domains still be supported in oVirt 4.4?
> Will there be a migration scenario from standalone Cinder to cinderlib?
TBH, to help, bug reports are probably the most useful. Feedback
from users with "real world" setups
and usage will help us improve. As stated before, our biggest issue at
the moment is packaging; once that is handled
it will be significantly easier to test and develop.
Standalone Cinder domains were never actually supported (they never left
tech preview). We do not have immediate plans
for an upgrade path, but you can definitely submit an RFE for this.

> Accidentally just yesterday I had an incident in our test environment
> where migration of a VM with MANAGED_BLOCK_STORAGE (=cinderlib) disks
> failed (causes are known and unrelated to cinderlib). Restarting the VM
> failed because of leftover rbdmapped devices. This is similar to the
> case I reported in https://bugzilla.redhat.com/show_bug.cgi?id=1697496.
> I don't clearly see if this fixed or not. Shall I report my recent problem?
We added better cleanup on migration failure to handle the bug.
There is another issue, which was not known at the time, where multipath
prevents unmapping rbd devices[1], and it might be what you
experienced.
This can be worked around manually by blacklisting rbd devices in the
multipath configuration (sketched below), but once the bug is fixed vdsm will
handle configuring it.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1755801
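A minimal sketch of such a blacklist (assuming a multipath version that reads drop-in files from /etc/multipath/conf.d/; otherwise the stanza goes into /etc/multipath.conf):

  # /etc/multipath/conf.d/rbd.conf
  blacklist {
      devnode "^rbd[0-9]*"
  }

Then reload the daemon with systemctl reload multipathd.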
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6RIHZCN3HT4Z3YUOFSF7A2SJHKQDHPYP/


[ovirt-users] Re: VM Import Fails

2019-12-23 Thread Benny Zlotnik
Please attach engine and vdsm logs and specify the versions
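A quick way to gather the versions (a sketch):

  # on each host
  rpm -qa | egrep -i '^(vdsm|qemu-kvm|libvirt)' | sort
  # on the engine machine
  rpm -q ovirt-engine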

On Mon, Dec 23, 2019 at 10:08 AM Vijay Sachdeva
 wrote:
>
> Hi All,
>
>
>
> I am trying to import a VM from export domain, but import fails.
>
>
>
> Setup:
>
>
>
> Source DC has a NFS shared storage with two Hosts
> Destination DC has a local storage configured using LVM.
>
>
>
> Note: Used common export domain to export the VM.
>
>
>
>
>
> Anyone, please help me on this case to understand why it’s failing.
>
>
>
> Thanks
>
> Vijay Sachdeva
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A4NTGBIKTYGJVGNAYOFYRBTXFMV3GRQU/


[ovirt-users] Re: what is "use host" field in storage domain creation??

2019-12-30 Thread Benny Zlotnik
One host has to connect and set up the storage (mount the path, create
the files, etc.), so you are given the choice of which host to use for this.

On Mon, Dec 30, 2019 at 11:07 AM  wrote:
>
> hello and happy new year~
>
> I am wondering about the role of the "use host" field in storage domain creation.
>
> https://www.ovirt.org/documentation/install-guide/chap-Configuring_Storage.html
>
> The above link says all communication to the storage domain goes through the "use host".
> But I can't understand why everything would inefficiently pass through that "use host" even
> though every host can directly access all domains.
>
> It seems like I am misunderstanding something.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KKZHTE3CIV6VIZAN7762GCVHEG3VS2J6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGE4LI4NZVBOMWUOJW6JMQYKI5HRB54J/


[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread Benny Zlotnik
Did you change the volume metadata to LEGAL on the storage as well?
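For a file (NFS) domain the legality also lives in the volume's .meta file next to the image, so it can be checked directly on the storage (a sketch, using the UUIDs from the log below; the mount directory prefix differs per setup):

  grep LEGALITY /rhev/data-center/mnt/*/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/*.meta

Both the engine database flag (imagestatus) and the storage-side LEGALITY have to say legal before prepareImage will succeed.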


On Thu, Jan 9, 2020 at 2:19 PM David Johnson 
wrote:

> We had a drive in our NAS fail, but afterwards one of our VM's will not
> start.
>
> The boot drive on the VM is (so near as I can tell) the only drive
> affected.
>
> I confirmed that the disk images (active and snapshot) are both valid with
> qemu.
>
> I followed the instructions at
> https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
> identify the snapshot images that were marked "invalid" and marked them as
> valid.
>
> update images set imagestatus=1 where imagestatus=4;
>
>
>
> Log excerpt from attempt to start VM:
> 2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
> prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
> imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
> leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
> from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
> 2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
> task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
> 2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
> in prepareImage
> raise se.prepareIllegalVolumeError(volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
> 2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> aborting: Task is aborted: "Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)" - code 227 (task:1181)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
> FINISH prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
> (vm:949)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
> _run
> self._devices = self._make_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
> _make_devices
> disk_objs = self._perform_host_local_adjustment()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
> _perform_host_local_adjustment
> self._preparePathsForDrives(disk_params)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
> _preparePathsForDrives
> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
> prepareVolumePath
> raise vm.VolumeError(drive)
> VolumeError: Bad volume specification {'address': {'bus': '0',
> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
> 'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
> '/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
> 'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
> 2020-01-09 02:18:44,934-0600 INFO  (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') Changed state to Down: Bad
> volume specification {'address': {'bus': '0', 'controller': '0', 'type':
> 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',

[ovirt-users] Re: Recover VM if engine down

2020-02-03 Thread Benny Zlotnik
you can attach the storage domain to another engine and import it

On Mon, Feb 3, 2020 at 11:45 PM matteo fedeli  wrote:
>
> Hi, is it possible to recover a VM if the engine is damaged? The VM is on a data
> storage domain.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JVSJPYVBTQOQGGKT4HNETW453ZUPDL2R/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSUAEXSX3WP5XGI32NMD2RBOSA2ZWM6C/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
Is the VM running? Can you remove it when the VM is down?
Can you find the reason for illegal status in the logs?
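A quick way to look for it (a sketch; the UUID placeholder is whichever disk or snapshot the UI shows as illegal):

  grep -i '<disk-or-snapshot-uuid>' /var/log/ovirt-engine/engine.log* | grep -iE 'illegal|ERROR'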

On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh 
wrote:

> Hey Guys,
>
> Any help on it ?
>
> Thanks
>
> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh 
> wrote:
>
>>
>>   Hi Team,
>>
>> I am trying to delete a old snapshot of a virtual machine and getting
>> below error :-
>>
>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
>> índ-co-ora-ee-02'
>>
>>
>>
>> [image: image.png]
>>
>> Thanks
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V4IWIYIHGD3FEQ52Z4P5KHDDA424MIWK/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
Back to my question then: can you check what made the snapshot illegal, and
attach the vdsm and engine logs from the occurrence so we can assess the
damage?

Also run `vdsm-tool dump-volume-chains <storage-domain-id>` on the host where the image resides so we can see
what the status of the image is on vdsm.

On Tue, Feb 4, 2020 at 6:46 PM Crazy Ayansh 
wrote:

> Hi,
>
> Yes, the VM is running, but I am scared that if I shut down the VM it will not come back.
> I have also upgraded the engine from 4.3.6.6 to 4.3.8, but the issue still
> persists. I am also unable to take a snapshot of the same VM, as the new
> snapshot fails. Please help.
>
> Thanks
> Shashank
>
>
>
> On Tue, Feb 4, 2020 at 8:54 PM Benny Zlotnik  wrote:
>
>> [snip: earlier messages quoted in full above]
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/34YWQHVGTXSZZR6DKGE477AS7GDRHJ2Y/


[ovirt-users] Re: Recover VM if engine down

2020-02-04 Thread Benny Zlotnik
you need to go to the "import vm" tab on the storage domain and import them

On Tue, Feb 4, 2020 at 7:30 PM matteo fedeli  wrote:
>
> Does it happen automatically when I attach it, or should I execute particular operations?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TVC674C7RF3JZXCOW4SRJL5OQRBE5RZD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L4O4YN5RDOQEGBGD4DEHXFY7R72WGQYB/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-05 Thread Benny Zlotnik
The vdsm logs are not the correct ones.
I assume this is the failure:
2020-02-04 22:04:53,631+05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-9)
[1e9f5492-095c-48ed-9aa0-1a899eedeab7] Command 'MergeVDSCommand(HostName =
iondelsvr72.iontrading.com,
MergeVDSCommandParameters:{hostId='22502af7-f157-40dc-bd5c-6611951be729',
vmId='4957c5d4-ca5e-4db7-8c78-ae8f4b694646',
storagePoolId='c5e0f32e-0131-11ea-a48f-00163e0fe800',
storageDomainId='70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419',
imageGroupId='737b5628-e9fe-42ec-9bce-38db80981107',
imageId='31c5e807-91f1-4f73-8a60-f97a83c6f471',
baseImageId='e4160ffe-2734-4305-8bf9-a7217f3049b6',
topImageId='31c5e807-91f1-4f73-8a60-f97a83c6f471', bandwidth='0'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
MergeVDS, error = Drive image file could not be found, code = 13

please find the vdsm logs containing flow_id
1e9f5492-095c-48ed-9aa0-1a899eedeab7 and provide output for `vdsm-tool
dump-volume-chains 70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419` so we can see the
status of the chain on vdsm
As well as `virsh -r dumpxml ind-co-ora-ee-02` (assuming ind-co-ora-ee-02
is the VM with the issue)

Changing the snapshot status with unlock_entity will likely work only if
the chain is fine on the storage



On Tue, Feb 4, 2020 at 7:40 PM Crazy Ayansh 
wrote:

> please find the attached the logs.
>
> On Tue, Feb 4, 2020 at 10:23 PM Benny Zlotnik  wrote:
>
>> [snip: earlier messages quoted in full above]
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YF3J3K66N5HORUYZP3HZEJWOU64IDNAS/


[ovirt-users] Re: iSCSI Domain Addition Fails

2020-02-23 Thread Benny Zlotnik
anything in the vdsm or engine logs?

On Sun, Feb 23, 2020 at 4:23 PM Robert Webb  wrote:
>
> Also, I did do the “Login” to connect to the target without issue, from what 
> I can tell.
>
>
>
> From: Robert Webb
> Sent: Sunday, February 23, 2020 9:06 AM
> To: users@ovirt.org
> Subject: iSCSI Domain Addition Fails
>
>
>
> So I am messing around with FreeNAS and iSCSI. FreeNAS has a target
> configured, and it is discoverable in oVirt, but when I click "OK" nothing
> happens.
>
>
>
> I have a name for the domain defined and have expanded the advanced features,
> but cannot find anything showing an error.
>
>
>
> oVirt 4.3.8
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMAXFDMNHVGTMJUGU5FK26K6PNBAW3FP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KSLKO6ZP55ZSFCSXRONAPVCEOMZTE24M/


[ovirt-users] Re: oVirt behavior with thin provision/deduplicated block storage

2020-02-24 Thread Benny Zlotnik
We use the stats API in the engine, currently only to check whether the
backend is accessible. We have plans to use it for monitoring and
validations, but that is not implemented yet.

On Mon, Feb 24, 2020 at 3:35 PM Nir Soffer  wrote:
>
> On Mon, Feb 24, 2020 at 3:03 PM Gorka Eguileor  wrote:
> >
> > On 22/02, Nir Soffer wrote:
> > > On Sat, Feb 22, 2020, 13:02 Alan G  wrote:
> > > >
> > > > I'm not really concerned about the reporting aspect, I can look in the 
> > > > storage vendor UI to see that. My concern is: will oVirt stop 
> > > > provisioning storage in the domain because it *thinks* the domain is 
> > > > full. De-dup is currently running at about 2.5:1 so I'm concerned that 
> > > > oVirt will think the domain is full way before it actually is.
> > > >
> > > > Not clear if this is handled natively in oVirt or by the underlying lvs?
> > >
> > > Because oVirt does not know about deduplication or actual allocation
> > > on the storage side,
> > > it will let you allocate up the size of the LUNs that you added to the
> > > storage domain, minus
> > > the size oVirt uses for its own metadata.
> > >
> > > oVirt uses about 5G for its own metadata on the first LUN in a storage
> > > domain. The rest of
> > > the space can be used by user disks. Disks are LVM logical volumes
> > > created in the VG created
> > > from the LUN.
> > >
> > > If you create a storage domain with 4T LUN, you will be able to
> > > allocate about 4091G on this
> > > storage domain. If you use preallocated disks, oVirt will stop when
> > > you allocated all the space
> > > in the VG. Actually it will stop earlier based on the minimal amount
> > > of free space configured for
> > > the storage domain when creating the storage domain.
> > >
> > > If you use thin disks, oVirt will allocate only 1G per disk (by
> > > default), so you can allocate
> > > more storage than you actually have, but when VMs will write to the
> > > disk, oVirt will extend
> > > the disks. Once you use all the available space in this VG, you will
> > > not be able to allocate
> > > more without extending the storage domain with new LUN, or resizing
> > > the  LUN on storage.
> > >
> > > If you use Managed Block Storage (cinderlib) every disk is a LUN with
> > > the exact size you
> > > ask when you create the disk. The actual allocation of this LUN
> > > depends on your storage.
> > >
> > > Nir
> > >
> >
> > Hi,
> >
> > I don't know anything about the oVirt's implementation, so I'm just
> > going to provide some information from cinderlib's point of view.
> >
> > Cinderlib was developed as a dumb library to abstract access to storage
> > backends, so all the "smart" functionality is pushed to the user of the
> > library, in this case oVirt.
> >
> > In practice this means that cinderlib will NOT limit the number of LUNs
> > or over-provisioning done in the backend.
> >
> > Cinderlib doesn't care if we are over-provisioning because we have dedup
> > and decompression or because we are using thin volumes where we don't
> > consume all the allocated space, it doesn't even care if we cannot do
> > over-provisioning because we are using thick volumes.  If it gets a
> > request to create a volume, it will try to do so.
> >
> > From oVirt's perspective this is dangerous if not controlled, because we
> > could end up consuming all free space in the backend and then running
> > VMs will crash (I think) when they could no longer write to disks.
> >
> > oVirt can query the stats of the backend [1] to see how much free space
> > is available (free_capacity_gb) at any given time in order to provide
> > over-provisioning limits to its users.  I don't know if oVirt is already
> > doing that or something similar.
> >
> > If is important to know that stats gathering is an expensive operation
> > for most drivers, and that's why we can request cached stats (cache is
> > lost as the process exits) to help users not overuse it.  It probably
> > shouldn't be gathered more than once a minute.
> >
> > I hope this helps.  I'll be happy to answer any cinderlib questions. :-)
>
> Thanks Gorka, good to know we already have API to get backend
> allocation info. Hopefully we will use this in future version.
>
> Nir
>
> >
> > Cheers,
> > Gorka.
> >
> > [1]: https://docs.openstack.org/cinderlib/latest/topics/backends.html#stats
> >
> > > >  On Fri, 21 Feb 2020 21:35:06 + Nir Soffer  
> > > > wrote 
> > > >
> > > >
> > > >
> > > > On Fri, Feb 21, 2020, 17:14 Alan G  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I have an oVirt cluster with a storage domain hosted on a FC storage 
> > > > array that utilises block de-duplication technology. oVirt reports the 
> > > > capacity of the domain as though the de-duplication factor was 1:1, 
> > > > which of course is not the case. So what I would like to understand is 
> > > > the likely behavior of oVirt when the used space approaches the 
> > > > reported capacity. Particularly around the critical action space 
> > > > blocker.
> > > >
> > > >

[ovirt-users] Re: does SPM still exist?

2020-03-24 Thread Benny Zlotnik
It hasn't disappeared. There has been work done to move operations
that used to run only on the SPM so that they run on regular hosts as well
(e.g. copy/move disk).
Currently the main operations performed by the SPM are
create/delete/extend volume, and more[1].


[1] 
https://github.com/oVirt/ovirt-engine/tree/master/backend/manager/modules/vdsbroker/src/main/java/org/ovirt/engine/core/vdsbroker/irsbroker
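The Admin Portal shows which host currently holds the SPM role in the Hosts grid; from the command line, something like the following should work from any connected host (a sketch; the pool UUID is the directory name under /rhev/data-center/, and the verb name is from memory, so verify it against vdsm-client's help):

  vdsm-client StoragePool getSpmStatus storagepoolID=<pool-uuid>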






On Tue, Mar 24, 2020 at 11:14 AM yam yam  wrote:
>
> Hello,
>
> I heard some say the SPM disappeared as of 3.6.
> Nevertheless, the SPM still exists in the oVirt Admin Portal and even in RHV's manual.
> So I am wondering whether the SPM still exists now.
>
> And how could I get more detailed information about oVirt internals?
> Is reviewing the code the best way?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNZ4KGZTWHFSUNDDVVPBMYK3U7Y3QZPF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGB6C4OLF4SH3PJCR5F4TEAHN4LGHSPL/


[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
Anything in the logs (engine, vdsm)?
If there's nothing on the storage, removing the entry from the database should
be safe, but it's best to check why it failed.

On Mon, Apr 20, 2020 at 5:39 PM Strahil Nikolov  wrote:
>
> Hello All,
>
> did anyone observe the following behaviour:
>
> 1. Create a new disk from the VM -> disks UI tab
> 2. Disk creation fails, but the disk stays in a locked state
> 3. Gluster storage has no directory with that uuid
> 4. /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh doesn't find 
> anything:
> [root@engine ~]# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t 
> all
>
> Locked VMs
>
>
>
> Locked templates
>
>
>
> Locked disks
>
>
>
> Locked snapshots
>
>
>
> Illegal images
>
>
> Should I just delete the entry from the DB, or do I have another option?
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6E4RJM7I3BT33CU3CAB74C2Q4QNBS5BW/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/THYLVMX65VAZ2YTA5GL2SR2LKHF2KRJC/


[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
> 1. The engine didn't clean it up itself - after all, no matter the reason,
> the operation failed?
I can't really answer without looking at the logs. The engine should clean up
in case of a failure; there can be numerous reasons for cleanup to
fail (connectivity issues, a bug, etc.).
> 2. Why did the query fail to see the disk, when I have managed to unlock it?
Could be a bug, but it would need some way to reproduce.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TV5LJU6URKS2D5FZ5BOFVYV2EAJRBJGN/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Live merge (snapshot removal) runs on the host where the VM is
running; you can look for the job id
(f694590a-1577-4dce-bf0c-3a8d74adf341) on the relevant host.
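For example, on the host running the VM (a sketch):

  grep f694590a-1577-4dce-bf0c-3a8d74adf341 /var/log/vdsm/vdsm.log*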

On Wed, May 27, 2020 at 9:02 AM David Sekne  wrote:
>
> Hello,
>
> I'm running oVirt version 4.3.9.4-1.el7.
>
> After a failed live storage migration a VM got stuck with a snapshot. Checking
> the engine logs I can see that the snapshot removal task is waiting for Merge
> to complete and vice versa.
>
> 2020-05-26 18:34:04,826+02 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '60ce36c1-bf74-40a9-9fb0-7fcf7eb95f40') waiting on child command id: 
> 'f7d1de7b-9e87-47ba-9ba0-ee04301ba3b1' type:'Merge' to complete
> 2020-05-26 18:34:04,827+02 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Waiting on merge command to complete 
> (jobId = f694590a-1577-4dce-bf0c-3a8d74adf341)
> 2020-05-26 18:34:04,845+02 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Command 'RemoveSnapshot' (id: 
> '47c9a847-5b4b-4256-9264-a760acde8275') waiting on child command id: 
> '60ce36c1-bf74-40a9-9fb0-7fcf7eb95f40' type:'RemoveSnapshotSingleDiskLive' to 
> complete
> 2020-05-26 18:34:14,277+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmJobsMonitoring] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] VM Job 
> [f694590a-1577-4dce-bf0c-3a8d74adf341]: In progress (no change)
>
> I cannot see any running tasks on the SPM (vdsm-client Host getAllTasksInfo).
> I also cannot find the task ID in any of the other node's logs.
>
> I already tried restarting the Engine (didn't help).
>
> To start I'm puzzled as to where this task is queueing?
>
> Any Ideas on how I could resolve this?
>
> Thank you.
> Regards,
> David
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJBI3SMVXTPSGGJ66P55MU2ERN3HBCTH/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZILERZCGSPOGPOSPM3GHVURC5CVVBVZU/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
You can't see it because it is not a task; tasks only run on the SPM. It
is a VM job and the data about it is stored in the VM's XML; it's also
stored in the vm_jobs table.
You can see the status of the job in libvirt with `virsh blockjob
<vm-name> sda --info` (if it's still running).
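
For example, something along these lines (a sketch; <vm-name> is a
placeholder, and the psql call assumes you can reach the engine database,
whose PostgreSQL instance and DB name may differ on your setup):

  # on the host running the VM
  virsh -r blockjob <vm-name> sda --info

  # on the engine machine
  psql engine -c "SELECT * FROM vm_jobs;"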




On Wed, May 27, 2020 at 2:03 PM David Sekne  wrote:
>
> Hello,
>
> Thank you for the reply.
>
> Unfortunately I can't see the task on any of the hosts:
>
> vdsm-client Task getInfo taskID=f694590a-1577-4dce-bf0c-3a8d74adf341
> vdsm-client: Command Task.getInfo with args {'taskID': 
> 'f694590a-1577-4dce-bf0c-3a8d74adf341'} failed:
> (code=401, message=Task id unknown: 
> (u'f694590a-1577-4dce-bf0c-3a8d74adf341',))
>
> I can see it starting in the VDSM log on the host running the VM:
>
> /var/log/vdsm/vdsm.log.2:2020-05-26 12:15:09,349+0200 INFO  (jsonrpc/6) 
> [virt.vm] (vmId='e113ff18-5687-4e03-8a27-b12c82ad6d6b') Starting merge with 
> jobUUID=u'f694590a-1577-4dce-bf0c-3a8d74adf341', original 
> chain=a78c7505-a949-43f3-b3d0-9d17bdb41af5 < 
> aabf3788-8e47-4f8b-84ad-a7eb311659fa (top), disk='sda', base='sda[1]', 
> top=None, bandwidth=0, flags=12 (vm:5945)
>
> Also, running vdsm-client Host getAllTasks I don't see any running tasks (on 
> any host).
>
> Am I missing something?
>
> Regards,
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBTD3HLXPK7F7MBJCQEQV6E2KA3H7FZK/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4HOFIS26PTTT56HNOUCG4MTOFFFAXSK/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Can you share the VM's XML?
It can be obtained with `virsh -r dumpxml <vm-name>`.
Is the VM overloaded? I suspect it has trouble converging.

taskcleaner only cleans up the database; I don't think it will help here
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LCPJ2C2MW76MKVFBC4QAMRPSRRQQDC3U/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Sorry, by overloaded I meant in terms of I/O. Because this is an
active-layer merge, the active layer
(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
(a78c7505-a949-43f3-b3d0-9d17bdb41af5) before the VM switches to use
the base as its active layer. So if additional data is constantly being
written to the current active layer, vdsm may have trouble finishing
the synchronization.
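
If it helps, you can watch on the host whether the job converges (a sketch;
<vm-name> is a placeholder) - the reported progress should keep increasing:

  watch -n 5 'virsh -r blockjob <vm-name> sda --info'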


On Wed, May 27, 2020 at 4:55 PM David Sekne  wrote:
>
> Hello,
>
> Yes, no problem. The XML is attached (I omitted the hostname and IP).
>
> The server is quite big (8 CPU / 32 GB RAM / 1 TB disk) yet not overloaded. We 
> have multiple servers with the same specs with no issues.
>
> Regards,
>
> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik  wrote:
>>
>> Can you share the VM's xml?
>> Can be obtained with `virsh -r dumpxml `
>> Is the VM overloaded? I suspect it has trouble converging
>>
>> taskcleaner only cleans up the database, I don't think it will help here
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX4QZDIKXH7ETWPDNI3SKZ535WHBXE2V/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-06-01 Thread Benny Zlotnik
Sorry for the late reply, but you may have hit this bug [1]; I forgot about it.
The bug happens when you live-migrate a VM in post-copy mode: vdsm
stops monitoring the VM's jobs.
The root cause is an issue in libvirt, so it depends on which libvirt
version you have.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1774230
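
To check which libvirt/qemu builds a host is running, something like this
should do (a sketch; exact package names vary by distribution):

  rpm -qa | egrep 'libvirt-daemon|qemu-kvm' | sort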

On Fri, May 29, 2020 at 3:54 PM David Sekne  wrote:
>
> Hello,
>
> I tried the live migration as well and it didn't help (it failed).
>
> The VM disks were in an illegal state so I ended up restoring the VM from 
> backup (it was the least complex solution for my case).
>
> Thank you both for the help.
>
> Regards,
>
> On Thu, May 28, 2020 at 5:01 PM Strahil Nikolov  wrote:
>>
>> I used to have a similar issue and when I live-migrated (from one host to 
>> another) it automatically completed.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 27 May 2020 at 17:39:36 GMT+03:00, Benny Zlotnik  
>> wrote:
>> >Sorry, by overloaded I meant in terms of I/O, because this is an
>> >active layer merge, the active layer
>> >(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
>> >(a78c7505-a949-43f3-b3d0-9d17bdb41af5), before the VM switches to use
>> >it as the active layer. So if there is constantly additional data
>> >written to the current active layer, vdsm may have trouble finishing
>> >the synchronization
>> >
>> >
>> >On Wed, May 27, 2020 at 4:55 PM David Sekne 
>> >wrote:
>> >>
>> >> Hello,
>> >>
>> >> Yes, no problem. XML is attached (I ommited the hostname and IP).
>> >>
>> >> Server is quite big (8 CPU / 32 Gb RAM / 1 Tb disk) yet not
>> >overloaded. We have multiple servers with the same specs with no
>> >issues.
>> >>
>> >> Regards,
>> >>
>> >> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik 
>> >wrote:
>> >>>
>> >>> Can you share the VM's xml?
>> >>> Can be obtained with `virsh -r dumpxml `
>> >>> Is the VM overloaded? I suspect it has trouble converging
>> >>>
>> >>> taskcleaner only cleans up the database, I don't think it will help
>> >here
>> >>>
>> >___
>> >Users mailing list -- users@ovirt.org
>> >To unsubscribe send an email to users-le...@ovirt.org
>> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >oVirt Code of Conduct:
>> >https://www.ovirt.org/community/about/community-guidelines/
>> >List Archives:
>> >https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX4QZDIKXH7ETWPDNI3SKZ535WHBXE2V/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UQWZXFW622OIZLB27AHULO52CWYTVL2S/


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-04 Thread Benny Zlotnik
I've successfully used rocky with 4.3 in the past; the main caveat
with 4.3 currently is that cinderlib has to be pinned to 0.9.0 (pip
install cinderlib==0.9.0).
Let me know if you have any issues.

Hopefully during 4.4 we will have repositories with the RPMs and
installation will be much easier.
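
Roughly, the 4.3 setup then looks like this (a sketch based on the feature
page steps with the cinderlib pin applied; repo and package names may differ
on your systems):

  yum install centos-release-openstack-rocky   # engine and all hosts
  yum install openstack-cinder python-pip      # engine
  pip install cinderlib==0.9.0                 # engine
  yum install python2-os-brick                 # all hosts
  yum install ceph-common                      # engine and all hosts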


On Thu, Jun 4, 2020 at 10:00 PM Mathias Schwenke
 wrote:
>
> At 
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>  the cinderlib integration into oVirt is described:
> Installation:
> - install centos-release-openstack-pike on engine and all hosts
> - install openstack-cinder and python-pip on engine
> - pip install cinderlib on engine
> - install python2-os-brick on all hosts
> - install ceph-common on engine and on all hosts
>
> Which software versions do you use on CentOS 7 with oVirt 4.3.10?
> The package centos-release-openstack-pike, as described at the 
> above-mentioned Managed Block Storage feature page, doesn't exist anymore in 
> the CentOS repositories, so I have to switch to 
> centos-release-openstack-queens or newer (rocky, stein, train). So I get (for 
> using with ceph luminous 12):
> - openstack-cinder 12.0.10
> - cinderlib 1.0.1
> - ceph-common 12.2.11
> - python2-os-brick 2.3.9
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5BRKSYAHJBLI65G6JEDZIWSQ72OCF3S/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FELJ2X2N74Q3SM2ZC3MV4ERWZWUM5ZUO/


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-07 Thread Benny Zlotnik
Yes, it looks like a configuration issue; you can use plain `rbd` to
check connectivity.
Regarding starting VMs and live migration, are there bug reports for these?
There is an issue we're aware of with live migration [1]; it can be
worked around by blacklisting rbd devices in multipath.conf.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1755801
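
Something like this can be used for the connectivity check and the workaround
(a sketch; the pool name and cephx user are placeholders, and the multipath
stanza is the usual form of the workaround - adjust it to your existing
multipath.conf):

  # connectivity check with plain rbd, using the same user/pool as the driver
  rbd --id <cinder-user> -p <pool> ls

  # /etc/multipath.conf on the hosts - blacklist rbd devices
  blacklist {
      devnode "^rbd[0-9]*"
  }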


On Thu, Jun 4, 2020 at 11:49 PM Mathias Schwenke
 wrote:
>
> Thanks for your reply.
> Yes, I have some issues. In some cases starting or migrating a virtual 
> machine failed.
>
> At the moment it seems that I have a misconfiguration of my ceph connection:
> 2020-06-04 22:44:07,685+02 ERROR 
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedThreadFactory-engine-Thread-2771) [6e1b74c4] cinderlib execution 
> failed: Traceback (most recent call last):
>   File "./cinderlib-client.py", line 179, in main
> args.command(args)
>   File "./cinderlib-client.py", line 232, in connect_volume
> backend = load_backend(args)
>   File "./cinderlib-client.py", line 210, in load_backend
> return cl.Backend(**json.loads(args.driver))
>   File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 88, in 
> __init__
> self.driver.check_for_setup_error()
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 295, in check_for_setup_error
> with RADOSClient(self):
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 177, in __init__
> self.cluster, self.ioctx = driver._connect_to_rados(pool)
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 353, in _connect_to_rados
> return _do_conn(pool, remote, timeout)
>   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 818, in 
> _wrapper
> return r.call(f, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
> raise attempt.get()
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
> six.reraise(self.value[0], self.value[1], self.value[2])
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
> attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 351, in _do_conn
> raise exception.VolumeBackendAPIException(data=msg)
> VolumeBackendAPIException: Bad or unexpected response from the storage volume 
> backend API: Error connecting to ceph cluster.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I4BMALG7MPMPS3JJU23OCQUMOCSO2D27/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5YZPGW7IAUZMTNWY5FP5KOEWAGVBPVFE/


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-08 Thread Benny Zlotnik
Yes, that's because cinderlib uses KRBD, so it has fewer features; I
should add this to the documentation.
I was told cinderlib has plans to add support for rbd-nbd, which would
eventually allow the use of newer features.
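
If you hit the feature mismatch again, the usual approach is to disable the
features krbd cannot handle on the affected images (a sketch; the feature
list is only an example, check "rbd info <pool>/<image>" first):

  rbd feature disable <pool>/<image> object-map fast-diff deep-flatten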

On Mon, Jun 8, 2020 at 9:40 PM Mathias Schwenke
 wrote:
>
> > It looks like a configuration issue, you can use plain `rbd` to check 
> > connectivity.
> Yes, it was a configuration error. I fixed it.
> Also, I had to adapt different rbd feature sets between ovirt nodes and ceph 
> images. Now it seems to work.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/72OOSCUSTZAGYIDTEDIINDO47EBL2GLM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2JHFAZNGY3OM2EIAMISABNOVBRGUDS4H/


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Benny Zlotnik
looks like https://bugzilla.redhat.com/show_bug.cgi?id=1785939

On Mon, Jun 15, 2020 at 2:37 PM Yedidyah Bar David  wrote:
>
> On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
>  wrote:
> >
> > Hi,
> >
> > I tried to send the log to you by email, but it fails. So I have sent them 
> > to Google Drive. Please go to the link below to get them:
> >
> > https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
> > https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
>
> I did get them, but not engine logs. Can you please attach them as well? 
> Thanks.
>
> vdsm.log.61 has:
>
> 2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
> (vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
> 1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in merge
> bandwidth, flags)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, 
> in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
> line 94, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in 
> blockCommit
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
> dom=self)
> libvirt.libvirtError: internal error: qemu block name
> 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
> "filename": 
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
> "driver": "qcow2", "file": {"driver": "file", "filename":
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
> doesn't match expected
> '/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'
>
> Adding Eyal. Eyal, can you please have a look? Thanks.
>
> >
> > Best regards,
> >
> > Minnie Du--Presales & Business Development
> >
> > Mob  : +86-15244932162
> > Tel: +86-28-85530156
> > Skype :minnie...@vinchin.com
> > Email: minnie...@vinchin.com
> > Website: www.vinchin.com
> >
> > F5, Building 8, National Information Security Industry Park, No.333 YunHua 
> > Road, Hi-Tech Zone, Chengdu, China
> >
> >
> > From: Yedidyah Bar David
> > Date: 2020-06-15 15:42
> > To: minnie.du
> > CC: users
> > Subject: Re: [ovirt-users] Problem with oVirt 4.4
> > On Mon, Jun 15, 2020 at 10:39 AM  wrote:
> > >
> > > We have met a problem when testing oVirt 4.4.
> > >
> > > Our VM is on NFS storage. When testing the snapshot function of oVirt 
> > > 4.4, we created snapshot 1 and then snapshot 2, but after clicking the 
> > > delete button of snapshot 1, snapshot 1 failed to be deleted and the 
> > > state of the corresponding disk became illegal. Removing the snapshot in this 
> > > state requires a lot of risky work in the background, leading to the 
> > > inability to free up snapshot space. Long-term backups will cause the 
> > > target VM to create a large number of unrecoverable snapshots, thus 
> > > taking up a large amount of production storage. So we need your help.
> >
> > Can you please share relevant parts of engine and vdsm logs? Perhaps
> > open a bug and attach all of them, just in case.
> >
> > Thanks!
> > --
> > Didi
> >
> >
>
>
>
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4SBKJTS4OSWVZB2UYEZEOM7TV2AWPXB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HYVFRUWNYE2NFRZAYSIL2WQN72TYROT3/


Re: [ovirt-users] Issue migrating hard drive to new vm store

2017-11-14 Thread Benny Zlotnik
Can you please provide full vdsm logs (only the engine log is attached) and
the versions of the engine, vdsm, gluster?
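
For reference, the versions can be collected with something like this
(a sketch; assumes a standard rpm-based install):

  rpm -q ovirt-engine        # on the engine machine
  rpm -q vdsm glusterfs      # on each host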

On Tue, Nov 14, 2017 at 6:16 PM, Bryan Sockel  wrote:

> Having an issue moving a hard disk from one VM data store to a newly
> created gluster data store.  I can shut down the machine and copy the hard
> drive, detach the old hard drive and attach the new hard drive, but I would
> prefer to keep the VM online when moving the disk.
>
> I have attached a portion of the vdsm.log file.
>
>
>
> Thanks
> Bryan
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Could not retrieve LUNs, please check your storage.

2017-11-15 Thread Benny Zlotnik
Hi,

This looks like a bug. Can you please file a report with the steps and full
logs on https://bugzilla.redhat.com?
From looking at the logs, it looks like it's related to the user field being
empty.

On Wed, Nov 15, 2017 at 1:40 PM,  wrote:

> Hi,
>
> I'm trying to connect a new oVirt Engine Version: 4.1.2.2-1.el7.centos to
> a Dell MD3200i SAN.
> I can discover the SAN from the new storage domain window but when I
> log in, instead of seeing the + symbol I get the "Could not retrieve LUNs,
> please check your storage" error message.
> I tried with both a raw LUN and with a partitioned LUN but still no luck.
> Any idea what could cause the problem?
>
> Here some commands and logs
>
> [root@ov1 vdsm]# multipath -ll
> 36d4ae52000662da46cbd59f95a20 dm-3 DELL,MD32xxi
> size=20G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1
> rdac' wp=rw
> `-+- policy='round-robin 0' prio=11 status=active
>   |- 66:0:0:0 sdb 8:16 active ready running
>   `- 67:0:0:0 sdc 8:32 active ready running
>
> [root@ov1 vdsm]# lsblk
> NAME                              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sda                                 8:0    0 136.1G  0 disk
> ├─sda1                              8:1    0     1G  0 part  /boot
> └─sda2                              8:2    0 135.1G  0 part
>   ├─rhel_ov1-root                 253:0    0    50G  0 lvm   /
>   ├─rhel_ov1-swap                 253:1    0     4G  0 lvm   [SWAP]
>   └─rhel_ov1-home                 253:2    0  81.1G  0 lvm   /home
> sdb                                 8:16   0    20G  0 disk
> └─36d4ae52000662da46cbd59f95a20   253:3    0    20G  0 mpath
> sdc                                 8:32   0    20G  0 disk
> └─36d4ae52000662da46cbd59f95a20   253:3    0    20G  0 mpath
>
>
> [root@ov1 vdsm]# dmsetup ls
> rhel_ov1-home(253:2)
> rhel_ov1-swap(253:1)
> rhel_ov1-root(253:0)
> 36d4ae52000662da46cbd59f95a20   (253:3)
>
> [root@ov1 vdsm]# dmsetup table
> rhel_ov1-home: 0 170123264 linear 8:2 8390656
> rhel_ov1-swap: 0 8388608 linear 8:2 2048
> rhel_ov1-root: 0 104857600 linear 8:2 178513920
> 36d4ae52000662da46cbd59f95a20: 0 41943040 multipath 3
> queue_if_no_path pg_init_retries 50 1 rdac 1 1 round-robin 0 2 1 8:16 1
> 8:32 1
>
>
>
> engine.log
>
> 2017-11-15 11:22:32,243Z INFO  [org.ovirt.engine.core.vdsbrok
> er.vdsbroker.DiscoverSendTargetsVDSCommand] (default task-70)
> [e7e6c1fb-27f3-46a5-9863-e454bdcd1898] START,
> DiscoverSendTargetsVDSCommand(HostName = ov1.foo.bar.org,
> DiscoverSendTargetsVDSCommandParameters:{runAsync='true',
> hostId='f1e4fdad-1cf1-473a-97cb-79e2641c2c86',
> connection='StorageServerConnections:{id='null', connection='10.1.8.200',
> iqn='null', vfsType='null', mountOptions='null', nfsVersion='null',
> nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}'}),
> log id: 19df6841
> 2017-11-15 11:22:33,436Z INFO  [org.ovirt.engine.core.vdsbrok
> er.vdsbroker.DiscoverSendTargetsVDSCommand] (default task-70)
> [e7e6c1fb-27f3-46a5-9863-e454bdcd1898] FINISH,
> DiscoverSendTargetsVDSCommand, return: [StorageServerConnections:{id='null',
> connection='10.1.8.200', iqn='iqn.1984-05.com.dell:powe
> rvault.md3200i.6d4ae5200063fd454f0c6e44', vfsType='null',
> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
> iface='null', netIfaceName='null'}, StorageServerConnections:{id='null',
> connection='10.1.8.201', iqn='iqn.1984-05.com.dell:powe
> rvault.md3200i.6d4ae5200063fd454f0c6e44', vfsType='null',
> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
> iface='null', netIfaceName='null'}, StorageServerConnections:{id='null',
> connection='10.1.8.202', iqn='iqn.1984-05.com.dell:powe
> rvault.md3200i.6d4ae5200063fd454f0c6e44', vfsType='null',
> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
> iface='null', netIfaceName='null'}, StorageServerConnections:{id='null',
> connection='10.1.8.203', iqn='iqn.1984-05.com.dell:powe
> rvault.md3200i.6d4ae5200063fd454f0c6e44', vfsType='null',
> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
> iface='null', netIfaceName='null'}], log id: 19df6841
>
> 2017-11-15 11:22:46,654Z INFO  [org.ovirt.engine.core.bll.sto
> rage.connection.ConnectStorageToVdsCommand] (default task-78)
> [bc8ed6d7-264a-43bf-a076-b15f05ef34b8] Running command:
> ConnectStorageToVdsCommand internal: false. Entities affected :  ID:
> aaa0----123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
> 2017-11-15 11:22:46,657Z INFO  [org.ovirt.engine.core.vdsbrok
> er.vdsbroker.ConnectStorageServerVDSCommand] (default task-78)
> [bc8ed6d7-264a-43bf-a076-b15f05ef34b8] START,
> ConnectStorageServerVDSCommand(HostName = ov1.foo.bar.org,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='f1e4fdad-1cf1-473a-97cb-79e2641c2c86',
> storagePoolId='----

Re: [ovirt-users] Snapshot or not?

2017-11-16 Thread Benny Zlotnik
Hi Tibor,

Can you please explain this part: "After this I just wondered, I will make
a new VM with same disk and I will copy the images (really just rename)
from original to recreated."
What were the exact steps you took?

Thanks

On Thu, Nov 16, 2017 at 4:19 PM, Demeter Tibor  wrote:

> Hi,
>
> Thank you for your reply.
>
> So, I have a disk with a snapshot. Or - really - I just think it is a
> snapshot. It was originally attached to a VM (with two other disks that do
> not have snapshots). I did a detach-attach-storage procedure, but after
> attach, - I don't know why -  ovirt could not import the VM and disks from
> this  (ovirt said it is not possible).  After this I just wondered, I will
> make a new VM with same disk and I will copy the images (really just
> rename) from original to recreated.
>
> It was a partial success because the VM can boot, but on the disk where there
> is a snapshot I can't read the LVM table; it just seems to be corrupt.
> Now this disk is in a very interesting state: I can see the snapshot data
> from the VM as a raw disk.
> I think oVirt doesn't know that it is a snapshotted image and attaches it to the VM as a
> raw disk.
>
> So my real question is: how can I add this disk image back to ovirt properly?
>
> Please help me, it is very important to me. :(
>
> Thanks in advance,
>
> Have a nice day,
>
> Tibor
>
>
> - On Nov. 16, 2017, at 11:55, Ala Hino  wrote:
>
> Hi Tibor,
> I am not sure I completely understand the scenario.
>
> You have a VM with two disks and then you create a snapshot including the
> two disks?
> Before creating the snapshot, did the VM recognize the two disks?
>
> On Mon, Nov 13, 2017 at 10:36 PM, Demeter Tibor 
> wrote:
>
>> Dear Users,
>>
>> I have a disk of a VM that has a snapshot. It is very interesting,
>> because there are two other disks on that VM, but there are no snapshots of
>> them.
>> I found this while I was trying to migrate a storage domain between two
>> datacenters.
>> Because I didn't import that VM from the storage domain, I made
>> another similar VM with exactly the same sized thin-provisioned disks. I
>> renamed and copied my originals over to it.
>>
>> The VM started successfully, but the disk that contains a snapshot was not
>> recognized by the OS. I can see the whole disk as raw (disk id, format in
>> ovirt, filenames of images, etc.). I think ovirt doesn't know that it is a
>> snapshotted image and uses it as raw. Is that possible?
>> I don't see any snapshot in Snapshots. Also I have tried to list the snapshots
>> with qemu-img info and qemu-img snapshot -l, but neither sees any
>> snapshots in the image.
>>
>> Really, I don't know how this is possible.
>>
>> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
>> 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>> image: 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>> file format: qcow2
>> virtual size: 13T (13958643712000 bytes)
>> disk size: 12T
>> cluster_size: 65536
>> backing file: ../8d815282-6957-41c0-bb3e-6c8f4a23a64b/723ad5aa-02f6-
>> 4067-ac75-0ce0a761627f
>> backing file format: raw
>> Format specific information:
>> compat: 0.10
>>
>> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
>> 723ad5aa-02f6-4067-ac75-0ce0a761627f
>> image: 723ad5aa-02f6-4067-ac75-0ce0a761627f
>> file format: raw
>> virtual size: 2.0T (2147483648000 bytes)
>> disk size: 244G
>>
>> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# ll
>> total 13096987560
>> -rw-rw----. 1 36 36 13149448896512 Nov 13 13:42 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>> -rw-rw----. 1 36 36        1048576 Nov 13 19:34 5974fd33-af4c-4e3b-aadb-bece6054eb6b.lease
>> -rw-r--r--. 1 36 36            262 Nov 13 19:54 5974fd33-af4c-4e3b-aadb-bece6054eb6b.meta
>> -rw-rw----. 1 36 36  2147483648000 Jul  8  2016 723ad5aa-02f6-4067-ac75-0ce0a761627f
>> -rw-rw----. 1 36 36        1048576 Jul  7  2016 723ad5aa-02f6-4067-ac75-0ce0a761627f.lease
>> -rw-r--r--. 1 36 36            335 Nov 13 19:52 723ad5aa-02f6-4067-ac75-0ce0a761627f.meta
>>
>> qemu-img snapshot -l 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>>
>> (nothing)
>>
>> Because it is a very big (13 TB) disk I can't migrate it to another
>> image, because I don't have enough free space. So I just would like to use
>> it in ovirt like in the past.
>>
>> I have a very old ovirt (3.5)
>>
>> How can I use this disk?
>>
>> Thanks in advance,
>>
>> Regards,
>>
>> Tibor
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-19 Thread Benny Zlotnik
Hi,

Please attach full engine and vdsm logs
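
On a standard install they are located at (paths assume the default
packaging):

  /var/log/ovirt-engine/engine.log    # on the engine machine
  /var/log/vdsm/vdsm.log              # on each host (older files rotate to vdsm.log.N)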

On Sun, Nov 19, 2017 at 12:26 PM, Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

>
> Hello, oVirt guru`s!
>
> oVirt Engine Version: 4.1.6.2-1.el7.centos
>
> Some time ago the problems started with the oVirt administrative web
> console.
> When I try to open the sub-tab "Template import" for the Export domain on the
> "Storage" tab I get the error in the "Alerts" sub-tab
>
> VDSM command GetVmsInfoVDS failed: Missing OVF file from VM:
> (u'f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6',)
>
> All storage domains on the "Storage" tab are marked as down in the web console.
> The SPM role frantically keeps being transferred from one host to another.
> Screenshot attached.
>
> At the same time all virtual machines keep working without interruption,
> but I can't get a list of VMs stored on the Export domain storage.
>
> Recently this problem appeared and I deleted the Export domain storage.
> I completely removed the Export domain storage from oVirt, formatted it,
> and then attached it to oVirt again.
> The problem repeated again.
>
> Please help to solve this problem.
>
> --
> With best wishes,
> Aleksey.I.Maksimov
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-19 Thread Benny Zlotnik
+ ovirt-users

On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik  wrote:

> Hi,
>
> There are a couple of issues here, can you please open a bug so we can
> track this properly? https://bugzilla.redhat.com/
> and attach all relevant logs
>
> I went over the logs, are you sure the export domain was formatted
> properly? Couldn't find it in the engine.log
> Looking at the logs it seems VMs were found on the export domain
> (id=3a514c90-e574-4282-b1ee-779602e35f24)
>
> 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain]
> vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9',
> u'03c9e965-710d-4fc8-be06-583abbd1d7a9', 
> u'07dab4f6-d677-4faa-9875-97bd6d601f49',
> u'0b94a559-b31a-475d-9599-36e0dbea579a', 
> u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196',
> u'151a4e75-d67a-4603-8f52-abfb46cb74c1', 
> u'177479f5-2ed8-4b6c-9120-ec067d1a1247',
> u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', 
> u'1e72be16-f540-4cfd-b0e9-52b66220a98b',
> u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', 
> u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7',
> u'25fa96d1-6083-4daa-9755-026e632553d9', 
> u'273ffd05-6f93-4e4a-aac9-149360b5f0b4',
> u'28188426-ae8b-4999-8e31-4c04fbba4dac', 
> u'28e9d5f2-4312-4d0b-9af9-ec1287bae643',
> u'2b7093dc-5d16-4204-b211-5b3a1d729872', 
> u'32ecfcbb-2678-4f43-8d59-418e03920693',
> u'3376ef0b-2af5-4a8b-9987-18f28f6bb334', 
> u'34d1150f-7899-44d9-b8cf-1c917822f624',
> u'383bbfc6-6841-4476-b108-a1878ed9ce43', 
> u'388e372f-b0e8-408f-b21b-0a5c4a84c457',
> u'39396196-42eb-4a27-9a57-a3e0dad8a361', 
> u'3fc02ca2-7a03-4d5e-bc21-688f138a914f',
> u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf', 
> u'44e10588-8047-4734-81b3-6a98c229b637',
> u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86', 
> u'47a83986-d3b8-4905-b017-090276e967f5',
> u'49d83471-a312-412e-b791-8ee0badccbb5', 
> u'4b1b9360-a48a-425b-9a2e-19197b167c99',
> u'4d783e2a-2d81-435a-98c4-f7ed862e166b', 
> u'51976b6e-d93f-477e-a22b-0fa84400ff84',
> u'56b77077-707c-4949-9ea9-3aca3ea912ec', 
> u'56dc5c41-6caf-435f-8146-6503ea3eaab9',
> u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d', 
> u'5873f804-b992-4559-aff5-797f97bfebf7',
> u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8', 
> u'590d1adb-52e4-4d29-af44-c9aa5d328186',
> u'5c79f970-6e7b-4996-a2ce-1781c28bff79', 
> u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3',
> u'63749307-4486-4702-ade9-4324f5bfe80c', 
> u'6555ac11-7b20-4074-9d71-f86bc10c01f9',
> u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728', 
> u'672c4e12-628f-4dcd-a57e-b4ff822a19f3',
> u'679c0445-512c-4988-8903-64c0c08b5fab', 
> u'6ae337d0-e6a0-489f-82e6-57a85f63176a',
> u'6d713cb9-993d-4822-a030-ac7591794050', 
> u'72a50ef0-945d-428a-a336-6447c4a70b99',
> u'751dfefc-9e18-4f26-bed6-db412cdb258c', 
> u'7587db59-e840-41bc-96f3-b212b7b837a4',
> u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', 
> u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2',
> u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', 
> u'7a7d814e-4586-40d5-9750-8896b00a6490',
> u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', 
> u'7d781e21-6613-41f4-bcea-8b57417e1211',
> u'7da51499-d7db-49fd-88f6-bcac30e5dd86', 
> u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6',
> u'85169fe8-8198-492f-b988-b8e24822fd01', 
> u'87839926-8b84-482b-adec-5d99573edd9e',
> u'8a7eb414-71fa-4f91-a906-d70f95ccf995', 
> u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b',
> u'8b73e593-8513-4a8e-b051-ce91765b22bd', 
> u'8cbd5615-4206-4e4a-992d-8705b2f2aac2',
> u'92e9d966-c552-4cf9-b84a-21dda96f3f81', 
> u'95209226-a9a5-4ada-8eed-a672d58ba72c',
> u'986ce2a5-9912-4069-bfa9-e28f7a17385d', 
> u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c',
> u'9ff87197-d089-4b2d-8822-b0d6f6e67292', 
> u'a0a0c756-fbe9-4f8e-b6e9-1f2d58f1d957',
> u'a46d5615-8d9f-4944-9334-2fca2b53c27e', 
> u'a6a50244-366b-4b7c-b80f-04d7ce2d8912',
> u'aa6a4de6-cc9e-4d79-a795-98326bbd83db', 
> u'accc0bc3-c501-4f0b-aeeb-6858f7e894fd',
> u'b09e5783-6765-4514-a5a3-86e5e73b729b', 
> u'b1ecfe29-7563-44a9-b814-0faefac5465b',
> u'baa542e1-492a-4b1b-9f54-e9566a4fe315', 
> u'bb91f9f5-98df-45b1-b8ca-9f67a92eef03',
> u'bd11f11e-be3d-4456-917c-f93ba9a19abe', 
&

Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-20 Thread Benny Zlotnik
Yes, you can remove it

On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

> I found an empty directory in the Export domain storage:
>
> # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-
> vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/
> master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6
>
> total 16
> drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .
> drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 ..
>
> I can just remove this directory?
>
> 19.11.2017, 18:51, "Benny Zlotnik" :
>
> + ovirt-users
>
> On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik 
> wrote:
>
> Hi,
>
> There are a couple of issues here, can you please open a bug so we can
> track this properly? https://bugzilla.redhat.com/
> and attach all relevant logs
>
> I went over the logs, are you sure the export domain was formatted
> properly? Couldn't find it in the engine.log
> Looking at the logs it seems VMs were found on the export domain
> (id=3a514c90-e574-4282-b1ee-779602e35f24)
>
> 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain]
> vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9',
> u'03c9e965-710d-4fc8-be06-583abbd1d7a9', 
> u'07dab4f6-d677-4faa-9875-97bd6d601f49',
> u'0b94a559-b31a-475d-9599-36e0dbea579a', 
> u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196',
> u'151a4e75-d67a-4603-8f52-abfb46cb74c1', 
> u'177479f5-2ed8-4b6c-9120-ec067d1a1247',
> u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', 
> u'1e72be16-f540-4cfd-b0e9-52b66220a98b',
> u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', 
> u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7',
> u'25fa96d1-6083-4daa-9755-026e632553d9', 
> u'273ffd05-6f93-4e4a-aac9-149360b5f0b4',
> u'28188426-ae8b-4999-8e31-4c04fbba4dac', 
> u'28e9d5f2-4312-4d0b-9af9-ec1287bae643',
> u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e
> 03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334',
> u'34d1150f-7899-44d9-b8cf-1c917822f624', 
> u'383bbfc6-6841-4476-b108-a1878ed9ce43',
> u'388e372f-b0e8-408f-b21b-0a5c4a84c457', 
> u'39396196-42eb-4a27-9a57-a3e0dad8a361',
> u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', 
> u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf',
> u'44e10588-8047-4734-81b3-6a98c229b637', 
> u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86',
> u'47a83986-d3b8-4905-b017-090276e967f5', 
> u'49d83471-a312-412e-b791-8ee0badccbb5',
> u'4b1b9360-a48a-425b-9a2e-19197b167c99', 
> u'4d783e2a-2d81-435a-98c4-f7ed862e166b',
> u'51976b6e-d93f-477e-a22b-0fa84400ff84', 
> u'56b77077-707c-4949-9ea9-3aca3ea912ec',
> u'56dc5c41-6caf-435f-8146-6503ea3eaab9', 
> u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d',
> u'5873f804-b992-4559-aff5-797f97bfebf7', 
> u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8',
> u'590d1adb-52e4-4d29-af44-c9aa5d328186', 
> u'5c79f970-6e7b-4996-a2ce-1781c28bff79',
> u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', 
> u'63749307-4486-4702-ade9-4324f5bfe80c',
> u'6555ac11-7b20-4074-9d71-f86bc10c01f9', 
> u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728',
> u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', 
> u'679c0445-512c-4988-8903-64c0c08b5fab',
> u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac75
> 91794050', u'72a50ef0-945d-428a-a336-6447c4a70b99',
> u'751dfefc-9e18-4f26-bed6-db412cdb258c', 
> u'7587db59-e840-41bc-96f3-b212b7b837a4',
> u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', 
> u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2',
> u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', 
> u'7a7d814e-4586-40d5-9750-8896b00a6490',
> u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', 
> u'7d781e21-6613-41f4-bcea-8b57417e1211',
> u'7da51499-d7db-49fd-88f6-bcac30e5dd86', 
> u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6',
> u'85169fe8-8198-492f-b988-b8e24822fd01', 
> u'87839926-8b84-482b-adec-5d99573edd9e',
> u'8a7eb414-71fa-4f91-a906-d70f95ccf995', 
> u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b',
> u'8b73e593-8513-4a8e-b051-ce91765b22bd', 
> u'8cbd5615-4206-4e4a-992d-8705b2f2aac2',
> u'92e9d966-c552-4cf9-b84a-21dda96f3f81', 
> u'95209226-a9a5-4ada-8eed-a672d58ba72c',
> u'986ce2a5-9912-4069-bfa9-e28f7a17385d', 
> u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c',
> 

Re: [ovirt-users] Error during delete snapshot

2017-11-21 Thread Benny Zlotnik
Please attach engine and vdsm logs

On Tue, Nov 21, 2017 at 2:11 PM, Arthur Melo  wrote:

> Can someone help me with this error?
>
>
> Failed to delete snapshot '' for VM 'proxy03'.
>
>
>
> Atenciosamente,
> Arthur Melo
> Linux User #302250
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error during delete snapshot

2017-11-21 Thread Benny Zlotnik
Please attach the full engine log (at least from the moment you attempted
the deletion).
Do you have access to the host the VM is running on? The vdsm log is
available at /var/log/vdsm/vdsm.log
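If it is there, grepping around the failed removal should show the relevant
lines (a rough sketch):

  grep -iE 'merge|snapshot' /var/log/vdsm/vdsm.log | tail -n 200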

On Tue, Nov 21, 2017 at 2:17 PM, Arthur Melo  wrote:

> engine.log
> --
> 2017-11-21 10:15:02,536-02 ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler2)
> [70cc2ffa-2414-4a00-9e24-6b6378408a9d] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID:
> 70cc2ffa-2414-4a00-9e24-6b6378408a9d, Job ID: 
> ec197072-7c38-42b6-9aef-99635d4ee135,
> Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to
> delete snapshot '' for VM 'proxy03'.
> 2017-11-21 10:15:02,537-02 ERROR 
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (DefaultQuartzScheduler2) [70cc2ffa-2414-4a00-9e24-6b6378408a9d] Failed
> invoking callback end method 'onFailed' for command
> 'a84519fe-6b23-4084-84a2-b7964cbcde26' with exception 'null', the
> callback is marked for end method retries
> 2017-11-21 10:15:12,551-02 ERROR 
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (DefaultQuartzScheduler5) [70cc2ffa-2414-4a00-9e24-6b6378408a9d] Ending
> command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with
> failure.
> 2017-11-21 10:15:12,555-02 INFO  
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (DefaultQuartzScheduler5) [70cc2ffa-2414-4a00-9e24-6b6378408a9d]
> transaction rolled back
> 2017-11-21 10:15:12,567-02 ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5)
> [70cc2ffa-2414-4a00-9e24-6b6378408a9d] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID:
> 70cc2ffa-2414-4a00-9e24-6b6378408a9d, Job ID: 
> ec197072-7c38-42b6-9aef-99635d4ee135,
> Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to
> delete snapshot '' for VM 'proxy03'.
> 2017-11-21 10:15:12,567-02 ERROR 
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (DefaultQuartzScheduler5) [70cc2ffa-2414-4a00-9e24-6b6378408a9d] Failed
> invoking callback end method 'onFailed' for command
> 'a84519fe-6b23-4084-84a2-b7964cbcde26' with exception 'null', the
> callback is marked for end method retries
> 2017-11-21 10:15:22,582-02 ERROR 
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (DefaultQuartzScheduler10) [70cc2ffa-2414-4a00-9e24-6b6378408a9d] Ending
> command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with
> failure.
> 2017-11-21 10:15:22,585-02 INFO  
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (DefaultQuartzScheduler10) [70cc2ffa-2414-4a00-9e24-6b6378408a9d]
> transaction rolled back
> 2017-11-21 10:15:22,599-02 ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler10)
> [70cc2ffa-2414-4a00-9e24-6b6378408a9d] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID:
> 70cc2ffa-2414-4a00-9e24-6b6378408a9d, Job ID: 
> ec197072-7c38-42b6-9aef-99635d4ee135,
> Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to
> delete snapshot '' for VM 'proxy03'.
> 2017-11-21 10:15:22,600-02 ERROR 
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (DefaultQuartzScheduler10) [70cc2ffa-2414-4a00-9e24-6b6378408a9d] Failed
> invoking callback end method 'onFailed' for command
> 'a84519fe-6b23-4084-84a2-b7964cbcde26' with exception 'null', the
> callback is marked for end method retries
> --
>
> I don't have vdsm log. (I don't know why).
>
> Atenciosamente,
> Arthur Melo
> Linux User #302250
>
>
> 2017-11-21 10:14 GMT-02:00 Benny Zlotnik :
>
>> Please attach engine and vdsm logs
>>
>> On Tue, Nov 21, 2017 at 2:11 PM, Arthur Melo  wrote:
>>
>>> Can someone help me with this error?
>>>
>>>
>>> Failed to delete snapshot '' for VM 'proxy03'.
>>>
>>>
>>>
>>> Atenciosamente,
>>> Arthur Melo
>>> Linux User #302250
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-22 Thread Benny Zlotnik
Hi, glad to hear it helped.

https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
The component is BLL.Storage
and the team is Storage

Thanks

On Wed, Nov 22, 2017 at 3:51 PM, Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

> Hello, Benny.
>
> I deleted the empty directory and the problem disappeared.
> Thank you for your help.
>
> PS:I don't know how to properly open a bug on https://bugzilla.redhat.com/
> Don't know which option to choose (https://bugzilla.redhat.com/
> enter_bug.cgi?classification=oVirt).
> Maybe you can open a bug and attach my logs?
>
> 20.11.2017, 13:08, "Benny Zlotnik" :
>
> Yes, you can remove it
>
> On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <
> aleksey.i.maksi...@yandex.ru> wrote:
>
> I found an empty directory in the Export domain storage:
>
> # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-
> vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/
> master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6
>
> total 16
> drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .
> drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 ..
>
> I can just remove this directory?
>
> 19.11.2017, 18:51, "Benny Zlotnik" :
>
> + ovirt-users
>
> On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik 
> wrote:
>
> Hi,
>
> There are a couple of issues here, can you please open a bug so we can
> track this properly? https://bugzilla.redhat.com/
> and attach all relevant logs
>
> I went over the logs, are you sure the export domain was formatted
> properly? Couldn't find it in the engine.log
> Looking at the logs it seems VMs were found on the export domain
> (id=3a514c90-e574-4282-b1ee-779602e35f24)
>
> 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain]
> vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9',
> u'03c9e965-710d-4fc8-be06-583abbd1d7a9', 
> u'07dab4f6-d677-4faa-9875-97bd6d601f49',
> u'0b94a559-b31a-475d-9599-36e0dbea579a', 
> u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196',
> u'151a4e75-d67a-4603-8f52-abfb46cb74c1', 
> u'177479f5-2ed8-4b6c-9120-ec067d1a1247',
> u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', 
> u'1e72be16-f540-4cfd-b0e9-52b66220a98b',
> u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', 
> u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7',
> u'25fa96d1-6083-4daa-9755-026e632553d9', 
> u'273ffd05-6f93-4e4a-aac9-149360b5f0b4',
> u'28188426-ae8b-4999-8e31-4c04fbba4dac', 
> u'28e9d5f2-4312-4d0b-9af9-ec1287bae643',
> u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e
> 03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334',
> u'34d1150f-7899-44d9-b8cf-1c917822f624', 
> u'383bbfc6-6841-4476-b108-a1878ed9ce43',
> u'388e372f-b0e8-408f-b21b-0a5c4a84c457', 
> u'39396196-42eb-4a27-9a57-a3e0dad8a361',
> u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', 
> u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf',
> u'44e10588-8047-4734-81b3-6a98c229b637', 
> u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86',
> u'47a83986-d3b8-4905-b017-090276e967f5', 
> u'49d83471-a312-412e-b791-8ee0badccbb5',
> u'4b1b9360-a48a-425b-9a2e-19197b167c99', 
> u'4d783e2a-2d81-435a-98c4-f7ed862e166b',
> u'51976b6e-d93f-477e-a22b-0fa84400ff84', 
> u'56b77077-707c-4949-9ea9-3aca3ea912ec',
> u'56dc5c41-6caf-435f-8146-6503ea3eaab9', 
> u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d',
> u'5873f804-b992-4559-aff5-797f97bfebf7', 
> u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8',
> u'590d1adb-52e4-4d29-af44-c9aa5d328186', 
> u'5c79f970-6e7b-4996-a2ce-1781c28bff79',
> u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', 
> u'63749307-4486-4702-ade9-4324f5bfe80c',
> u'6555ac11-7b20-4074-9d71-f86bc10c01f9', 
> u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728',
> u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', 
> u'679c0445-512c-4988-8903-64c0c08b5fab',
> u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac75
> 91794050', u'72a50ef0-945d-428a-a336-6447c4a70b99',
> u'751dfefc-9e18-4f26-bed6-db412cdb258c', 
> u'7587db59-e840-41bc-96f3-b212b7b837a4',
> u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', 
> u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2',
> u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', 
> u'7a7d814e-4586-40d5-9750-8896b00a6490',
> u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', 
> u'7d781e21-66

Re: [ovirt-users] two questions about 4.2 feature

2017-12-22 Thread Benny Zlotnik
Regarding the first question: there is a bug open for this issue [1]

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1513987

On Fri, Dec 22, 2017 at 1:42 PM, Nathanaël Blanchet 
wrote:

> Hi all,
>
> On 4.2, it seems that it is no longer possible to move a disk to
> another storage domain through the VM disk tab (it is still possible from the
> storage disk tab).
>
> Secondly, while the new design is great, is there a possibility to keep
> the old one for any needs?
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Need input please.

2018-01-06 Thread Benny Zlotnik
Can you please provide the log with the error?

On Sat, Jan 6, 2018 at 5:09 PM, carl langlois 
wrote:

> Hi again,
>
> I managed to go a little bit further. I was not able to set one host to
> maintenance because it had running VMs, so I forced it to be marked as
> rebooted and flushed any VMs, and now I can try to reinstall the host. But now
> I am getting an error when the installation tries to enroll the certificate.
>
> Any idea?
>
> Thanks
> Carl
>
>
> On Sat, Jan 6, 2018 at 9:41 AM, carl langlois 
> wrote:
>
>> Hi,
>>
>> Thanks for the quick reply.
>>
>> The version before the update was the latest 4.1.
>> The storage is NFS.
>> The hosts are plain CentOS 7.4; I keep them up to date.
>> If I remember correctly, the initial deploy of the engine VM was full OS.
>>
>> The steps to upgrade were:
>>
>> SSH to the hosted engine VM,
>> yum install the 4.2 release,
>> run engine-setup.
>> This is where I think something went wrong:
>> the engine setup was not able to update some certificates. A quick search
>> on the internet found that moving the ca.pem from
>> /etc/pki/ovirt-engine made the engine-setup work.
>>
>> From that point the hosted engine seems to work fine, but all hosts and
>> data centers are not operational.
>>
>> One of my main concerns is losing all the templates and users' VMs.
>>
>>
>>
>> Engine and vdsm log of one host are here..
>>
>> In the engine log there is a lot of this error..
>>
>> 2018-01-06 08:31:40,459-05 ERROR [org.ovirt.engine.core.vdsbrok
>> er.vdsbroker.GetCapabilitiesVDSCommand] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-9)
>> [] Command 'GetCapabilitiesVDSCommand(HostName = hosted_engine_2,
>> VdsIdAndVdsVDSCommandParametersBase:{hostId='7d2b3f49-0fbb-493b-8f3c-5283566e830d',
>> vds='Host[hosted_engine_2,7d2b3f49-0fbb-493b-8f3c-5283566e830d]'})'
>> execution failed: VDSGenericException: VDSNetworkException: General
>> SSLEngine problem.
>>
>> https://drive.google.com/drive/folders/1tZHLIMV0ctGyeDPcGMlJ
>> a6evk6fkrCz8?usp=sharing
>>
>> Thanks for your support
>>
>> Carl
>>
>>
>>
>> On Sat, Jan 6, 2018 at 8:50 AM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Sat, Jan 6, 2018 at 2:42 PM, carl langlois 
>>> wrote:
>>>
 Hi again all,

 I really need your input on recovering my failed upgrade to 4.2.



>>> First two things to do to get appropriate help would be, in my opinion:
>>>
>>> 1) provide a detailed description of your environment
>>> exact version before the update; kind of storage (Gluster, NFS, iSCSI,
>>> FC), kind of hosts (plain OS and which version or node-ng and which version
>>> before and after), kind of deploy executed for hosted engine VM at initial
>>> install time (appliance, full OS).
>>> Describe steps used for upgrading.
>>>
>>> 2) upload to a shared file services all the possible logs:
>>> engine logs before and after the upgrade and logs of engine-setup run
>>> during update
>>> vdsm logs before and after the upgrade
>>>
>>>
>>> Gianluca
>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] non operational node

2018-01-12 Thread Benny Zlotnik
Can you please attach engine and vdsm logs?

On 12 Jan 2018 11:43, "Tomeu Sastre Cabanellas"  wrote:

> hi there,
>
> I'm testing oVirt 4.2 because I want to migrate all our VMs from
> XenServer. I have set up an engine and a node; when connecting to the node I
> get "non-operational" and I cannot set it to ON.
>
> I'm an experienced engineer, but I'm new to oVirt; any clue where I
> should start checking?
>
> thanks a lot.
>
> [image: Inline images 1]
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM is down with error: Bad volume specification

2018-01-23 Thread Benny Zlotnik
Hi,

Can you please attach engine and vdsm logs?

On Tue, Jan 23, 2018 at 1:55 PM, Chris Boot  wrote:

> Hi all,
>
> I'm running oVirt 4.2.0 and have been using oVirtBackup with it. So far
> it has been working fine, until this morning. One of my VMs seems to
> have had a snapshot created that I can't delete.
>
> I noticed when the VM failed to migrate to my other hosts, so I just
> shut it down to allow the host to go into maintenance. Now I can't start
> the VM with the snapshot nor can I delete the snapshot.
>
> Please let me know what further information you need to help me diagnose
> the issue and recover the VM.
>
> Best regards,
> Chris
>
>  Forwarded Message 
> Subject: alertMessage (ovirt.boo.tc), [VM morse is down with error. Exit
> message: Bad volume specification {'address': {'bus': '0', 'controller':
> '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
> 'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '12386304', 'cache': 'none', 'imageID':
> 'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type':
> 'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize':
> '0', 'format': 'cow', 'poolID': '0001-0001-0001-0001-0311',
> 'device': 'disk', 'path':
> '/rhev/data-center/0001-0001-0001-0001-0311/
> 23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-
> 52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file',
> 'specParams': {}, 'discard': True}.]
> Date: Tue, 23 Jan 2018 11:32:21 + (GMT)
> From: eng...@ovirt.boo.tc
> To: bo...@bootc.net
>
> Time:2018-01-23 11:30:39.677
> Message:VM morse is down with error. Exit message: Bad volume
> specification {'address': {'bus': '0', 'controller': '0', 'type':
> 'drive', 'target': '0', 'unit': '0'}, 'serial':
> 'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '12386304', 'cache': 'none', 'imageID':
> 'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type':
> 'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize':
> '0', 'format': 'cow', 'poolID': '0001-0001-0001-0001-0311',
> 'device': 'disk', 'path':
> '/rhev/data-center/0001-0001-0001-0001-0311/
> 23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-
> 52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file',
> 'specParams': {}, 'discard': True}.
> Severity:ERROR
> VM Name: morse
> Host Name: ovirt2.boo.tc
> Template Name: Blank
>
> --
> Chris Boot
> bo...@boo.tc
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multi OVF_STORE entries

2018-01-24 Thread Benny Zlotnik
Hi,

By default there are two OVF_STORE disks per domain. It can be changed with the
StorageDomainOvfStoreCount config value.
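
For example (a sketch, run on the engine machine; the engine service needs a
restart for the change to take effect):

  engine-config -g StorageDomainOvfStoreCount
  engine-config -s StorageDomainOvfStoreCount=4
  systemctl restart ovirt-engine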

On Wed, Jan 24, 2018 at 1:58 PM, Stefano Danzi  wrote:

> Hello,
>
> I'm checking Storage -> Disks in my oVirt test site. I can find:
>
> - 4 disks for my 4 VM
> - 1 disk for HostedEngine
> - 4 OVF_STORE entries, sharables and without size.
>
> I can't manage, move or remove OVF_STORE entries.
> I think that they are something ported during some upgrade...
>
> Does anyone have any ideas?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 vdsclient

2018-02-06 Thread Benny Zlotnik
It was replaced by vdsm-client[1]

[1] - https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client/
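
The rough equivalents of the old vdsClient calls would be (a sketch; verify
the exact verbs with "vdsm-client Host --help" / "vdsm-client Task --help"):

  vdsm-client Host getAllTasksStatuses
  vdsm-client Task stop taskID=<task-uuid>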

On Tue, Feb 6, 2018 at 10:17 AM, Alex K  wrote:

> Hi all,
>
> I have a stuck snapshot removal from a VM which is blocking the VM to
> start.
> In ovirt 4.1 I was able to cancel the stuck task by running within SPM
> host:
>
> vdsClient -s 0 getAllTasksStatuses
> vdsClient -s 0 stopTask 
>
> Is there a similar way to do at ovirt 4.2?
>
> Thanx,
> Alex
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sparsify in 4.2 - where it moved ?

2018-02-15 Thread Benny Zlotnik
Under the 3 dots as can be seen in the attached screenshot

On Thu, Feb 15, 2018 at 7:07 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> > On 15 Feb 2018, at 14:17, Andrei V  wrote:
> >
> > Hi !
> >
> >
> > I can’t locate “Sparsify” disk image command anywhere in oVirt 4.2.
> > Where it have been moved ?
>
> good question:)
> Was it lost in GUI redesign?
>
> >
> >
> > Thanks
> > Andrei
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Moving Templates

2018-04-03 Thread Benny Zlotnik
Hi Bryan,

You can go into the template -> storage tab -> select the disk and remove
it there

On Fri, Mar 30, 2018 at 4:50 PM, Bryan Sockel 
wrote:

> Hi,
>
>
> We are in the process of re-doing one of our storage domains.  As part of
> the process I needed to relocate my templates over to a temporary domain.
> To do this, I copy the disk from one domain to another.  In the past I have
> been able to go into disk’s and remove the template disk from the storage
> domain I no longer want it on.  Now when I go in to storage -> Disks ->
>  -> Storage and select the storage domain I wish to
> remove it from, the box is grayed out.
>
>
>
> Currently running Ovirt version 4.2.2.5-1.el7.centos
>
>
>
>
> Thank You,
>
>
> *Bryan Sockel*
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python-SDK4: Knowing snapshot status?

2018-04-09 Thread Benny Zlotnik
You can do that using something like:
snapshot_service = snapshots_service.snapshot_service(snapshot.id)
snapshot = snapshot_service.get()
if snapshot.snapshot_status == types.SnapshotStatus.OK:
  ...

But counting on the snapshot status is race prone, so in 4.2 a search by
correlation id was introduced and you can do something like this (taken
from ovirt-system-tests[1]):
correlation_id = uuid.uuid4()

vm1_snapshots_service.add(dead_snap1_params,
  query={'correlation_id': correlation_id})

testlib.assert_true_within_long(
    lambda:
    test_utils.all_jobs_finished(engine, correlation_id)
)

Where all_jobs_finished does:
try:
    jobs = engine.jobs_service().list(
        search='correlation_id=%s' % correlation_id
    )
except:
    jobs = engine.jobs_service().list()
return all(job.status != types.JobStatus.STARTED for job in jobs)


[1] -
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test-scenarios/004_basic_sanity.py#L360
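
Putting the fragments above together, a minimal self-contained sketch could
look like this (the engine URL, credentials and VM name are placeholders, and
it assumes the 4.2 Python SDK services):

import time
import uuid

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# placeholder connection details
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal',
                            password='password',
                            insecure=True)

system_service = connection.system_service()
vm = system_service.vms_service().list(search='name=myvm')[0]
snapshots_service = system_service.vms_service().vm_service(vm.id).snapshots_service()

# tag the add operation so the resulting jobs can be found later
correlation_id = str(uuid.uuid4())
snapshots_service.add(types.Snapshot(description='nightly backup'),
                      query={'correlation_id': correlation_id})

# wait until all jobs started for this correlation id have finished
jobs_service = system_service.jobs_service()
while True:
    try:
        jobs = jobs_service.list(search='correlation_id=%s' % correlation_id)
    except sdk.Error:
        jobs = jobs_service.list()
    if all(job.status != types.JobStatus.STARTED for job in jobs):
        break
    time.sleep(10)

connection.close()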

On Mon, Apr 9, 2018 at 2:42 PM,  wrote:

> Hi,
>
> I'm running ovirt-engine-sdk-python 4.2.4 and I'm performing some
> snapshot-related tasks. I'd like to somehow control the status of the
> snapshot in order to know when I'll be able to run the next
> snapshot-related operation.
>
> For example, I'd like to create a new snapshot and then delete X oldest
> snapshots. After creating the snapshot I have to make sure the snapshot
> operation has concluded to run the deletion.
>
> However, I'm unable to find a native way to get the status of a snapshot.
>
> In [1]: snap = conn.follow_link(vm.snapshots)[3]   # This returns one
> snapshot
>
> In [2]: snap.status
>
> In [3]: snap.status_detail
>
> So both status-related properties return None. I've managed to find a
> "poorman's" way by doing this:
>
> while True:
> try:
> snaps_service.service(snap.id).remove()
> except Error, e:
> if e.code == 409:
> sleep(30)
> continue
> else:
> break
>
> Which works but is quite "tricky".
>
> Is there a better way to do this?
>
> Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Playing with ISCSI, added a ISCSI target, decided to remove it.

2018-04-09 Thread Benny Zlotnik
Can you provide the full engine and vdsm logs?

On Mon, 9 Apr 2018, 22:08 Scott Walker,  wrote:

> Log file error is:
>
> 2018-04-09 15:05:09,576-04 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
> (default task-28) [5f605594-423e-43f6-9e42-e47453518701] Validation of
> action 'RunVm' failed for user admin@internal-authz. Reasons:
> VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_DISKS_ON_BACKUP_STORAGE
>
> On 9 April 2018 at 15:03, Scott Walker  wrote:
>
>> Now suddenly I'm getting
>>
>> All my original storage domains are still there and are local ones. I
>> added the ISCSI domain just to see how it worked (and removed it).
>>
>> What can I do to fix this?
>>
>> "Cannot run VM. Running VM can not contain disks which are stored on a
>> backup storage domain."
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Playing with ISCSI, added a ISCSI target, decided to remove it.

2018-04-10 Thread Benny Zlotnik
Is the storage domain marked as backup?
If it is, you cannot use its disks in an active VM. You can remove the flag
and try again.
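
The flag can be cleared from the Administration Portal (edit the storage
domain and uncheck the Backup option; its exact location in the dialog may
vary by version), or, as a sketch with the Python SDK, assuming the 4.2 API
where StorageDomain has a backup attribute:

import ovirtsdk4.types as types

# connection and sd_id (the id of the former iSCSI domain) are placeholders
sd_service = connection.system_service().storage_domains_service() \
    .storage_domain_service(sd_id)
sd_service.update(types.StorageDomain(backup=False))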

On Mon, Apr 9, 2018 at 10:52 PM, Scott Walker 
wrote:

> All relevant log files.
>
> On 9 April 2018 at 15:21, Benny Zlotnik  wrote:
>
>> Can you provide the full engine and vdsm logs?
>>
>> On Mon, 9 Apr 2018, 22:08 Scott Walker,  wrote:
>>
>>> Log file error is:
>>>
>>> 2018-04-09 15:05:09,576-04 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
>>> (default task-28) [5f605594-423e-43f6-9e42-e47453518701] Validation of
>>> action 'RunVm' failed for user admin@internal-authz. Reasons:
>>> VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_DISKS_
>>> ON_BACKUP_STORAGE
>>>
>>> On 9 April 2018 at 15:03, Scott Walker  wrote:
>>>
>>>> Now suddenly I'm getting
>>>>
>>>> All my original storage domains are still there and are local ones. I
>>>> added the ISCSI domain just to see how it worked (and removed it).
>>>>
>>>> What can I do to fix this?
>>>>
>>>> "Cannot run VM. Running VM can not contain disks which are stored on a
>>>> backup storage domain."
>>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm's are diskless

2018-04-18 Thread Benny Zlotnik
Can you attach engine and vdsm logs?
Also, which version are you using?


On Wed, 18 Apr 2018, 19:23 ,  wrote:

> Hello All,
>
> after an update and a reboot, 3 vm's are indicated as diskless.
> When I try to add disks I indeed see 3 available disks, but I also see that
> all 3 are indicated to be smaller than 1GB
> Also I do not know what disk goes with which vm.
>
> The version I'm running is now users@ovirt.org;
> I apologize if this question was raised ( many ) times before.
>
> Greetings, J.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host Install Fail over WebUI

2018-04-19 Thread Benny Zlotnik
Looks like you hit this: https://bugzilla.redhat.com/show_bug.cgi?id=1569420

On Thu, Apr 19, 2018 at 3:25 PM, Roger Meier 
wrote:

> Hi all,
>
> I wanted to add a new host to our current oVirt 4.2.2 setup and the
> install of the host fail with the following error message:
>
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-
> ansible-20180419123531-192.168.1.20-4b80801e.log
> ...
> > 2018-04-19 12:42:57,241 p=17471 u=ovirt |  TASK [hc-gluster-cgroups :
> > Set CPU quota] **
> > 2018-04-19 12:42:57,271 p=17471 u=ovirt |  [DEPRECATION WARNING]:
> > Using tests as filters is deprecated. Instead of using
> > `result|abs` instead use `result is abs`. This feature will be removed in
> > version 2.9. Deprecation warnings can be disabled by setting
> > deprecation_warnings=False in ansible.cfg.
> > 2018-04-19 12:42:57,343 p=17471 u=ovirt |  An exception occurred
> > during task execution. To see the full traceback, use -vvv. The error
> > was: AttributeError: 'int' object has no attribute 'startswith'
> > 2018-04-19 12:42:57,345 p=17471 u=ovirt |  fatal: [192.168.1.20]:
> > FAILED! => {}
> >
> > MSG:
> >
> > Unexpected failure during module execution.
> >
> > 2018-04-19 12:42:57,346 p=17471 u=ovirt |  RUNNING HANDLER
> > [hc-gluster-cgroups : Restart glusterd] 
> > 2018-04-19 12:42:57,348 p=17471 u=ovirt |  PLAY RECAP
> > *
> > 2018-04-19 12:42:57,348 p=17471 u=ovirt |  192.168.1.20
> > : ok=31   changed=11   unreachable=0failed=1
> ...
>
>
>
> Best regards
> Roger Meier
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Postgresql tables

2018-04-19 Thread Benny Zlotnik
It is in the disk_image_dynamic table
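
If you want to query it directly, something along these lines should work (a
sketch only: the column names differ between versions, and the join assumes
disk_image_dynamic.image_id references images.image_guid):

psql -U engine -d engine -c "
    SELECT d.*
    FROM disk_image_dynamic d
    JOIN images i ON i.image_guid = d.image_id
    WHERE i.image_group_id = 'a570a8a4-d4ff-4826-bc39-50bb0b42785c';"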

On Thu, Apr 19, 2018 at 3:36 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:

> Hi Team,
>
> I am trying to get the disk level statistics using oVirt with the
> following API,
>
> /ovirt-engine/api/disks/{unique_disk_id}/statistics/
>
>
> *and I get this response : *
>  {
> "statistic": [
> {
> "kind": "gauge",
> "type": "decimal",
> "unit": "bytes_per_second",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "disk": {
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c",
> "id": "a570a8a4-d4ff-4826-bc39-50bb0b42785c"
> },
> *"name": "data.current.read",*
> "description": "Read data rate",
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c/statistics/33b9212b-f9cb-3fd0-b364-248fb61e1272",
> "id": "33b9212b-f9cb-3fd0-b364-248fb61e1272"
> },
> {
> "kind": "gauge",
> "type": "decimal",
> "unit": "bytes_per_second",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "disk": {
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c",
> "id": "a570a8a4-d4ff-4826-bc39-50bb0b42785c"
> },
>* "name": "data.current.write",*
> "description": "Write data rate",
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c/statistics/2f23addd-4ebd-3d82-a449-c28778bc33eb",
> "id": "2f23addd-4ebd-3d82-a449-c28778bc33eb"
> },
> {
> "kind": "gauge",
> "type": "decimal",
> "unit": "seconds",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "disk": {
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c",
> "id": "a570a8a4-d4ff-4826-bc39-50bb0b42785c"
> },
>  *   "name": "disk.read.latency",*
> "description": "Read latency",
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c/statistics/3a7b3f72-d035-3bb9-b196-e86a4eb34993",
> "id": "3a7b3f72-d035-3bb9-b196-e86a4eb34993"
> },
> {
> "kind": "gauge",
> "type": "decimal",
> "unit": "seconds",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "disk": {
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c",
> "id": "a570a8a4-d4ff-4826-bc39-50bb0b42785c"
> },
>   *  "name": "disk.write.latency",*
> "description": "Write latency",
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c/statistics/b1e75c7b-cea4-37d2-8459-f7d68efc69a3",
> "id": "b1e75c7b-cea4-37d2-8459-f7d68efc69a3"
> },
> {
> "kind": "gauge",
> "type": "decimal",
> "unit": "seconds",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "disk": {
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c",
> "id": "a570a8a4-d4ff-4826-bc39-50bb0b42785c"
> },
> "name": "disk.flush.latency",
> "description": "Flush latency",
> "href": "/ovirt-engine/api/disks/a570a8a4-d4ff-4826-bc39-
> 50bb0b42785c/statistics/9c17ad7b-9ef1-3e8d-ad0a-ff8bee3925f0",
> "id": "9c17ad7b-9ef1-3e8d-ad0a-ff8bee3925f0"
> }
> ]
> }
>
> I am able to get the disk write and read latency, write and read
> bandwidth.
>
>
> *But I am not able to find the postgresql tables used for these statistics
> in oVirt? *
> *Could somebody let me know the statistics table for the disk?*
>
> Any help is much Appreciated.
>
> Thanks,
> Harry
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help debugging VM import error

2018-04-23 Thread Benny Zlotnik
Looks like a bug. Can you please file a report:
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

On Mon, Apr 23, 2018 at 9:38 PM, ~Stack~  wrote:

> Greetings,
>
> After my rebuild, I have imported my VM's. Everything went smooth and
> all of them came back, except one. One VM gives me the error "General
> command validation failure." which isn't helping me when I search for
> the problem.
>
> The oVirt engine logs aren't much better at pointing to what the failure
> is (posted below).
>
> Can someone help me figure out why this VM isn't importing, please?
>
> Thanks!
> ~Stack~
>
>
> 2018-04-23 13:31:44,313-05 INFO
> [org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigurationCommand]
> (default task-72) [6793fe73-7cda-4cb5-a806-7104a05c3c1b] Lock Acquired
> to object 'EngineLock:{exclusiveLocks='[infra01=VM_NAME,
> 0b64ced5-7e4b-48cd-9d0d-24e8b905758c=VM]',
> sharedLocks='[0b64ced5-7e4b-48cd-9d0d-24e8b905758c=REMOTE_VM]'}'
> 2018-04-23 13:31:44,349-05 ERROR
> [org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigurationCommand]
> (default task-72) [6793fe73-7cda-4cb5-a806-7104a05c3c1b] Error during
> ValidateFailure.: java.lang.NullPointerException
> at
> org.ovirt.engine.core.bll.validator.ImportValidator.
> validateStorageExistsForMemoryDisks(ImportValidator.java:140)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigurationComma
> nd.isValidDisks(ImportVmFromConfigurationCommand.java:151)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigurationComma
> nd.validate(ImportVmFromConfigurationCommand.java:103)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.CommandBase.internalValidate(
> CommandBase.java:779)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:368)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu
> nner.canRunActions(PrevalidatingMultipleActionsRunner.java:113)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu
> nner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.execute(
> PrevalidatingMultipleActionsRunner.java:76)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:596)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:566)
> [bll.jar:]
> at sun.reflect.GeneratedMethodAccessor914.invoke(Unknown Source)
> [:1.8.0_161]
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_161]
> at java.lang.reflect.Method.invoke(Method.java:498)
> [rt.jar:1.8.0_161]
> at
> org.jboss.as.ee.component.ManagedReferenceMethodIntercep
> tor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
> at
> org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.
> proceed(InterceptorContext.java:509)
> at
> org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.
> delegateInterception(Jsr299BindingsInterceptor.java:78)
> at
> org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.
> doMethodInterception(Jsr299BindingsInterceptor.java:88)
> at
> org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.
> processInvocation(Jsr299BindingsInterceptor.java:101)
> at
> org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.
> processInvocation(UserInterceptorFactory.java:63)
> at
> org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.
> proceed(InterceptorContext.java:509)
> at
> org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerIntercepto
> r.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
> [bll.jar:]
> at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
> [:1.8.0_161]
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_161]
> at java.lang.reflect.Method.invoke(Method.java:498)
> [rt.jar:1.8.0_161]
> at
> org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.
> processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
> at
> org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.
> processInvocation(ExecutionTimeInterceptor.java:43)
> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:422)
> at
> org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(
> ConcurrentContextInterceptor.java:45)
> [wildfly-

[ovirt-users] Re: strange issue: vm lost info on disk

2018-05-11 Thread Benny Zlotnik
Can you provide the logs? engine and vdsm.
Did you perform a live migration (the VM is running) or cold?

On Fri, May 11, 2018 at 2:49 PM, Juan Pablo 
wrote:

> Hi! , Im strugled about an ongoing problem:
>  after migrating a vm's disk from an iscsi domain to a nfs and ovirt
> reporting the migration was successful, I see there's no data 'inside' the
> vm's disk. we never had this issues with ovirt so Im stranged about the
> root cause and if theres a chance of recovering the information.
>
> can you please help me out troubleshooting this one? I would really
> appreciate it =)
> running ovirt 4.2.1 here!
>
> thanks in advance,
> JP
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: strange issue: vm lost info on disk

2018-05-11 Thread Benny Zlotnik
I see here a failed attempt:
2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-67)
[bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID:
USER_MOVED_DISK_FINISHED_FAILURE(2,011),
User admin@internal-authz have failed to move disk mail02-int_Disk1 to
domain 2penLA.

Then another:
2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-34)
[] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User
admin@internal-authz have failed to move disk mail02-int_Disk1 to domain
2penLA.

Here I see a successful attempt:
2018-05-09 21:58:42,628-03 INFO  [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector] (default task-50)
[940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: USER_MOVED_DISK(2,008),
User admin@internal-authz moving disk mail02-int_Disk1 to domain 2penLA.


Then, in the last attempt I see the attempt was successful but live merge
failed:
2018-05-11 03:37:59,509-03 ERROR
[org.ovirt.engine.core.bll.MergeStatusCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-2)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in
volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f,
52532d05-970e-4643-9774-96c31796062c]
2018-05-11 03:38:01,495-03 INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-51)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id:
'115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id:
'26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to complete
2018-05-11 03:38:01,501-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-51)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id:
'4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for step
'MERGE_STATUS'
2018-05-11 03:38:01,501-03 INFO
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-51)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command
'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364'
child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3,
1c320f4b-7296-43c4-a3e6-8a868e23fc35,
a0e9e70c-cd65-4dfb-bd00-076c4e99556a]' executions were completed, status
'FAILED'
2018-05-11 03:38:02,513-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-2)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Merging of snapshot
'319e8bbb-9efe-4de4-a9a6-862e3deb891f' images
'52532d05-970e-4643-9774-96c31796062c'..'5d9d2958-96bc-49fa-9100-2f33a3ba737f'
failed. Images have been marked illegal and can no longer be previewed or
reverted to. Please retry Live Merge on the snapshot to complete the
operation.
2018-05-11 03:38:02,519-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-2)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
with failure.
2018-05-11 03:38:03,530-03 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-37)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'RemoveSnapshot' id:
'26bc52a4-4509-4577-b342-44a679bc628f' child commands
'[4936d196-a891-4484-9cf5-fceaafbf3364]' executions were completed, status
'FAILED'
2018-05-11 03:38:04,548-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-66)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
2018-05-11 03:38:04,557-03 INFO
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-66)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Lock freed to object
'EngineLock:{exclusiveLocks='[4808bb70-c9cc-4286-aa39-16b5798213ac=LIVE_STORAGE_MIGRATION]',
sharedLocks=''}'

I do not see the merge attempt in the vdsm.log, so please send vdsm logs
for node02.phy.eze.ampgn.com.ar from that time.

Also, did you use the auto-generated snapshot to start the vm?


On Fri, May 11, 2018 at 6:11 PM, Juan Pablo 
wrote:

> after the xfs_repair, it says: sorry I could not find valid secondary
> superblock
>
> 2018-05-11 12:09 GMT-03:00 Juan Pablo :
>
>> hi,
>> Alias:
>> mail02-int_Disk1
>> Description:
>> ID:
>> 65ec515e-0aae-4fe6-a561-387929c7fb4d
>> Alignment:
>> Unknown

[ovirt-users] Re: strange issue: vm lost info on disk

2018-05-12 Thread Benny Zlotnik
Using the auto-generated snapshot is generally a bad idea as it's
inconsistent; you should remove it before moving further
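
As a sketch with the Python SDK (the description string of the auto-generated
snapshot is an assumption, check how it is actually named in your setup):

snapshots_service = connection.system_service().vms_service() \
    .vm_service(vm.id).snapshots_service()
for snap in snapshots_service.list():
    # remove only the LSM auto-generated snapshot, never the active layer
    if snap.description and snap.description.startswith('Auto-generated'):
        snapshots_service.snapshot_service(snap.id).remove()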

On Fri, May 11, 2018 at 7:25 PM, Juan Pablo 
wrote:

> I rebooted it with no luck, them I used the auto-gen snapshot , same luck.
> attaching the logs in gdrive
>
> thanks in advance
>
> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik :
>
>> I see here a failed attempt:
>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-67)
>> [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID:
>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz have
>> failed to move disk mail02-int_Disk1 to domain 2penLA.
>>
>> Then another:
>> 2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-34)
>> [] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User
>> admin@internal-authz have failed to move disk mail02-int_Disk1 to domain
>> 2penLA.
>>
>> Here I see a successful attempt:
>> 2018-05-09 21:58:42,628-03 INFO  [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector] (default task-50)
>> [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: USER_MOVED_DISK(2,008),
>> User admin@internal-authz moving disk mail02-int_Disk1 to domain 2penLA.
>>
>>
>> Then, in the last attempt I see the attempt was successful but live merge
>> failed:
>> 2018-05-11 03:37:59,509-03 ERROR 
>> [org.ovirt.engine.core.bll.MergeStatusCommand]
>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in
>> volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f,
>> 52532d05-970e-4643-9774-96c31796062c]
>> 2018-05-11 03:38:01,495-03 INFO  [org.ovirt.engine.core.bll.Ser
>> ialChildCommandsExecutionCallback] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id:
>> '115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id:
>> '26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to complete
>> 2018-05-11 03:38:01,501-03 ERROR [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id:
>> '4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for
>> step 'MERGE_STATUS'
>> 2018-05-11 03:38:01,501-03 INFO  [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-51)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command
>> 'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364'
>> child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3,
>> 1c320f4b-7296-43c4-a3e6-8a868e23fc35, a0e9e70c-cd65-4dfb-bd00-076c4e99556a]'
>> executions were completed, status 'FAILED'
>> 2018-05-11 03:38:02,513-03 ERROR [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-2)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Merging of snapshot
>> '319e8bbb-9efe-4de4-a9a6-862e3deb891f' images
>> '52532d05-970e-4643-9774-96c31796062c'..'5d9d2958-96bc-49fa-9100-2f33a3ba737f'
>> failed. Images have been marked illegal and can no longer be previewed or
>> reverted to. Please retry Live Merge on the snapshot to complete the
>> operation.
>> 2018-05-11 03:38:02,519-03 ERROR [org.ovirt.engine.core.bll.sna
>> pshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-2)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
>> with failure.
>> 2018-05-11 03:38:03,530-03 INFO  [org.ovirt.engine.core.bll.Con
>> currentChildCommandsExecutionCallback] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-37)
>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'RemoveSnapshot' id:
>> '26bc52a4-4509-4577-b342-44a679bc628f' child commands
>> '[4936d196-a891-4484-9cf5-fceaafbf3364]' executions were completed,
>> status 'FAILED'
>> 2018-05-11 03:38:04,548-03 ERROR 
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>> (EE-ManagedThread

[ovirt-users] Re: VM's disk stuck in migrating state

2018-05-17 Thread Benny Zlotnik
I believe you've hit this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1565040

You can try to release the lease manually using the sanlock client command
(there's an example in the comments on the bug); once the lease is free the
job will fail and the disk can be unlocked

On Thu, May 17, 2018 at 11:05 AM,  wrote:

> Hi,
>
> We're running oVirt 4.1.9 (I know it's not the recommended version, but we
> can't upgrade yet) and recently we had an issue with a Storage Domain while
> a VM was moving a disk. The Storage Domain went down for a few minutes,
> then it got back.
>
> However, the disk's state has stuck in a 'Migrating: 10%' state (see
> ss-2.png).
>
> I run the 'unlock_entity.sh' script to try to unlock the disk, with these
> parameters:
>
>  # PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh
> -t disk -u engine -v b4013aba-a936-4a54-bb14-670d3a8b7c38
>
> The disk's state changed to 'OK', but the actual state still states it's
> migrating (see ss-1.png).
>
> Calling the script with -t all doesn't make a difference either.
>
> Currently, the disk is unmanageable: cannot be deactivated, moved or
> copied, as it says there's a copying operation running already.
>
> Could someone provide a way to unlock this disk? I don't mind modifying a
> value directly into the database, I just need the copying process cancelled.
>
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: VM's disk stuck in migrating state

2018-05-17 Thread Benny Zlotnik
In the vdsm log you will find the Volume.getInfo output, which looks like this:
2018-05-17 11:55:03,257+0300 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '5c4d2216-
2eb3-4e24-b254-d5f83fde4dbe', 'voltype': 'INTERNAL', 'description':
'{"DiskAlias":"vm_Disk1","DiskDescription":""}', 'parent':
'---
-', 'format': 'RAW', 'generation': 3, 'image':
'b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc', 'ctime': '1526543244', 'disktype':
'DATA', '
legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children':
[], 'pool': '', 'capacity': '1073741824', 'uuid': u'7190913d-320c-4fc9-
a5b3-c55b26aa30f4', 'truesize': '0', 'type': 'SPARSE', 'lease': {'path':
u'/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2e
b3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease',
'owners': [1], 'version': 8L, 'o
ffset': 0}} (__init__:355)

The lease path in my case is:
/rhev/data-center/mnt/10.35.0.233:
_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease

Then you can look in /var/log/sanlock.log
2018-05-17 11:35:18 243132 [14847]: s2:r9 resource
5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0
for 2,9,5049

Then you can use this command to unlock it; the pid in this case is 5049:

sanlock client release -r RESOURCE -p pid
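
For illustration, with the resource string and pid taken from the sanlock.log
line above, the full command would look something like:

sanlock client release -r 5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0 -p 5049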


On Thu, May 17, 2018 at 11:52 AM, Benny Zlotnik  wrote:

> I believe you've hit this bug: https://bugzilla.redhat.c
> om/show_bug.cgi?id=1565040
>
> You can try to release the lease manually using the sanlock client command
> (there's an example in the comments on the bug),
> once the lease is free the job will fail and the disk can be unlock
>
> On Thu, May 17, 2018 at 11:05 AM,  wrote:
>
>> Hi,
>>
>> We're running oVirt 4.1.9 (I know it's not the recommended version, but
>> we can't upgrade yet) and recently we had an issue with a Storage Domain
>> while a VM was moving a disk. The Storage Domain went down for a few
>> minutes, then it got back.
>>
>> However, the disk's state has stuck in a 'Migrating: 10%' state (see
>> ss-2.png).
>>
>> I run the 'unlock_entity.sh' script to try to unlock the disk, with these
>> parameters:
>>
>>  # PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh
>> -t disk -u engine -v b4013aba-a936-4a54-bb14-670d3a8b7c38
>>
>> The disk's state changed to 'OK', but the actual state still states it's
>> migrating (see ss-1.png).
>>
>> Calling the script with -t all doesn't make a difference either.
>>
>> Currently, the disk is unmanageable: cannot be deactivated, moved or
>> copied, as it says there's a copying operation running already.
>>
>> Could someone provide a way to unlock this disk? I don't mind modifying a
>> value directly into the database, I just need the copying process cancelled.
>>
>> Thanks.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>>
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: VM's disk stuck in migrating state

2018-05-17 Thread Benny Zlotnik
I see it because I am on debug level; you need to enable it in order to see it:

https://www.ovirt.org/develop/developer-guide/vdsm/log-files/
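
As a rough sketch (double-check against the page above; file paths and section
names may differ per version): on the host, raise the relevant loggers in
/etc/vdsm/logger.conf to DEBUG and restart vdsmd, e.g.

# in /etc/vdsm/logger.conf, under [logger_root] (and/or the storage logger)
level=DEBUG

systemctl restart vdsmd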

On Thu, 17 May 2018, 13:10 ,  wrote:

> Hi,
>
> Thanks. I've checked vdsm logs on all my hosts but the only entry I can
> find grepping by Volume.getInfo is like this:
>
>2018-05-17 10:14:54,892+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer]
> RPC call Volume.getInfo succeeded in 0.30 seconds (__init__:539)
>
> I cannot find a line like yours... any other way on how to obtain those
> parameters. This is an iSCSI based storage FWIW (both source and
> destination of the movement).
>
> Thanks.
>
> On 2018-05-17 10:01, Benny Zlotnik wrote:
> > In the vdsm log you will find the volumeInfo log which looks like
> > this:
> >
> > 2018-05-17 11:55:03,257+0300 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
> > Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain':
> > '5c4d2216-
> > 2eb3-4e24-b254-d5f83fde4dbe', 'voltype': 'INTERNAL', 'description':
> > '{"DiskAlias":"vm_Disk1","DiskDescription":""}', 'parent':
> > '---
> > -', 'format': 'RAW', 'generation': 3, 'image':
> > 'b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc', 'ctime': '1526543244',
> > 'disktype': 'DATA', '
> > legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824',
> > 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid':
> > u'7190913d-320c-4fc9-
> > a5b3-c55b26aa30f4', 'truesize': '0', 'type': 'SPARSE', 'lease':
> > {'path':
> > u'/rhev/data-center/mnt/10.35.0.233:
> _root_storage__domains_sd1/5c4d2216-2e
> >
> b3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease',
> > 'owners': [1], 'version': 8L, 'o
> > ffset': 0}} (__init__:355)
> >
> > The lease path in my case is:
> > /rhev/data-center/mnt/10.35.0.
> 233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease
> >
> > Then you can look in /var/log/sanlock.log
> >
> > 2018-05-17 11:35:18 243132 [14847]: s2:r9 resource
> >
> 5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:
> _root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0
> > for 2,9,5049
> >
> > Then you can use this command to unlock, the pid in this case is 5049
> >
> > sanlock client release -r RESOURCE -p pid
> >
> > On Thu, May 17, 2018 at 11:52 AM, Benny Zlotnik 
> > wrote:
> >
> >> I believe you've hit this
> >> bug: https://bugzilla.redhat.com/show_bug.cgi?id=1565040 [1]
> >>
> >> You can try to release the lease manually using the sanlock client
> >> command (there's an example in the comments on the bug),
> >> once the lease is free the job will fail and the disk can be unlock
> >>
> >> On Thu, May 17, 2018 at 11:05 AM,  wrote:
> >>
> >>> Hi,
> >>>
> >>> We're running oVirt 4.1.9 (I know it's not the recommended
> >>> version, but we can't upgrade yet) and recently we had an issue
> >>> with a Storage Domain while a VM was moving a disk. The Storage
> >>> Domain went down for a few minutes, then it got back.
> >>>
> >>> However, the disk's state has stuck in a 'Migrating: 10%' state
> >>> (see ss-2.png).
> >>>
> >>> I run the 'unlock_entity.sh' script to try to unlock the disk,
> >>> with these parameters:
> >>>
> >>>  # PGPASSWORD=...
> >>> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -u
> >>> engine -v b4013aba-a936-4a54-bb14-670d3a8b7c38
> >>>
> >>> The disk's state changed to 'OK', but the actual state still
> >>> states it's migrating (see ss-1.png).
> >>>
> >>> Calling the script with -t all doesn't make a difference either.
> >>>
> >>> Currently, the disk is unmanageable: cannot be deactivated, moved
> >>> or copied, as it says there's a copying operation running already.
> >>>
> >>> Could someone provide a way to unlock this disk? I don't mind
> >>> modifying a value directly into the database, I just need the
> >>> copying process cancelled.
> >>>
> >>> Thanks.
> >>> ___
> >>> Users mailing list -- users@ovirt.org
> >>> To unsubscribe send an email to users-le...@ovirt.org
> >
> >
> >
> > Links:
> > --
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1565040
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: VM's disk stuck in migrating state

2018-05-17 Thread Benny Zlotnik
By the way, please verify it's the same issue: you should see "the volume
lease is not FREE - the job is running" in the engine log

On Thu, May 17, 2018 at 1:21 PM, Benny Zlotnik  wrote:

> I see because I am on debug level, you need to enable it in order to see
>
> https://www.ovirt.org/develop/developer-guide/vdsm/log-files/
>
> On Thu, 17 May 2018, 13:10 ,  wrote:
>
>> Hi,
>>
>> Thanks. I've checked vdsm logs on all my hosts but the only entry I can
>> find grepping by Volume.getInfo is like this:
>>
>>2018-05-17 10:14:54,892+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer]
>> RPC call Volume.getInfo succeeded in 0.30 seconds (__init__:539)
>>
>> I cannot find a line like yours... any other way on how to obtain those
>> parameters. This is an iSCSI based storage FWIW (both source and
>> destination of the movement).
>>
>> Thanks.
>>
>> On 2018-05-17 10:01, Benny Zlotnik wrote:
>> > In the vdsm log you will find the volumeInfo log which looks like
>> > this:
>> >
>> > 2018-05-17 11:55:03,257+0300 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
>> > Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain':
>> > '5c4d2216-
>> > 2eb3-4e24-b254-d5f83fde4dbe', 'voltype': 'INTERNAL', 'description':
>> > '{"DiskAlias":"vm_Disk1","DiskDescription":""}', 'parent':
>> > '---
>> > -', 'format': 'RAW', 'generation': 3, 'image':
>> > 'b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc', 'ctime': '1526543244',
>> > 'disktype': 'DATA', '
>> > legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824',
>> > 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid':
>> > u'7190913d-320c-4fc9-
>> > a5b3-c55b26aa30f4', 'truesize': '0', 'type': 'SPARSE', 'lease':
>> > {'path':
>> > u'/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_
>> sd1/5c4d2216-2e
>> > b3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-
>> b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease',
>> > 'owners': [1], 'version': 8L, 'o
>> > ffset': 0}} (__init__:355)
>> >
>> > The lease path in my case is:
>> > /rhev/data-center/mnt/10.35.0.233:_root_storage__domains_
>> sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-
>> fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease
>> >
>> > Then you can look in /var/log/sanlock.log
>> >
>> > 2018-05-17 11:35:18 243132 [14847]: s2:r9 resource
>> > 5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-
>> 4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:_
>> root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-
>> d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/
>> 7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0
>> > for 2,9,5049
>> >
>> > Then you can use this command to unlock, the pid in this case is 5049
>> >
>> > sanlock client release -r RESOURCE -p pid
>> >
>> > On Thu, May 17, 2018 at 11:52 AM, Benny Zlotnik 
>> > wrote:
>> >
>> >> I believe you've hit this
>> >> bug: https://bugzilla.redhat.com/show_bug.cgi?id=1565040 [1]
>> >>
>> >> You can try to release the lease manually using the sanlock client
>> >> command (there's an example in the comments on the bug),
>> >> once the lease is free the job will fail and the disk can be unlock
>> >>
>> >> On Thu, May 17, 2018 at 11:05 AM,  wrote:
>> >>
>> >>> Hi,
>> >>>
>> >>> We're running oVirt 4.1.9 (I know it's not the recommended
>> >>> version, but we can't upgrade yet) and recently we had an issue
>> >>> with a Storage Domain while a VM was moving a disk. The Storage
>> >>> Domain went down for a few minutes, then it got back.
>> >>>
>> >>> However, the disk's state has stuck in a 'Migrating: 10%' state
>> >>> (see ss-2.png).
>> >>>
>> >>> I run the 'unlock_entity.sh' script to try to unlock the disk,
>> >>> with these parameters:
>> >>>
>> >>>  # PGPASSWORD=...
>> >>> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -u
>> >>> engine -v b4013aba-a936-4a54-bb14-670d3a8b7c38
>> >>>
>> >>> The disk's state changed to 'OK', but the actual state still
>> >>> states it's migrating (see ss-1.png).
>> >>>
>> >>> Calling the script with -t all doesn't make a difference either.
>> >>>
>> >>> Currently, the disk is unmanageable: cannot be deactivated, moved
>> >>> or copied, as it says there's a copying operation running already.
>> >>>
>> >>> Could someone provide a way to unlock this disk? I don't mind
>> >>> modifying a value directly into the database, I just need the
>> >>> copying process cancelled.
>> >>>
>> >>> Thanks.
>> >>> ___
>> >>> Users mailing list -- users@ovirt.org
>> >>> To unsubscribe send an email to users-le...@ovirt.org
>> >
>> >
>> >
>> > Links:
>> > --
>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1565040
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: VM's disk stuck in migrating state

2018-05-17 Thread Benny Zlotnik
Which vdsm version are you using?

You can try looking for the image uuid in /var/log/sanlock.log

On Thu, May 17, 2018 at 2:40 PM,  wrote:

> Thanks.
>
> I've been able to see the line in the log, however, the format differs
> slightly from yours.
>
>   2018-05-17 12:24:44,132+0100 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
> Calling 'Volume.getInfo' in bridge with {u'storagepoolID':
> u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'imageID':
> u'b4013aba-a936-4a54-bb14-670d3a8b7c38', u'volumeID':
> u'c2cfbb02-9981-4fb7-baea-7257a824145c', u'storagedomainID':
> u'1876ab86-216f-4a37-a36b-2b5d99fcaad0'} (__init__:556)
> 2018-05-17 12:24:44,689+0100 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
> Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain':
> '1876ab86-216f-4a37-a36b-2b5d99fcaad0', 'voltype': 'INTERNAL',
> 'description': 'None', 'parent': 'ea9a0182-329f-4b8f-abe3-e894de95dac0',
> 'format': 'COW', 'generation': 1, 'image': 
> 'b4013aba-a936-4a54-bb14-670d3a8b7c38',
> 'ctime': '1526470759', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
> 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity':
> '21474836480', 'uuid': u'c2cfbb02-9981-4fb7-baea-7257a824145c',
> 'truesize': '1073741824', 'type': 'SPARSE', 'lease': {'owners': [8],
> 'version': 1L}} (__init__:582)
>
> As you can see, there's no path field there.
>
> How should I procceed?
>
> On 2018-05-17 12:01, Benny Zlotnik wrote:
>
>> vdsm-client replaces vdsClient, take a look
>> here: https://lists.ovirt.org/pipermail/devel/2016-July/013535.html
>> [4]
>>
>> On Thu, May 17, 2018 at 1:57 PM,  wrote:
>>
>> The issue is present in the logs:
>>>
>>>   2018-05-17 11:50:44,822+01 INFO
>>> [org.ovirt.engine.core.bll.storage.disk.image.VdsmImagePoller]
>>> (DefaultQuartzScheduler1) [39755bb7-9082-40d6-ae5e-64b5b2b5f98e]
>>> Command CopyData id: '84a49b25-0e37-4338-834e-08bd67c42860': the
>>> volume lease is not FREE - the job is running
>>>
>>> I tried setting the log level to debug but it seems I have not a
>>> vdsm-client command. All I have is a vdsm-tool command. Is it
>>> equivalent?
>>>
>>> Thanks
>>>
>>> On 2018-05-17 11:49, Benny Zlotnik wrote:
>>> By the way, please verify it's the same issue, you should see "the
>>> volume lease is not FREE - the job is running" in the engine log
>>>
>>> On Thu, May 17, 2018 at 1:21 PM, Benny Zlotnik
>>> 
>>> wrote:
>>>
>>> I see because I am on debug level, you need to enable it in order
>>> to
>>> see
>>>
>>> https://www.ovirt.org/develop/developer-guide/vdsm/log-files/ [1]
>>>
>>> [3]
>>>
>>> On Thu, 17 May 2018, 13:10 ,  wrote:
>>>
>>> Hi,
>>>
>>> Thanks. I've checked vdsm logs on all my hosts but the only entry
>>> I can
>>> find grepping by Volume.getInfo is like this:
>>>
>>>2018-05-17 10:14:54,892+0100 INFO  (jsonrpc/0)
>>> [jsonrpc.JsonRpcServer]
>>> RPC call Volume.getInfo succeeded in 0.30 seconds (__init__:539)
>>>
>>> I cannot find a line like yours... any other way on how to obtain
>>> those
>>> parameters. This is an iSCSI based storage FWIW (both source and
>>> destination of the movement).
>>>
>>> Thanks.
>>>
>>> On 2018-05-17 10:01, Benny Zlotnik wrote:
>>> In the vdsm log you will find the volumeInfo log which looks
>>> like
>>> this:
>>>
>>> 2018-05-17 11:55:03,257+0300 DEBUG (jsonrpc/6)
>>> [jsonrpc.JsonRpcServer]
>>> Return 'Volume.getInfo' in bridge with {'status': 'OK',
>>> 'domain':
>>> '5c4d2216-
>>> 2eb3-4e24-b254-d5f83fde4dbe', 'voltype': 'INTERNAL',
>>> 'description':
>>> '{"DiskAlias":"vm_Disk1","DiskDescription":""}', 'parent':
>>> '---
>>> -', 'format': 

[ovirt-users] Re: VM's disk stuck in migrating state

2018-05-17 Thread Benny Zlotnik
Sorry, I forgot it's iSCSI; it's a bit different.

In my case it would look something like:
2018-05-17 17:30:12,740+0300 DEBUG (jsonrpc/7) [jsonrpc.JsonRpcServer]
Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '3e541b2d-
2a49-4eb8-ae4b-aa9acee228c6', 'voltype': 'INTERNAL', 'description':
'{"DiskAlias":"vm_Disk1","DiskDescription":""}', 'parent':
'---
-', 'format': 'RAW', 'generation': 0, 'image':
'dd6b5ae0-196e-4879-b076-a0a8d8a1dfde', 'ctime': '1526566607', 'disktype':
'DATA', '
legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children':
[], 'pool': '', 'capacity': '1073741824', 'uuid':
u'221c45e1-7f65-42c8-afc3-0ccc1d6fc148', 'truesize': '1073741824', 'type':
'PREALLOCATED', 'lease': {'path':
'/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases', 'owners
': [], 'version': None, 'offset': 109051904}} (__init__:355)

I then look for 221c45e1-7f65-42c8-afc3-0ccc1d6fc148 in sanlock.log:
2018-05-17 17:30:12 20753 [3335]: s10:r14 resource
3e541b2d-2a49-4eb8-ae4b-aa9acee228c6:221c45e1-7f65-42c8-afc3-0ccc1d6fc148:/dev/3e541b2d-2a49-4eb
8-ae4b-aa9acee228c6/leases:109051904 for 2,11,31496

So the resource would be:
3e541b2d-2a49-4eb8-ae4b-aa9acee228c6:221c45e1-7f65-42c8-afc3-0ccc1d6fc148:/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases:109051904
and the pid is 31496

running
$ sanlock direct dump
/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases:109051904
  offset  lockspace                             resource                              timestamp  own  gen  lver
  3e541b2d-2a49-4eb8-ae4b-aa9acee228c6  221c45e1-7f65-42c8-afc3-0ccc1d6fc148  020753  0001  0004  5
...

If the vdsm pid changed (and it probably did) it will be different, so I
acquire it for the new pid
$ sanlock client acquire -r
3e541b2d-2a49-4eb8-ae4b-aa9acee228c6:221c45e1-7f65-42c8-afc3-0ccc1d6fc148:/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases:109051904
-p 32265
acquire pid 32265

Then I can see the timestamp changed
$ sanlock direct dump
/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases:109051904
  offset  lockspace                             resource                              timestamp  own  gen  lver
  3e541b2d-2a49-4eb8-ae4b-aa9acee228c6  221c45e1-7f65-42c8-afc3-0ccc1d6fc148  021210  0001  0005  6

And then I release it:
$ sanlock client release -r
3e541b2d-2a49-4eb8-ae4b-aa9acee228c6:221c45e1-7f65-42c8-afc3-0ccc1d6fc148:/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases:109051904
-p 32265
release pid 32265
release done 0

$ sanlock direct dump
/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases:109051904
  offset  lockspace                             resource                              timestamp  own  gen  lver
  3e541b2d-2a49-4eb8-ae4b-aa9acee228c6  221c45e1-7f65-42c8-afc3-0ccc1d6fc148  00  0001  0005  6

The timestamp is zeroed and the lease is free


On Thu, May 17, 2018 at 3:38 PM,  wrote:

> This is vdsm 4.19.45. I grepped the disk uuid in /var/log/sanlock.log but
> unfortunately no entry there...
>
>
>> On 2018-05-17 13:11, Benny Zlotnik wrote:
>
>> Which vdsm version are you using?
>>
>> You can try looking for the image uuid in /var/log/sanlock.log
>>
>> On Thu, May 17, 2018 at 2:40 PM,  wrote:
>>
>> Thanks.
>>>
>>> I've been able to see the line in the log, however, the format
>>> differs slightly from yours.
>>>
>>>   2018-05-17 12:24:44,132+0100 DEBUG (jsonrpc/6)
>>> [jsonrpc.JsonRpcServer] Calling 'Volume.getInfo' in bridge with
>>> {u'storagepoolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63',
>>> u'imageID': u'b4013aba-a936-4a54-bb14-670d3a8b7c38', u'volumeID':
>>> u'c2cfbb02-9981-4fb7-baea-7257a824145c', u'storagedomainID':
>>> u'1876ab86-216f-4a37-a36b-2b5d99fcaad0'} (__init__:556)
>>> 2018-05-17 12:24:44,689+0100 DEBUG (jsonrpc/6)
>>> [jsonrpc.JsonRpcServer] Return 'Volume.getInfo' in bridge with
>>> {'status': 'OK', 'domain': '1876ab86-216f-4a37-a36b-2b5d99fcaad0',
>>> 'voltype': 'INTERNAL', 'description': 'None', 'parent':
>>> 'ea9a0182-329f-4b8f-abe3-e894de95dac0', 'format': 'COW',
>>> 'generation': 1, 'image': 'b4013aba-a936-4a54-bb14-670d3a8b7c38',
>>> &#

[ovirt-users] Re: problem to delete snapshot

2018-05-17 Thread Benny Zlotnik
Could be this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1555116

Adding Ala

On Thu, May 17, 2018 at 5:00 PM, Marcelo Leandro 
wrote:

> Error in engine.log.
>
>
> 2018-05-17 10:58:56,766-03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (default task-31) [c4fc9663-51c1-4442-ba1b-01e4efe8c62c] Lock Acquired to
> object 
> 'EngineLock:{exclusiveLocks='[f120d81e-db65-44b8-b239-95d8ba3e0e31=VM]',
> sharedLocks=''}'
> 2018-05-17 10:58:56,923-03 ERROR 
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (default task-31) [c4fc9663-51c1-4442-ba1b-01e4efe8c62c] Error during
> ValidateFailure.: java.lang.NullPointerException
> at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.
> getTotalSizeForMerge(StorageDomainValidator.java:121) [bll.jar:]
> at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.
> hasSpaceForMerge(StorageDomainValidator.java:207) [bll.jar:]
> at org.ovirt.engine.core.bll.validator.storage.
> MultipleStorageDomainsValidator.lambda$allDomainsHaveSpaceForMerge$0(
> MultipleStorageDomainsValidator.java:128) [bll.jar:]
> at org.ovirt.engine.core.bll.validator.storage.
> MultipleStorageDomainsValidator.validOrFirstFailure(
> MultipleStorageDomainsValidator.java:190) [bll.jar:]
> at org.ovirt.engine.core.bll.validator.storage.
> MultipleStorageDomainsValidator.allDomainsHaveSpaceForMerge(
> MultipleStorageDomainsValidator.java:125) [bll.jar:]
> at org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand.
> validateStorageDomains(RemoveSnapshotCommand.java:381) [bll.jar:]
> at org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand.validate(
> RemoveSnapshotCommand.java:359) [bll.jar:]
> at 
> org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:840)
> [bll.jar:]
> at 
> org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:390)
> [bll.jar:]
> at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.
> execute(DefaultBackendActionExecutor.java:13) [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:516)
> [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:498)
> [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:451)
> [bll.jar:]
> at sun.reflect.GeneratedMethodAccessor1100.invoke(Unknown Source)
> [:1.8.0_161]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161]
> at org.jboss.as.ee.component.ManagedReferenceMethodIntercep
> tor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
> at org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext$Invocation.
> proceed(InterceptorContext.java:437)
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.
> delegateInterception(Jsr299BindingsInterceptor.java:70)
> [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.
> doMethodInterception(Jsr299BindingsInterceptor.java:80)
> [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(
> Jsr299BindingsInterceptor.java:93) [wildfly-weld-10.1.0.Final.
> jar:10.1.0.Final]
> at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.
> processInvocation(UserInterceptorFactory.java:63)
> at org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext$Invocation.
> proceed(InterceptorContext.java:437)
> at org.ovirt.engine.core.bll.interceptors.
> CorrelationIdTrackerInterceptor.aroundInvoke(
> CorrelationIdTrackerInterceptor.java:13) [bll.jar:]
> at sun.reflect.GeneratedMethodAccessor228.invoke(Unknown Source)
> [:1.8.0_161]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161]
> at org.jboss.as.ee.component.ManagedReferenceLifecycleMetho
> dInterceptor.processInvocation(ManagedReferenceLifecycleMetho
> dInterceptor.java:89)
> at org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.invocationmetrics.
> ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
> [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext$Invocation.
> proceed(InterceptorContext.java:437)
> at org.jboss.weld.ejb.AbstractEJBRequestScopeActivat
> ionInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
> [weld-core-impl-2.3.5.Final.jar:2.3.5.Final]
> at org.jboss.as.weld.ejb.EjbRequestS

[ovirt-users] Re: How to delete leftover of a failed live storage migration disk

2018-05-23 Thread Benny Zlotnik
Do you see this disk on engine side? it should be aware of this disk since
> it created
> the disk during live storage migration.
>
> Also, we should not have leftovers volumes after failed operations. Please
> file a bug
> for this and attach both engine.log and vdsm.log on the host doing the
> live storage
> migration.
>
> Nir
>
The fix is already in progress :)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: ui error

2018-06-03 Thread Benny Zlotnik
Which version are you using?

On Sun, 3 Jun 2018, 12:57 Arsène Gschwind, 
wrote:

> Hi,
>
> in the UI error log ui.log i do get a lot of those errors:
>
> 2018-06-03 10:57:17,486+02 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-52) [] Permutation name: AEA766696E550914FBFDA5936BC98453
> 2018-06-03 10:57:17,486+02 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-52) [] Uncaught exception:
> com.google.gwt.core.client.JavaScriptException: (TypeError) : Cannot read
> property 'G' of null
> at
> org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel.$lambda$1(DisksAllocationModel.java:107)
> at
> org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$lambda$1$Type.onSuccess(DisksAllocationModel.java:107)
> at
> org.ovirt.engine.ui.frontend.Frontend$1.$onSuccess(Frontend.java:227)
> [frontend.jar:]
> at
> org.ovirt.engine.ui.frontend.Frontend$1.onSuccess(Frontend.java:227)
> [frontend.jar:]
> at
> org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.$onSuccess(OperationProcessor.java:133)
> [frontend.jar:]
> at
> org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.onSuccess(OperationProcessor.java:133)
> [frontend.jar:]
> at
> org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:270)
> [frontend.jar:]
> at
> org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:270)
> [frontend.jar:]
> at
> com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198)
> [gwt-servlet.jar:]
> at
> com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233)
> [gwt-servlet.jar:]
> at
> com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)
> [gwt-servlet.jar:]
> at Unknown.eval(webadmin-0.js)
> at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236)
> [gwt-servlet.jar:]
> at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275)
> [gwt-servlet.jar:]
> at Unknown.eval(webadmin-0.js)
>
>
> This error also appears when trying to migrate a VM disk to another SD.
> Any idea why this happens?
>
> regards,
> Arsène
>
> --
>
> 
>
> *Arsène Gschwind*
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
> Tel. +41 79 449 25 63  |  http://its.unibas.ch
> ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F3JZZHGZYAEXN33TON46LWF3IQNK7KUM/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2H3BD6WQZG6GKIY5753MBFPGRXG3V3DD/


[ovirt-users] Re: ui error

2018-06-03 Thread Benny Zlotnik
Are you able to move the disk?

Can you open a bug?

On Sun, Jun 3, 2018 at 1:35 PM, Arsène Gschwind 
wrote:

> I'm using version : 4.2.3.8-1.el7 the latest version.
>
>
> On Sun, 2018-06-03 at 12:59 +0300, Benny Zlotnik wrote:
>
> Which version are you using?
>
> On Sun, 3 Jun 2018, 12:57 Arsène Gschwind, 
> wrote:
>
> Hi,
>
> in the UI error log ui.log i do get a lot of those errors:
>
> 2018-06-03 10:57:17,486+02 ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-52) [] Permutation name: AEA766696E550914FBFDA5936BC98453
> 2018-06-03 10:57:17,486+02 ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-52) [] Uncaught exception: 
> com.google.gwt.core.client.JavaScriptException:
> (TypeError) : Cannot read property 'G' of null
> at org.ovirt.engine.ui.uicommonweb.models.storage.
> DisksAllocationModel.$lambda$1(DisksAllocationModel.java:107)
> at org.ovirt.engine.ui.uicommonweb.models.storage.
> DisksAllocationModel$lambda$1$Type.onSuccess(
> DisksAllocationModel.java:107)
> at 
> org.ovirt.engine.ui.frontend.Frontend$1.$onSuccess(Frontend.java:227)
> [frontend.jar:]
> at 
> org.ovirt.engine.ui.frontend.Frontend$1.onSuccess(Frontend.java:227)
> [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.
> OperationProcessor$1.$onSuccess(OperationProcessor.java:133)
> [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.
> OperationProcessor$1.onSuccess(OperationProcessor.java:133)
> [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.
> GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:270)
> [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.
> GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:270)
> [frontend.jar:]
> at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.
> onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
> at 
> com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233)
> [gwt-servlet.jar:]
> at 
> com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)
> [gwt-servlet.jar:]
> at Unknown.eval(webadmin-0.js)
> at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236)
> [gwt-servlet.jar:]
> at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275)
> [gwt-servlet.jar:]
> at Unknown.eval(webadmin-0.js)
>
>
> This error also appears when tying to migrate a VM disk to another SD.
> Any idea why this happens?
>
> regards,
> Arsène
>
> --
>
> 
>
> *Arsène Gschwind*
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70  |  CH-4056 Basel  |  Switzerland
> Tel. +41 79 449 25 63  |  http://its.unibas.ch
> ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/F3JZZHGZYAEXN33TON46LWF3IQNK7KUM/
>
> --
>
> 
>
> *Arsène Gschwind*
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70  |  CH-4056 Basel  |  Switzerland

[ovirt-users] Re: Moving from thin to preallocated storage domains

2018-06-13 Thread Benny Zlotnik
Hi,

What do you mean by converting the LUN from thin to preallocated?
oVirt creates LVs on top of the LUNs you provide
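
For reference, those LVs are visible from any connected host. A minimal sketch,
assuming a block (FC/iSCSI) storage domain; the VG is named after the storage
domain UUID, so the placeholder below needs to be replaced:

# list the LVs vdsm created inside the storage domain's VG
lvs -o lv_name,lv_size,lv_tags <storage-domain-uuid>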

On Wed, Jun 13, 2018 at 2:05 PM, Albl, Oliver 
wrote:

> Hi all,
>
>
>
>   I have to move some FC storage domains from thin to preallocated. I
> would set the storage domain to maintenance, convert the LUN from thin to
> preallocated on the array, remove “Discard After Delete” from the advanced
> settings of the storage domain and activate it again. Is there anything else
> I need to take care of?
>
>
>
> All the best,
>
> Oliver
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/VUEQY5DHUC633US5HZQO3N2IQ2TVCZPX/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QA3PA6EEO5LLUPWHOS6E7UEFOUNWZBBE/


[ovirt-users] Re: Moving from thin to preallocated storage domains

2018-06-14 Thread Benny Zlotnik
Adding Idan

On Wed, Jun 13, 2018 at 6:57 PM, Bruckner, Simone <
simone.bruck...@fabasoft.com> wrote:

> Hi,
>
>
>
>   I have defined thin LUNs on the array and presented them to the oVirt
> hosts. I will change the LUN from thin to preallocated on the array (which
> is transparent to the oVirt host).
>
>
>
> Besides removing “discard after delete” from the storage domain flags, is
> there anything else I need to take care of on the oVirt side?
>
>
>
> All the best,
>
> Oliver
>
>
>
> *From:* Benny Zlotnik 
> *Sent:* Wednesday, June 13, 2018 17:32
> *To:* Albl, Oliver 
> *Cc:* users@ovirt.org
> *Subject:* [ovirt-users] Re: Moving from thin to preallocated storage
> domains
>
>
>
> Hi,
>
>
>
> What do you mean by converting the LUN from thin to preallocated?
>
> oVirt creates LVs on top of the LUNs you provide
>
>
>
> On Wed, Jun 13, 2018 at 2:05 PM, Albl, Oliver 
> wrote:
>
> Hi all,
>
>
>
>   I have to move some FC storage domains from thin to preallocated. I
> would set the storage domain to maintenance, convert the LUN from thin to
> preallocated on the array, remove “Discard After Delete” from the advanced
> settings of the storage domain and active it again. Is there anything else
> I need to take care of?
>
>
>
> All the best,
>
> Oliver
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/VUEQY5DHUC633US5HZQO3N2IQ2TVCZPX/
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWUT4AISMHQ6YONSMN53MSRLNBH2X6F7/


[ovirt-users] Re: General failure

2018-06-18 Thread Benny Zlotnik
Can you provide full engine and vdsm logs?

On Mon, Jun 18, 2018 at 11:20 AM,  wrote:

> Hi,
>
> We're running oVirt 4.1.9 (we cannot upgrade at this time) and we're
> having a major problem in our infrastructure. On Friday, snapshots were
> automatically created on more than 200 VMs and as this was just a test
> task, all of them were deleted at the same time, which seems to have
> corrupted several VMs.
>
> When trying to delete a snapshot on some of the VMs, a "General error" is
> thrown with a NullPointerException in the engine log (attached).
>
> But the worst part is that when some of these machines is powered off and
> then powered on, the VMs are corrupt...
>
> VM myvm is down with error. Exit message: Bad volume specification
> {u'index': 0, u'domainID': u'110ea376-d789-40a1-b9f6-6b40c31afe01',
> 'reqsize': '0', u'format': u'cow', u'bootOrder': u'1', u'address':
> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x', u'type':
> u'pci', u'slot': u'0x06'}, u'volumeID': 
> u'1fd0f9aa-6505-45d2-a17e-859bd5dd4290',
> 'apparentsize': '23622320128', u'imageID': 
> u'65519220-68e1-462a-99b3-f0763c78eae2',
> u'discard': False, u'specParams': {}, u'readonly': u'false', u'iface':
> u'virtio', u'optional': u'false', u'deviceId':
> u'65519220-68e1-462a-99b3-f0763c78eae2', 'truesize': '23622320128',
> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device': u'disk',
> u'shared': u'false', u'propagateErrors': u'off', u'type': u'disk'}.
>
> We're really frustrated by now and don't know how to proceed... We have a
> DB backup (with engine-backup) from Thursday which would have a "sane" DB
> definition without all the snapshots, as they were all created on Friday.
> Would it be safe to restore this backup?
>
> Any help is really appreciated...
>
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/P5OOGBL3BRZIQ2I46FYELBUIIWT5QK4C/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7SLIUR6JDE4RHLEK72Y3NF6JFWXBW4PZ/


[ovirt-users] Re: General failure

2018-06-18 Thread Benny Zlotnik
Can you send the SPM logs as well?

On Mon, Jun 18, 2018 at 1:13 PM,  wrote:

> Hi Benny,
>
> Please find the logs at [1].
>
> Thank you.
>
>   [1]: https://wetransfer.com/downloads/12208fb4a6a5df3114bbbc10af1
> 94c8820180618101223/647c066b7b91096570def304da86dbca20180618101223/583d3d
>
>
> On 2018-06-18 09:28, Benny Zlotnik wrote:
>
>> Can you provide full engine and vdsm logs?
>>
>> On Mon, Jun 18, 2018 at 11:20 AM,  wrote:
>>
>> Hi,
>>>
>>> We're running oVirt 4.1.9 (we cannot upgrade at this time) and
>>> we're having a major problem in our infrastructure. On friday, a
>>> snapshots were automatically created on more than 200 VMs and as
>>> this was just a test task, all of them were deleted at the same
>>> time, which seems to have corrupted several VMs.
>>>
>>> When trying to delete a snapshot on some of the VMs, a "General
>>> error" is thrown with a NullPointerException in the engine log
>>> (attached).
>>>
>>> But the worst part is that when some of these machines is powered
>>> off and then powered on, the VMs are corrupt...
>>>
>>> VM myvm is down with error. Exit message: Bad volume specification
>>> {u'index': 0, u'domainID': u'110ea376-d789-40a1-b9f6-6b40c31afe01',
>>> 'reqsize': '0', u'format': u'cow', u'bootOrder': u'1', u'address':
>>> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x',
>>> u'type': u'pci', u'slot': u'0x06'}, u'volumeID':
>>> u'1fd0f9aa-6505-45d2-a17e-859bd5dd4290', 'apparentsize':
>>> '23622320128', u'imageID': u'65519220-68e1-462a-99b3-f0763c78eae2',
>>> u'discard': False, u'specParams': {}, u'readonly': u'false',
>>> u'iface': u'virtio', u'optional': u'false', u'deviceId':
>>> u'65519220-68e1-462a-99b3-f0763c78eae2', 'truesize': '23622320128',
>>> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device':
>>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type':
>>> u'disk'}.
>>>
>>> We're really frustrated by now and don't know how to procceed... We
>>> have a DB backup (with engine-backup) from thursday which would have
>>> a "sane" DB definition without all the snapshots, as they were all
>>> created on friday. Would it be safe to restore this backup?
>>>
>>> Any help is really appreciated...
>>>
>>> Thanks.
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ [1]
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/ [2]
>>> List Archives:
>>>
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/messag
>> e/P5OOGBL3BRZIQ2I46FYELBUIIWT5QK4C/
>>
>>> [3]
>>>
>>
>>
>>
>> Links:
>> --
>> [1] https://www.ovirt.org/site/privacy-policy/
>> [2] https://www.ovirt.org/community/about/community-guidelines/
>> [3]
>> https://lists.ovirt.org/archives/list/users@ovirt.org/messag
>> e/P5OOGBL3BRZIQ2I46FYELBUIIWT5QK4C/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3MWI6UHSVYXAI5OWOQWSOC7N7ZGOZ2VA/


[ovirt-users] Re: General failure

2018-06-18 Thread Benny Zlotnik
I'm having trouble following the errors, I think the SPM changed or the
vdsm log from the right host might be missing.

However, I believe what started the problems is this transaction timeout:
2018-06-15 14:20:51,378+01 ERROR
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-29) [1db468cb-85fd-4189-b356-d31781461504]
[within thread]: endAction for action type RemoveSnapshotSingleDisk threw
an exception.: org.springframework.jdbc.CannotGetJdbcConnectionException:
Could not get JDBC Connection; nested exception is java.sql.SQLException:
javax.resource.ResourceException: IJ000460: Error checking for a transaction
at
org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)
[spring-jdbc.jar:4.2.4.RELEASE]
at
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:615)
[spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:680)
[spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:712)
[spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:762)
[spring-jdbc.jar:4.2.4.RELEASE]
at
org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152)
[dal.jar:]

This looks like a bug

Regardless, I am not sure restoring a backup would help since you probably
have orphaned images on the storage which need to be removed

Adding Ala

On Mon, Jun 18, 2018 at 4:19 PM,  wrote:

> Hi Benny,
>
> Please find the SPM logs at [1].
>
> Thank you
>
>   [1]: https://wetransfer.com/downloads/62bf649462aabbc2ef21824682b
> 0a08320180618131825/036b7782f58d337baf909a7220d8455320180618131825/5550ee
>
> On 2018-06-18 13:19, Benny Zlotnik wrote:
>
>> Can you send the SPM logs as well?
>>
>> On Mon, Jun 18, 2018 at 1:13 PM,  wrote:
>>
>> Hi Benny,
>>>
>>> Please find the logs at [1].
>>>
>>> Thank you.
>>>
>>>   [1]:
>>>
>>> https://wetransfer.com/downloads/12208fb4a6a5df3114bbbc10af1
>> 94c8820180618101223/647c066b7b91096570def304da86dbca20180618101223/583d3d
>>
>>> [1]
>>>
>>>
>>> On 2018-06-18 09:28, Benny Zlotnik wrote:
>>>
>>> Can you provide full engine and vdsm logs?
>>>
>>> On Mon, Jun 18, 2018 at 11:20 AM,  wrote:
>>>
>>> Hi,
>>>
>>> We're running oVirt 4.1.9 (we cannot upgrade at this time) and
>>> we're having a major problem in our infrastructure. On friday, a
>>> snapshots were automatically created on more than 200 VMs and as
>>> this was just a test task, all of them were deleted at the same
>>> time, which seems to have corrupted several VMs.
>>>
>>> When trying to delete a snapshot on some of the VMs, a "General
>>> error" is thrown with a NullPointerException in the engine log
>>> (attached).
>>>
>>> But the worst part is that when some of these machines is powered
>>> off and then powered on, the VMs are corrupt...
>>>
>>> VM myvm is down with error. Exit message: Bad volume specification
>>> {u'index': 0, u'domainID': u'110ea376-d789-40a1-b9f6-6b40c31afe01',
>>> 'reqsize': '0', u'format': u'cow', u'bootOrder': u'1', u'address':
>>> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x',
>>> u'type': u'pci', u'slot': u'0x06'}, u'volumeID':
>>> u'1fd0f9aa-6505-45d2-a17e-859bd5dd4290', 'apparentsize':
>>> '23622320128', u'imageID': u'65519220-68e1-462a-99b3-f0763c78eae2',
>>> u'discard': False, u'specParams': {}, u'readonly': u'false',
>>> u'iface': u'virtio', u'optional': u'false', u'deviceId':
>>> u'65519220-68e1-462a-99b3-f0763c78eae2', 'truesize': '23622320128',
>>> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device':
>>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type':
>>> u'disk'}.
>>>
>>> We're really frustrated by now and don't know how to procceed... We
>>> have a DB backup (with engine-backup) from thursday which would
>>> have
>>> a "sane" DB definition w

[ovirt-users] Re: General failure

2018-06-18 Thread Benny Zlotnik
Can you add the server.log?

On Mon, Jun 18, 2018 at 5:46 PM,  wrote:

> Indeed, when the problem started I think the SPM was the host I added as
> VDSM log in the first e-mail. Currently it is the one I sent in the second
> mail.
>
> FWIW, if it helps to debug more fluently, we can provide VPN access to our
> infrastructure so you can access and see whatever you need (all hosts, DB,
> etc...).
>
> Right now the machines that keep running work, but once shut down they
> start showing the problem below...
>
> Thank you
>
>
> On 2018-06-18 15:20, Benny Zlotnik wrote:
>
>> I'm having trouble following the errors, I think the SPM changed or
>> the vdsm log from the right host might be missing.
>>
>> However, I believe what started the problems is this transaction
>> timeout:
>>
>> 2018-06-15 14:20:51,378+01 ERROR
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (org.ovirt.thread.pool-6-thread-29)
>> [1db468cb-85fd-4189-b356-d31781461504] [within thread]: endAction for
>> action type RemoveSnapshotSingleDisk threw an exception.:
>> org.springframework.jdbc.CannotGetJdbcConnectionException: Could not
>> get JDBC Connection; nested exception is java.sql.SQLException:
>> javax.resource.ResourceException: IJ000460: Error checking for a
>> transaction
>>  at
>> org.springframework.jdbc.datasource.DataSourceUtils.getConne
>> ction(DataSourceUtils.java:80)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:615)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:680)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:712)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:762)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$P
>> ostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDi
>> alect.java:152)
>> [dal.jar:]
>>
>> This looks like a bug
>>
>> Regardless, I am not sure restoring a backup would help since you
>> probably have orphaned images on the storage which need to be removed
>>
>> Adding Ala
>>
>> On Mon, Jun 18, 2018 at 4:19 PM,  wrote:
>>
>> Hi Benny,
>>>
>>> Please find the SPM logs at [1].
>>>
>>> Thank you
>>>
>>>   [1]:
>>>
>>> https://wetransfer.com/downloads/62bf649462aabbc2ef21824682b
>> 0a08320180618131825/036b7782f58d337baf909a7220d8455320180618131825/5550ee
>>
>>> [1]
>>>
>>> On 2018-06-18 13:19, Benny Zlotnik wrote:
>>> Can you send the SPM logs as well?
>>>
>>> On Mon, Jun 18, 2018 at 1:13 PM,  wrote:
>>>
>>> Hi Benny,
>>>
>>> Please find the logs at [1].
>>>
>>> Thank you.
>>>
>>>   [1]:
>>>
>>>
>>> https://wetransfer.com/downloads/12208fb4a6a5df3114bbbc10af1
>> 94c8820180618101223/647c066b7b91096570def304da86dbca20180618101223/583d3d
>>
>>> [2]
>>>
>>> [1]
>>>
>>> On 2018-06-18 09:28, Benny Zlotnik wrote:
>>>
>>> Can you provide full engine and vdsm logs?
>>>
>>> On Mon, Jun 18, 2018 at 11:20 AM,  wrote:
>>>
>>> Hi,
>>>
>>> We're running oVirt 4.1.9 (we cannot upgrade at this time) and
>>> we're having a major problem in our infrastructure. On friday, a
>>> snapshots were automatically created on more than 200 VMs and as
>>> this was just a test task, all of them were deleted at the same
>>> time, which seems to have corrupted several VMs.
>>>
>>> When trying to delete a snapshot on some of the VMs, a "General
>>> error" is thrown with a NullPointerException in the engine log
>>> (attached).
>>>
>>> But the worst part is that when some of these machines is powered
>>> off and then powered on, the VMs are corrupt...
>>>
>>> VM myvm is down with error. Exit message: Bad volume specification
>>> {u'index': 0, u'domainID': u'110ea376-d789-40a1-b9f6-6b40c31afe01',
>>> 'reqsize': '0', u'format': u'cow', u'bootOrder': u'1', u'address':
>>> {u'function': u'0x0', u'bus': u'0x00', u'domain&#x

[ovirt-users] Re: General failure

2018-06-18 Thread Benny Zlotnik
We prevent starting VMs with illegal images[1]


You can use "$ vdsm-tool dump-volume-chains"
to look for illegal images and then look in the engine log for the reason
they became illegal.

If it's something like this, it usually means you can remove them:
63696:2018-06-15 09:41:58,134+01 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
(DefaultQuartzScheduler2) [6fa97ea4-8f61-4a48-8e08-a8bb1b9de826] Merging of
snapshot 'e609d6cc-2025-4cf0-ad34-03519131cdd1' images
'1d01c6c8-b61e-42bc-a054-f04c3f792b10'..'ef6f732e-2a7a-4a14-a10f-bcc88bdd805f'
failed. Images have been marked illegal and can no longer be previewed or
reverted to. Please retry Live Merge on the snapshot to complete the
operation.
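
For example, to find them, something like this can be run on a host that sees
the storage domain (a rough sketch; the UUID is the domainID from the user's
"Bad volume specification" error, and the exact output format can vary between
versions):

# dump the volume chains of the domain and show volumes marked illegal
vdsm-tool dump-volume-chains 110ea376-d789-40a1-b9f6-6b40c31afe01 | grep -B5 -i illegal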


On Mon, Jun 18, 2018 at 5:46 PM,  wrote:

> Indeed, when the problem started I think the SPM was the host I added as
> VDSM log in the first e-mail. Currently it is the one I sent in the second
> mail.
>
> FWIW, if it helps to debug more fluently, we can provide VPN access to our
> infrastructure so you can access and see whateve you need (all hosts, DB,
> etc...).
>
> Right now the machines that keep running work, but once shut down they
> start showing the problem below...
>
> Thank you
>
>
> On 2018-06-18 15:20, Benny Zlotnik wrote:
>
>> I'm having trouble following the errors, I think the SPM changed or
>> the vdsm log from the right host might be missing.
>>
>> However, I believe what started the problems is this transaction
>> timeout:
>>
>> 2018-06-15 14:20:51,378+01 ERROR
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (org.ovirt.thread.pool-6-thread-29)
>> [1db468cb-85fd-4189-b356-d31781461504] [within thread]: endAction for
>> action type RemoveSnapshotSingleDisk threw an exception.:
>> org.springframework.jdbc.CannotGetJdbcConnectionException: Could not
>> get JDBC Connection; nested exception is java.sql.SQLException:
>> javax.resource.ResourceException: IJ000460: Error checking for a
>> transaction
>>  at
>> org.springframework.jdbc.datasource.DataSourceUtils.getConne
>> ction(DataSourceUtils.java:80)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:615)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:680)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:712)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:762)
>> [spring-jdbc.jar:4.2.4.RELEASE]
>>  at
>> org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$P
>> ostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDi
>> alect.java:152)
>> [dal.jar:]
>>
>> This looks like a bug
>>
>> Regardless, I am not sure restoring a backup would help since you
>> probably have orphaned images on the storage which need to be removed
>>
>> Adding Ala
>>
>> On Mon, Jun 18, 2018 at 4:19 PM,  wrote:
>>
>> Hi Benny,
>>>
>>> Please find the SPM logs at [1].
>>>
>>> Thank you
>>>
>>>   [1]:
>>>
>>> https://wetransfer.com/downloads/62bf649462aabbc2ef21824682b
>> 0a08320180618131825/036b7782f58d337baf909a7220d8455320180618131825/5550ee
>>
>>> [1]
>>>
>>> On 2018-06-18 13:19, Benny Zlotnik wrote:
>>> Can you send the SPM logs as well?
>>>
>>> On Mon, Jun 18, 2018 at 1:13 PM,  wrote:
>>>
>>> Hi Benny,
>>>
>>> Please find the logs at [1].
>>>
>>> Thank you.
>>>
>>>   [1]:
>>>
>>>
>>> https://wetransfer.com/downloads/12208fb4a6a5df3114bbbc10af1
>> 94c8820180618101223/647c066b7b91096570def304da86dbca20180618101223/583d3d
>>
>>> [2]
>>>
>>> [1]
>>>
>>> On 2018-06-18 09:28, Benny Zlotnik wrote:
>>>
>>> Can you provide full engine and vdsm logs?
>>>
>>> On Mon, Jun 18, 2018 at 11:20 AM,  wrote:
>>>
>>> Hi,
>>>
>>> We're running oVirt 4.1.9 (we cannot upgrade at this time) and
>>> we're having a major problem in our infrastructure. On friday, a
>>> snapshots were automatically created on more than 200 VMs and as
>>> this was just a test task, all of them were deleted at the same
>>> time, which seems to have corrupted several VMs.
>>>
>>> When trying to delete a snapshot

[ovirt-users] Re: Creating snapshot of a subset of disks

2018-06-21 Thread Benny Zlotnik
You could do something like this (IIUC):

dead_snap1_params = types.Snapshot(
    description=SNAPSHOT_DESC_1,
    persist_memorystate=False,
    disk_attachments=[
        types.DiskAttachment(
            disk=types.Disk(
                id=disk.id
            )
        )
    ]
)

Taken from ovirt-system-tests[1]

[1] -
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test-scenarios/004_basic_sanity.py#L340

On Thu, Jun 21, 2018 at 2:57 PM Gianluca Cecchi 
wrote:

> Hello,
> I'm trying to see how to create a snapshot of a VM, but only of a subset
> of its disks (actually it will be only the bootable one)
>
> Taking the examples at
> https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples
>
> I can accommodate something like this
>
> # Get the reference to the service that manages the virtual machines:
> vms_service = system_service.vms_service()
>
> # Find the virtual machine and put into data_vm
> vm = vms_service.list(
> search='name=%s' % MY_VM_NAME,
> all_content=True,
> )[0]
> logging.info(
> 'Found virtual machine \'%s\', the id is \'%s\'.',
> vm.name, vm.id,
> )
>
> # Find the services that manage the data_vm virtual machine:
> vm_service = vms_service.vm_service(vm.id)
>
> # Send the request to create the snapshot. Note that this will return
> # before the snapshot is completely created, so we will later need to
> # wait till the snapshot is completely created.
>
> snaps_service = vm_service.snapshots_service()
> snap = snaps_service.add(
> snapshot=types.Snapshot(
> description=snap_description,
> persist_memorystate=False,
> ),
> )
>
> This makes a snapshot of all the disks of the VM.
>
> I can previously filter in my case the bootable disk with something like
> this:
>
> # Locate the service that manages the disk attachments of the virtual
> # machine:
> disk_attachments_service = vm_service.disk_attachments_service()
>
> # Retrieve the list of disks attachments, and print the disk details.
> disk_attachments = disk_attachments_service.list()
> for disk_attachment in disk_attachments:
> disk = connection.follow_link(disk_attachment.disk)
> print("name: %s" % disk.name)
> print("id: %s" % disk.id)
> print("status: %s" % disk.status)
> print("bootable: %s" % disk_attachment.bootable)
> print("provisioned_size: %s" % disk.provisioned_size)
>
> So in case of an example VM with two disks I get this print out
>
> name: padnpro_bootdisk
> id: c122978a-70d7-48aa-97c5-2f17d4603b1e
> status: ok
> bootable: True
> provisioned_size: 59055800320
> name: padnpro_imp_Disk1
> id: 5454b137-fb2c-46a7-b345-e6d115802582
> status: ok
> bootable: False
> provisioned_size: 10737418240
>
>
> But I haven't found then the syntax to use to specify a disk list in the
> block where I create the sapshot of the VM
>
> snap = snaps_service.add(
> snapshot=types.Snapshot(
> description=snap_description,
> persist_memorystate=False,
> disxk x x x ? ? ?
> ),
> )
>
> Any help in this direction?
> Thanks,
> Gianluca
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X5I22JVFREERWEIMM3TXCSH6EPE47TI3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DC2I7XL4BEBXEGMQK7UAUJ7V3UF76AQ6/


[ovirt-users] Re: Creating snapshot of a subset of disks

2018-06-21 Thread Benny Zlotnik
I can refer again to what we do in the ovirt-system-tests:

testlib.assert_true_within_long(
    lambda:
    vm1_snapshots_service.list()[-1].snapshot_status ==
    types.SnapshotStatus.OK
)


This tests whether the status changes to the desired one within a given
period of time (I think ten minutes in the case of assert_true_within_long);
the assert code itself is in ovirtlago:

https://github.com/lago-project/lago-ost-plugin/blob/130bed27a04c9b63161d6fc9cd3e68cd7b54d0c6/ovirtlago/testlib.py#L224

So instead of just asserting you can modify it to do whatever you need
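
For example, outside the test framework a bounded wait could look roughly like
this (a sketch only; wait_for_snapshot, the timeout values and the exception
are assumptions to adapt, and snap_service is the snapshot service from the
earlier snippet):

import time

from ovirtsdk4 import types

def wait_for_snapshot(snap_service, timeout=600, interval=5):
    # poll until the snapshot leaves the 'locked' state or the timeout expires
    deadline = time.time() + timeout
    while time.time() < deadline:
        snap = snap_service.get()
        if snap.snapshot_status == types.SnapshotStatus.OK:
            return snap
        time.sleep(interval)
    # here you could log an event / send an alert instead of raising
    raise RuntimeError('snapshot not ready after %s seconds' % timeout)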

On Thu, Jun 21, 2018 at 4:06 PM Gianluca Cecchi 
wrote:

> On Thu, Jun 21, 2018 at 2:00 PM, Benny Zlotnik 
> wrote:
>
>> You could something like this (IIUC):
>> dead_snap1_params = types.Snapshot(
>> description=SNAPSHOT_DESC_1,
>> persist_memorystate=False,
>> disk_attachments=[
>> types.DiskAttachment(
>> disk=types.Disk(
>> id=disk.id
>> )
>> )
>> ]
>> )
>>
>> Taken from ovirt-system-tests[1]
>>
>> [1] -
>> https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test-scenarios/004_basic_sanity.py#L340
>>
>>
>>
> Hi, thanks for your input!
> It seems I was able to reach my target
>
> What I've done
>
> # Locate the service that manages the disk attachments of the virtual
> # machine:
> disk_attachments_service = vm_service.disk_attachments_service()
>
> # Retrieve the list of disk attachments and then the bootable disk
> disk_attachments = disk_attachments_service.list()
>
> bootdisk = None
> for disk_attachment in disk_attachments:
> disk = connection.follow_link(disk_attachment.disk)
> if disk_attachment.bootable == True:
> bootdisk = connection.follow_link(disk_attachment.disk)
> break
>
> snaps_service = vm_service.snapshots_service()
> snap = snaps_service.add(
> snapshot=types.Snapshot(
> description=snap_description,
> persist_memorystate=False,
> disk_attachments=[
> types.DiskAttachment(
> disk=types.Disk(
> id=bootdisk.id
> )
> )
> ]
> ),
> )
> logging.info(
> 'Sent request to create snapshot \'%s\', the id is \'%s\'.',
> snap.description, snap.id,
> )
>
>
> It seems also the monitor function already present in backup.py of the sdk
> examples linked in my previous message is working ok,
>
> # Poll and wait till the status of the snapshot is 'ok', which means
> # that it is completely created:
> snap_service = snaps_service.snapshot_service(snap.id)
> while snap.snapshot_status != types.SnapshotStatus.OK:
> logging.info(
> 'Waiting till the snapshot is created, the status is now \'%s\'.',
> snap.snapshot_status,
> )
> time.sleep(1)
> snap = snap_service.get()
> logging.info('The snapshot is now complete.')
>
>
> In fact in the log file I have
>
> INFO:root:Sent request to create snapshot
> 'padnpro_imp-backup-0e0c7064-bec5-429b-9ad7-cd8d1e5b25be', the id is
> '4135e8cb-87e8-4f09-82f5-b9ad0ed2f5be'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:Waiting till the snapshot is created, the status is now 'locked'.
> INFO:root:The snapshot is now complete.
>
> What if I would like to emit an event if for any reason the creation of
> the snapshot doesn't complete in a predefined elapsed time and manage it?
> I think I have also to manage the case when for any reason no disk is
> marked as bootable inside the VM I'm backing up...
>
> Gianluca
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CJI7H4V7LG65MOQWP32AHJHVYYUYIG3/


[ovirt-users] Re: Python-SDK4: Check snapshot deletion result?

2018-07-12 Thread Benny Zlotnik
Perhaps you can query the status of the job using the correlation id (taking
the examples from ovirt-system-tests):
dead_snap1_params = types.Snapshot(
    description=SNAPSHOT_DESC_1,
    persist_memorystate=False,
    disk_attachments=[
        types.DiskAttachment(
            disk=types.Disk(
                id=disk.id
            )
        )
    ]
)
correlation_id = uuid.uuid4()

vm1_snapshots_service.add(dead_snap1_params,
                          query={'correlation_id': correlation_id})

testlib.assert_true_within_long(
    lambda:
    test_utils.all_jobs_finished(engine, correlation_id)
)

all_jobs_finished checks that the jobs with that correlation_id have finished; it is
implemented like this[2]:
def all_jobs_finished(engine, correlation_id):
    try:
        jobs = engine.jobs_service().list(
            search='correlation_id=%s' % correlation_id
        )
    except:
        jobs = engine.jobs_service().list()
    return all(job.status != types.JobStatus.STARTED for job in jobs)


You can instead do something like this:
jobs = engine.jobs_service().list(
    search='correlation_id=%s' % correlation_id
)

return any(job.status == types.JobStatus.FAILED for job in jobs)





[1] -
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test-scenarios/004_basic_sanity.py#L353
[2] -
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test_utils/__init__.py#L209
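
Put together, a removal could look roughly like this (a sketch only, assuming a
4.2+ engine where jobs can be searched by correlation_id; connection and
snap_service are the usual SDK objects from the examples above):

import uuid

from ovirtsdk4 import types

correlation_id = str(uuid.uuid4())
snap_service.remove(query={'correlation_id': correlation_id})

jobs_service = connection.system_service().jobs_service()

def removal_failed():
    # jobs started for this request carry our correlation id (4.2+ only)
    jobs = jobs_service.list(search='correlation_id=%s' % correlation_id)
    return any(job.status == types.JobStatus.FAILED for job in jobs)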

On Thu, Jul 12, 2018 at 10:28 AM  wrote:

> Hi Ondra,
>
> On 2018-07-12 08:02, Ondra Machacek wrote:
> > On 07/11/2018 10:10 AM, nico...@devels.es wrote:
> >> Hi,
> >>
> >> We're using ovirt-engine-sdk-python 4.1.6 on oVirt 4.1.9, currently
> >> we're trying to delete some snapshots via a script like this:
> >>
> >>  sys_serv = conn.system_service()
> >>  vms_service = sys_serv.vms_service()
> >>  vm_service = vms_service.vm_service(vmid)
> >>  snaps_service = vm_service.snapshots_service()
> >>  snaps_service.service('SNAPSHOT-ID').remove()
> >
> > In case of failure this line should raise Error, so you should know it
> > failed.
> >
>
> It doesn't, actually. This call is asynchronous, and the snapshot
> deletion seems to fail after about 10 seconds, so initially it seems to
> be correct but fails afterwards, that's why I need a way to check if the
> task ended correctly or not.
>
> >>
> >> This works, mostly... however, sometimes the deletion fails:
> >>
> >>  Failed to delete snapshot 'snapshot name' for VM 'vm'.
> >>
> >> Is it currently possible to know via Python-SDK that the deletion
> >> actually failed? I know I can check the state of a snapshot, but I'd
> >> like to check the result of the task. Is that possible somehow?
> >>
> >> Thanks.
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AFGSUUJ3RNWX6H66RRGDPFLM6YEL577F/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XFPROJO4XHL36SJIQIYAAXUTPI6N4IIS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W66ZNEDVIFCC3K56QHVHGEOP5ZGXAU4Z/


[ovirt-users] Re: Python-SDK4: Check snapshot deletion result?

2018-07-18 Thread Benny Zlotnik
Ah, sorry, I missed the fact you're using 4.1; this was introduced in 4.2
[1].
Regardless, the correlation id will not appear in the job's fields, but it can
be used to search (again, in 4.2).

What you can probably do is just check the state of the system (i.e. that the
number of snapshots stays the same after a period of time).
You can also use the events in the audit log[2].
The list of events can be found here:
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/common/src/main/java/org/ovirt/engine/core/common/AuditLogType.java#L530
But I haven't tried this and I'm not sure if it's reliable

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1460701
[2] -
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test_utils/__init__.py#L249
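
As a rough 4.1-friendly sketch (just polling the snapshot list; vm_service,
the timeout and the interval are placeholders to adapt):

import time

def snapshot_removed(vm_service, snapshot_id, timeout=600, interval=10):
    # True if the snapshot disappears within the timeout,
    # False if it is still there (the removal most likely failed)
    snaps_service = vm_service.snapshots_service()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if all(s.id != snapshot_id for s in snaps_service.list()):
            return True
        time.sleep(interval)
    return False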

On Wed, Jul 18, 2018 at 1:39 PM  wrote:

> Hi Benny,
>
> On 2018-07-12 08:50, Benny Zlotnik wrote:
> > Perhaps you can query the status of job using the correlation id
> > (taking the examples from ovirt-system-tests):
> >   dead_snap1_params = types.Snapshot(
> >
> > description=SNAPSHOT_DESC_1,
> > persist_memorystate=False,
> > disk_attachments=[
> > types.DiskAttachment(
> > disk=types.Disk(
> > id=disk.id [5]
> > )
> > )
> > ]
> > )
> > correlation_id = uuid.uuid4()
> >
> > vm1_snapshots_service.add(dead_snap1_params,
> >   query={'correlation_id':
> > correlation_id})
> >
> > testlib.assert_true_within_long(
> > lambda:
> > test_utils.all_jobs_finished(engine, correlation_id)
> > )
> >
>
> I tried this approach but with the snapshot deletion task instead of
> creating one.
>
>  customuuid = uuid4()
>  snaps_service.service(newsnap.id).remove(query={'correlation_id':
> customuuid})
>
> However, when this task is run, I see no task with this correlation_id.
> Moreover, I cannot find a correlation_id field in the job object.
>
> In [40]: job
> Out[40]: 
>
> In [41]: job.
> job.auto_cleared  job.description   job.external  job.id
> job.name  job.start_timejob.steps
> job.comment   job.end_time  job.href  job.last_updated
> job.owner job.status
>
> The 'id' field doesn't correspond to the correlation_id generated above.
>
> > All jobs finished checks that jobs with correlation_id have finished,
> > it is implemented like this[2]:
> >
> > def all_jobs_finished(engine, correlation_id):
> > try:
> > jobs = engine.jobs_service().list(
> > search='correlation_id=%s' % correlation_id
> > )
> > except:
> > jobs = engine.jobs_service().list()
> > return all(job.status != types.JobStatus.STARTED for job in
> > jobs)
> >
> > You can instead do something like this:
> >
> >  jobs = engine.jobs_service().list(
> > search='correlation_id=%s' % correlation_id
> > )
>
> This won't work either, it returns an exception claiming this:
>
> TypeError: list() got an unexpected keyword argument 'search'
>
> Any further hints with this?
>
> Thanks
>
> > return any(job.status == types.JobStatus.FAILED for job in jobs)
> >
> > [1]
> > -
> https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test-scenarios/004_basic_sanity.py#L353
> > [6]
> > [2]
> > -
> https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test_utils/__init__.py#L209
> > [7]
> >
> > On Thu, Jul 12, 2018 at 10:28 AM  wrote:
> >
> >> Hi Ondra,
> >>
> >> On 2018-07-12 08:02, Ondra Machacek wrote:
> >>> On 07/11/2018 10:10 AM, nico...@devels.es wrote:
> >>>> Hi,
> >>>>
> >>>> We're using ovirt-engine-sdk-python 4.1.6 on oVirt 4.1.9,
> >> currently
> >>>> we're trying to delete some snapshots via a script like this:
> >>>>
> >>>>   sys_serv = conn.system_service()
> >>>>   vms_service = sys_serv.vms_service()
> >>>>   vm_service = vms_service.vm_service(vmid)
> >>>>   snaps_service = vm_service.snapshots_service()
> >>>>   snaps_service.service('SNAPSHOT-ID').remove()
> >>>
> >>> In case of failure this line should raise Error, so you should
> >> know it
> >>> failed.
>

[ovirt-users] Re: Cannot start VM due to no active snapshot.

2018-07-19 Thread Benny Zlotnik
I can't write an elaborate response since I am away from my laptop, but a
workaround would be to simply insert the snapshot back into the snapshots
table.
You need to locate the snapshot's id in the logs where the failure occurred
and use the VM's id:

insert into snapshots values ('<snapshot_id>', '<vm_id>', 'ACTIVE',
'OK', 'Active VM', '2018-02-21 14:00:11.845-04'
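
For completeness, a more explicit variant (a sketch only; the column list
assumes the 4.2-era snapshots table, so check it with \d snapshots and back up
the database before running anything like this):

psql -U engine -d engine -c "INSERT INTO snapshots
    (snapshot_id, vm_id, snapshot_type, status, description, creation_date)
    VALUES ('<snapshot_id>', '<vm_id>', 'ACTIVE', 'OK', 'Active VM', now());"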

On Thu, 19 Jul 2018, 14:48 ,  wrote:

> I've hit the issue described in:
> https://access.redhat.com/solutions/3393181
> https://bugzilla.redhat.com/show_bug.cgi?id=1561052
>
> I have one VM with three disks that cannot start.  Logs indicate a null
> pointer exception when trying to locate the snapshot.   I've verified in
> the engine db that no "Active" snapshot exists for the VM (this can also be
> seen in the WebUI).
>
> oVirt hosts and engine are both (now) on RHEL7.5. Not sure if this
> snapshot failed to create before the hosts were updated to 7.5 - the VM in
> question is a server that never gets shutdown but rather is migrated around
> while we perform host maintenance.
>
> oVirt: v4.2.1.7-1
>
> Host:
> kernel: 3.10.0-862.9.1
> KVM: 2.9.0-16
> libvirt: libvirt-3.9.0-14.el7_5.6
> VDSM: vdsm-4.20.27.1-1
>
> The article just says to contact RH support, but I don't have a RHEV
> support agreement for my oVirt cluster.  Anyone know how to recover this VM?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ORFFAFL5G2JDBJXAQKNBBTKJGGBRD636/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UM3JLBFO5XXJNV7SJTHYI6LRD46JPIYH/


[ovirt-users] Re: Cannot start VM due to no active snapshot.

2018-07-23 Thread Benny Zlotnik
Can you attach the logs from the original failure that caused the active
snapshot to disappear?
And also add your INSERT command

On Fri, Jul 20, 2018 at 12:08 AM  wrote:

> Benny,
>
> Thanks for the response!
>
> I don't think I found the right snapshot ID in the logs, but I was able to
> track down an ID in the images table.  I've inserted that ID as the active
> VM in the snapshots table and it now shows an Active VM with Status:Up
> disks in my snapshots view in the WebUI.  Unfortunately, though, when I try
> to start the VM it still fails.
>
> The new error being thrown is below:
>
> 2018-07-19 16:52:36,845-0400 INFO  (vm/f0087d72) [vds] prepared volume
> path: 
> /rhev/data-center/mnt/192.168.8.110:_oi_nfs_kvm-nfs-sr1/428a1232-ba20-4338-b24b-2983a112501c/images/4e897a16-3f7a-47dd-b047-88bb1b191406/2d01adb1-f629-4c10-9a2c-de92cf5d41bf
> (clientIF:497)
> 2018-07-19 16:52:36,846-0400 INFO  (vm/f0087d72) [vdsm.api] START
> prepareImage(sdUUID=u'428a1232-ba20-4338-b24b-2983a112501c',
> spUUID=u'39f25b84-a2ad-439f-8db7-2dd7896186d1',
> imgUUID=u'8462a296-65cc-4740-a479-912164fa7e1d',
> leafUUID=u'4bb7829f-89c8-4132-9ec3-960e39094898', allowIllegal=False)
> from=internal, task_id=56401238-1ad9-44f2-ad54-c274c7349546 (api:46)
> 2018-07-19 16:52:36,890-0400 INFO  (vm/f0087d72) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',) from=internal,
> task_id=56401238-1ad9-44f2-ad54-c274c7349546 (api:50)
> 2018-07-19 16:52:36,890-0400 ERROR (vm/f0087d72)
> [storage.TaskManager.Task] (Task='56401238-1ad9-44f2-ad54-c274c7349546')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3170,
> in prepareImage
> raise se.prepareIllegalVolumeError(volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',)
> 2018-07-19 16:52:36,891-0400 INFO  (vm/f0087d72)
> [storage.TaskManager.Task] (Task='56401238-1ad9-44f2-ad54-c274c7349546')
> aborting: Task is aborted: "Cannot prepare illegal volume:
> (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',)" - code 227 (task:1181)
> 2018-07-19 16:52:36,892-0400 ERROR (vm/f0087d72) [storage.Dispatcher]
> FINISH prepareImage error=Cannot prepare illegal volume:
> (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',) (dispatcher:82)
> 2018-07-19 16:52:36,892-0400 ERROR (vm/f0087d72) [virt.vm]
> (vmId='f0087d72-f051-4f62-b3fd-dd1a56a211ee') The vm start process failed
> (vm:943)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2777, in
> _run
> self._devices = self._make_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2624, in
> _make_devices
> return self._make_devices_from_dict()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2644, in
> _make_devices_from_dict
> self._preparePathsForDrives(dev_spec_map[hwclass.DISK])
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1017, in
> _preparePathsForDrives
> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
> prepareVolumePath
> raise vm.VolumeError(drive)
> VolumeError: Bad volume specification {u'poolID':
> u'39f25b84-a2ad-439f-8db7-2dd7896186d1', 'index': '1', u'iface': u'virtio',
> 'apparentsize': '1441792', u'imageID':
> u'8462a296-65cc-4740-a479-912164fa7e1d', u'readonly': u'false', u'shared':
> u'false', 'truesize': '1444864', u'type': u'disk', u'domainID':
> u'428a1232-ba20-4338-b24b-2983a112501c', 'reqsize': '0', u'format': u'cow',
> u'deviceId': u'8462a296-65cc-4740-a479-912164fa7e1d', u'address':
> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x', u'type':
> u'pci', u'slot': u'0x07'}, u'device': u'disk', u'propagateErrors': u'off',
> u'optional': u'false', 'vm_custom': {}, 'vmid':
> 'f0087d72-f051-4f62-b3fd-dd1a56a211ee', u'volumeID':
> u'4bb7829f-89c8-4132-9ec3-960e39094898', u'diskType': u'file',
> u'specParams': {}, u'discard': False}
> 2018-07-19 16:52:36,893-0400 INFO  (vm/f0087d72) [virt.vm]
> (vmId='f0087d72-f051-4f62-b3fd-dd1a56a211ee') Changed state to Down: Bad
> volume specification {u'poolID': u'39f25b84-a2ad-439f-8db7-2dd7896186d1',
> 'index': '1', u'iface': u'virtio', 'apparentsize': '1441792', u'imageID':
> u'8462a296-65cc-4740-a479-912164fa7e1d', u'readonly': u'false', u'shared':
> u'false', 'truesize': '1444864', u'type': u'disk', u'domainID':
> u'428a1232-ba20-4338-b24b-2983a112501c', 'reqsize': '0', u'format': u'cow',
> u'deviceI

[ovirt-users] Re: Hang tasks delete snapshot

2018-07-23 Thread Benny Zlotnik
Can you locate the commands with ids 8639a3dc-0064-44b8-84b7-5f733c3fd9b3 and
94607c69-77ce-4005-8ed9-a8b7bd40c496 in the command_entities table?
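
Something along these lines should show them (a sketch; the column list assumes
the usual command_entities layout, adjust as needed):

psql -U engine -d engine -c "SELECT command_id, command_type, created_at, status
    FROM command_entities
    WHERE command_id IN ('8639a3dc-0064-44b8-84b7-5f733c3fd9b3',
                         '94607c69-77ce-4005-8ed9-a8b7bd40c496');"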

On Mon, Jul 23, 2018 at 4:37 PM Marcelo Leandro 
wrote:

> Good morning,
>
> can anyone help me ?
>
> Marcelo Leandro
>
> 2018-06-27 10:53 GMT-03:00 Marcelo Leandro :
>
>> Hello,
>>
>> The task no longer shows in the GUI, but engine.log shows this message:
>>
>> 2018-06-27 10:01:54,705-03 INFO
>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>> (DefaultQuartzScheduler4) [4172f065-06a4-4f09-954e-0dcfceb61cda] Command
>> 'RemoveSnapshot' (id: '8639a3dc-0064-44b8-84b7-5f733c3fd9b3') waiting on
>> child command id: '94607c69-77ce-4005-8ed9-a8b7bd40c496'
>> type:'RemoveSnapshotSingleDiskLive' to complete
>> 2018-06-27 10:01:54,712-03 ERROR
>> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
>> (DefaultQuartzScheduler4) [4172f065-06a4-4f09-954e-0dcfceb61cda] Failed
>> invoking callback end method 'onFailed' for command
>> '94607c69-77ce-4005-8ed9-a8b7bd40c496' with exception 'null', the callback
>> is marked for end method retries
>> 2018-06-27 10:02:04,730-03 INFO
>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>> (DefaultQuartzScheduler2) [4172f065-06a4-4f09-954e-0dcfceb61cda] Command
>> 'RemoveSnapshot' (id: '8639a3dc-0064-44b8-84b7-5f733c3fd9b3') waiting on
>> child command id: '94607c69-77ce-4005-8ed9-a8b7bd40c496'
>> type:'RemoveSnapshotSingleDiskLive' to complete
>> 2018-06-27 10:02:04,737-03 ERROR
>> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
>> (DefaultQuartzScheduler2) [4172f065-06a4-4f09-954e-0dcfceb61cda] Failed
>> invoking callback end method 'onFailed' for command
>> '94607c69-77ce-4005-8ed9-a8b7bd40c496' with exception 'null', the callback
>> is marked for end method retries
>>
>>
>> 2018-06-26 10:14 GMT-03:00 Marcelo Leandro :
>>
>>> Thank you very much, it works for me.
>>>
>>> 2018-06-26 9:53 GMT-03:00 Nathanaël Blanchet :
>>>


 On 26/06/2018 at 13:29, Marcelo Leandro wrote:

 Hello,

 Nathanaël, thanks for the reply. If possible, can you describe these steps and
 what this command does?

 I would like to understand them, in order to solve future problems.

 Many thanks.

 Marcelo Leandro

 2018-06-26 6:50 GMT-03:00 Nathanaël Blanchet :

> PGPASSWORD=X
> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t snapshot -u
> engine
>
 you can find your PGPASSWORD here :
 /etc/ovirt-engine/engine.conf.d/10-setup-database.conf

 296c010e-3c1d-4008-84b3-5cd39cff6aa1 |
> 525a4dda-dbbb-4872-a5f1-8ac2aed48392
>
 This command returns a list of locked processes of the chosen type (-t
 TYPE   - The object type {all | vm | template | disk | snapshot})
 First item is the vm id, second one the locked snapshot id.


> REMOVE
>
> PGPASSWORD=X
> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot -u
> engine 525a4dda-dbbb-4872-a5f1-8ac2aed48392
>
 Now use the locked snapshot id to unlock



>
> On 25/06/2018 at 19:42, Marcelo Leandro wrote:
>
> Hello,
> For the last few days I have been trying to delete a snapshot, but the task has not concluded yet.
> How can I stop this task? I already tried taskcleaner.sh but didn't have
> success.
>
> attached the ovirt and vdsm-spm logs.
>
> ovirt version. 4.1.9
>
> Thanks.
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2Z4VUZLOE75CJU6A3VHBLI7ZLQLXLNB/
>
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax 33 (0)4 67 54 84 14 | blanc...@abes.fr
>
>

 --
 Nathanaël Blanchet

 Supervision réseau
 Pôle Infrastrutures Informatiques227 avenue Professeur-Jean-Louis-Viala 
 
 34193 MONTPELLIER CEDEX 5  
 Tél. 33 (0)4 67 54 84 55
 Fax  33 (0)4 67 54 84 14blanc...@abes.fr


>>>
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://li

[ovirt-users] Re: Issue with NFS and Storage domain setup

2018-08-15 Thread Benny Zlotnik
Can you attach the vdsm log?

On Wed, Aug 15, 2018 at 5:16 PM Inquirer Guy  wrote:

> Adding to the below issue, my NODE01 can see the NFS share I created from
> ENGINE01, which I don't know how it got through, because when I add a
> storage domain from the oVirt engine I still get the error
>
>
>
>
>
>
>
> On 14 August 2018 at 10:22, Inquirer Guy  wrote:
>
>> Hi Ovirt,
>>
>> I successfully installed both ovirt-engine (ENGINE01) and ovirt
>> node (NODE01) on separate machines. I also created a FreeNAS (NAS01) server with an
>> NFS share and already connected it to my NODE01. Although I haven't set up a
>> DNS server, I manually added the hostnames on every machine, and I can look
>> them up and ping them without a problem. I was able to add
>> NODE01 to my ENGINE01 as well.
>>
>> My issue appeared when I tried creating a storage domain on my ENGINE01. I did
>> the below steps before running engine-setup, while also following the
>> guide at the oVirt URL:
>> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>>
>> #touch /etc/exports
>> #systemctl start rpcbind nfs-server
>> #systemctl enable rpcbind nfs-server
>> #engine-setup
>> #mkdir /var/lib/exports/data
>> #chown vdsm:kvm /var/lib/exports/data
>>
>> I added both export lines just in case, but I have also tried each one alone
>> and it always fails.
>> #vi /etc/exports
>> /var/lib/exports/data
>> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>> /var/lib/exports/data   0.0.0.0/0.0.0.0(rw)
>>
>> #systemctl restart rpc-statd nfs-server
>>
>>
>> Once I start to add my storage domain, I get the error (the screenshot does
>> not appear in this archive).
>>
>> Attached is the engine log for your reference.
>>
>> I hope you can help me with this; I'm really interested in this great
>> product. Thanks!
>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4UVDHNLSFSDHUZU3VXSZVUYUCUR55YI2/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2QUK3EAPP6R3VG7NZNGTZQAQSGSIGQTQ/


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
It can be done by deleting from the images table:
$ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b'";

Of course, the database should be backed up before doing this.
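
For example, with the standard engine-backup tool on the engine machine (the
file names here are arbitrary):

$ engine-backup --mode=backup --scope=all \
      --file=engine-backup.tar.bz2 --log=engine-backup.log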



On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
>
> On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
>  wrote:
>
> > It looks like the Pivot completed successfully, see attached vdsm.log.
> > Is there a way to recover that VM?
> > Or would it be better to recover the VM from Backup?
>
> This is what we see in the log:
>
> 1. Merge request received
>
> 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> u'0002-0002-0002-0002-0289'},
> baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> from=:::10.34.38.31,39226,
> flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
>
> To track this job, we can use the jobUUID: 
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> 2. Starting the merge
>
> 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> top=None, bandwidth=0, flags=12 (vm:5945)
>
> We see the original chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 3. The merge was completed, ready for pivot
>
> 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> is ready (vm:5847)
>
> At this point the parent volume contains all the data in the top volume and
> we can pivot to the parent volume.
>
> 4. Vdsm detects that the merge is ready and starts the cleanup thread
> that will complete the merge:
>
> 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
>
> 5. Requesting pivot to parent volume:
>
> 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> complete active layer commit (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
>
> 6. Pivot was successful
>
> 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> has completed (vm:5838)
>
> 7. Vdsm waits until libvirt updates the XML:
>
> 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)
>
> 8. Synchronizing vdsm metadata:
>
> 2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
> imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
> imgUUID='d7bd480d-2c51-4141-a386-113abf75219e',
> volUUID='6197b30d-0732-4cc7-aef0-12f9f6e9565b',
> newChain=['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']) from=internal,
> task_id=b8f605bd-8549-4983-8fc5-f2ebbe6c4666 (api:48)
>
> We can see the new chain:
> ['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']
>
> 2020-07-13 11:19:07,005+0200 INFO  (merge/720410c3) [storage.Image]
> Current chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)  (image:1221)
>
> The old chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 2020-07-13 11:19:07,006+0200 INFO  (merge/720410c3) [storage.Image]
> Unlinking subchain: ['6197b30d-0732-4cc7-aef0-12f9f6e9565b']
> (image:1231)
> 2020-07-13 11:19:07,017+0200 INFO  (merge/720410c3) [storage.Image]
> Leaf volume 6197b30d-0732-4cc7-aef0-12f9f6e9565b is being removed from
> the chain. Marking it ILLEGAL to prevent data corruption (image:1239)
>
> This matches what we see on storage.
>
> 9. Merge job is untracked
>
> 2020-07-13 11:19:21,134+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Cleanup thread
> 
> successfully completed, untracking job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> (base=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8,
> top=6197b30d-0732-4cc7-aef0-12f9f6e9565b) (vm:5752)
>
> This was a successful merge.

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
Sorry, I only replied to the question. In addition to removing the
image from the images table, you may also need to set the parent as
the active image and remove the snapshot referenced by this image from
the database. Can you provide the output of:
$ psql -U engine -d engine -c "select * from images where
image_group_id = ";

As well as
$ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b';"

On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik  wrote:
>
> It can be done by deleting from the images table:
> $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
>
> of course the database should be backed up before doing this
>
>
>
> On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
> >
> > On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
> >  wrote:
> >
> > > It looks like the Pivot completed successfully, see attached vdsm.log.
> > > Is there a way to recover that VM?
> > > Or would it be better to recover the VM from Backup?
> >
> > This what we see in the log:
> >
> > 1. Merge request recevied
> >
> > 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> > merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> > u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> > u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> > u'0002-0002-0002-0002-0289'},
> > baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> > topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> > from=:::10.34.38.31,39226,
> > flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> > vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
> >
> > To track this job, we can use the jobUUID: 
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> > and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
> >
> > 2. Starting the merge
> >
> > 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> > chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> > top=None, bandwidth=0, flags=12 (vm:5945)
> >
> > We see the original chain:
> > 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
> >
> > 3. The merge was completed, ready for pivot
> >
> > 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > is ready (vm:5847)
> >
> > At this point parent volume contains all the data in top volume and we can 
> > pivot
> > to the parent volume.
> >
> > 4. Vdsm detect that the merge is ready, and start the clean thread
> > that will complete the merge
> >
> > 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> > for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
> >
> > 5. Requesting pivot to parent volume:
> >
> > 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> > complete active layer commit (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
> >
> > 6. Pivot was successful
> >
> > 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > has completed (vm:5838)
> >
> > 7. Vdsm wait until libvirt updates the xml:
> >
> > 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-21 Thread Benny Zlotnik
I forgot to add `\x on` to make the output readable; can you run it
with:
$ psql -U engine -d engine -c "\x on" -c ""

On Mon, Jul 20, 2020 at 2:50 PM Arsène Gschwind 
wrote:

> Hi,
>
> Please find the output:
>
> select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';
>
>
>   image_guid  | creation_date  | size 
> |   it_guid|   parentid   
> | imagestatus |lastmodified|vm_snapshot_id
> | volume_type | volume_for
>
> mat |image_group_id| _create_date  |  
>_update_date  | active | volume_classification | qcow_compat
>
> --++--+--+--+-++--+-+---
>
> +--+---+---++---+-
>
>  8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 | 2020-04-23 14:59:23+02 | 161061273600 
> | ---- | ---- 
> |   1 | 2020-07-06 20:38:36.093+02 | 
> 6bc03db7-82a3-4b7e-9674-0bdd76933eb8 |   2 |
>
>   4 | d7bd480d-2c51-4141-a386-113abf75219e | 2020-04-23 14:59:20.919344+02 | 
> 2020-07-06 20:38:36.093788+02 | f  | 1 |   2
>
>  6197b30d-0732-4cc7-aef0-12f9f6e9565b | 2020-07-06 20:38:38+02 | 161061273600 
> | ---- | 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 
> |   1 | 1970-01-01 01:00:00+01 | 
> fd5193ac-dfbc-4ed2-b86c-21caa8009bb2 |   2 |
>
>   4 | d7bd480d-2c51-4141-a386-113abf75219e | 2020-07-06 20:38:36.093788+02 | 
> 2020-07-06 20:38:52.139003+02 | t  | 0 |   2
>
> (2 rows)
>
>
>
> SELECT s.* FROM snapshots s, images i where i.vm_snapshot_id = s.snapshot_id 
> and i.image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';
>
>  snapshot_id  |vm_id 
> | snapshot_type | status | description |   creation_date| 
>   app_list
>
>  | vm_configuration | _create_date
>   | _update_date  | memory_metadata_disk_id | 
> memory_dump_disk_id | vm_configuration_broken
>
> --+--+---++-++--
>
> -+--+---+---+-+-+-
>
>  fd5193ac-dfbc-4ed2-b86c-21caa8009bb2 | b5534254-660f-44b1-bc83-d616c98ba0ba 
> | ACTIVE| OK | Active VM   | 2020-04-23 14:59:20.171+02 | 
> kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt
>
> -guest-agent-common-1.0.14-1.el7 |      | 2020-04-23 
> 14:59:20.154023+02 | 2020-07-03 17:33:17.483215+02 | 
> | | f
>
> (1 row)
>
>
> Thanks,
> Arsene
>
> On Sun, 2020-07-19 at 16:34 +0300, Benny Zlotnik wrote:
>
> Sorry, I only replied to the question, in addition to removing the
>
> image from the images table, you may also need to set the parent as
>
> the active image and remove the snapshot referenced by this image from
>
> the database. Can you provide the output of:
>
> $ psql -U engine -d engine -c "select * from images where
>
> image_group_id = ";
>
>
> As well as
>
> $ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
>
> where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
>
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
>
>
> On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik <
>
> bzlot...@redhat.com
>
> > wrote:
>
>
> It can be done by deleting from the images table:
>
> $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
>
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
>
>
> of course the database should be backed up before doing this
>
>
>
>
> On Fri, Jul 17, 2020 at 6:45 

[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-23 Thread Benny Zlotnik
It was fixed [1]; you need to upgrade to libvirt 6+ and qemu 4.2+.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939
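
A quick way to check what a host currently has installed (plain rpm queries;
the package names assumed here are the usual CentOS/RHEL 8 ones):

$ rpm -q libvirt-daemon qemu-kvm vdsm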


On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot  wrote:

>
>
>
>
> Hi all,
>
> I've got a two-node setup, with image-based installs.
> When doing OVA exports or generic snapshots, things seem in order.
> Removing snapshots shows the warning 'disk in illegal state'.
>
> A mouse hover shows: please do not shut down before successfully removing the
> snapshot.
>
>
> ovirt-engine log
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM node2.lab command MergeVDS failed:
> Merge failed
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command 'MergeVDSCommand(HostName =
> node2.lab,
> MergeVDSCommandParameters:{hostId='02df5213-1243-4671-a1c6-6489d7146319',
> vmId='64c25543-bef7-4fdd-8204-6507046f5a34',
> storagePoolId='5a4ea80c-b3b2-11ea-a890-00163e3cb866',
> storageDomainId='9a12f1b2-5378-46cc-964d-3575695e823f',
> imageGroupId='3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8',
> imageId='c757e740-9013-4ae0-901d-316932f4af0e',
> baseImageId='ebe50730-dec3-4f29-8a38-9ae7c59f2aef',
> topImageId='c757e740-9013-4ae0-901d-316932f4af0e', bandwidth='0'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> MergeVDS, error = Merge failed, code = 52
> 2020-07-22 16:40:37,549+02 ERROR [org.ovirt.engine.core.bll.MergeCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Engine exception thrown while
> sending merge command: org.ovirt.engine.core.common.errors.EngineException:
> EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52 (Failed with error mergeErr and code 52)
> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52
>   
>io='threads'/>
> 2020-07-22 16:40:39,659+02 ERROR
> [org.ovirt.engine.core.bll.MergeStatusCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-3)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Failed to live merge. Top volume
> c757e740-9013-4ae0-901d-316932f4af0e is still in qemu chain
> [ebe50730-dec3-4f29-8a38-9ae7c59f2aef, c757e740-9013-4ae0-901d-316932f4af0e]
> 2020-07-22 16:40:41,524+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command id:
> 'e0b2bce7-afe0-4955-ae46-38bcb8719852 failed child command status for step
> 'MERGE_STATUS'
> 2020-07-22 16:40:42,597+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Merging of snapshot
> 'ef8f7e06-e48c-4a8c-983c-64e3d4ebfcf9' images
> 'ebe50730-dec3-4f29-8a38-9ae7c59f2aef'..'c757e740-9013-4ae0-901d-316932f4af0e'
> failed. Images have been marked illegal and can no longer be previewed or
> reverted to. Please retry Live Merge on the snapshot to complete the
> operation.
> 2020-07-22 16:40:42,603+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
> with failure.
> 2020-07-22 16:40:43,679+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
> 2020-07-22 16:40:43,774+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot
> 'Auto-generated for Export To OVA' for VM 'Adhoc'.
>
>
> VDSM on hypervisor
> 2020-07-22 14:14:30,220+0200 ERROR (jsonrpc/5) [virt.vm]
> (vmId='14283e6d-c3f0-4011-b90f-a1272f0fbc10') Live merge failed (job:
> e59c54d9-b8d3-44d0-9147-9dd40dff57b9) (vm:5381)
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed',
> dom=self)
> libvirt.libvirtError: internal error: qemu block name 'json:{"backing":
> {"driver": "qcow2", "file": {"driver": "file", "filename":
> "/rhev/data-center/m

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-23 Thread Benny Zlotnik
I think you can remove 6197b30d-0732-4cc7-aef0-12f9f6e9565b from images and
the corresponding snapshot, set the parent,
8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8, as active (active = 't' field), and
change its snapshot to be the active snapshot. That is, if I correctly
understand the current layout: 6197b30d-0732-4cc7-aef0-12f9f6e9565b was
removed from the storage and 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 is now the
only volume for the disk.
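
A minimal sketch of one possible reading of that advice in SQL, only after
backing up the engine database (the 6bc03db7... id is the vm_snapshot_id of
the parent image in the earlier query output, i.e. the snapshot row left
dangling after the change, and volume_classification 0 matches what the
active volume had there; double-check both before deleting anything):

$ psql -U engine -d engine -c "UPDATE images SET active = 't',
      volume_classification = 0,
      vm_snapshot_id = 'fd5193ac-dfbc-4ed2-b86c-21caa8009bb2'
      WHERE image_guid = '8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8';"
$ psql -U engine -d engine -c "DELETE FROM images
      WHERE image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
$ psql -U engine -d engine -c "DELETE FROM snapshots
      WHERE snapshot_id = '6bc03db7-82a3-4b7e-9674-0bdd76933eb8';"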

On Wed, Jul 22, 2020 at 1:32 PM Arsène Gschwind 
wrote:

> Please find the result:
>
> psql -d engine -c "\x on" -c "select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';"
>
> Expanded display is on.
>
> -[ RECORD 1 ]-+-
>
> image_guid| 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> creation_date | 2020-04-23 14:59:23+02
>
> size  | 161061273600
>
> it_guid   | ----
>
> parentid  | ----
>
> imagestatus   | 1
>
> lastmodified  | 2020-07-06 20:38:36.093+02
>
> vm_snapshot_id| 6bc03db7-82a3-4b7e-9674-0bdd76933eb8
>
> volume_type   | 2
>
> volume_format | 4
>
> image_group_id| d7bd480d-2c51-4141-a386-113abf75219e
>
> _create_date  | 2020-04-23 14:59:20.919344+02
>
> _update_date  | 2020-07-06 20:38:36.093788+02
>
> active| f
>
> volume_classification | 1
>
> qcow_compat   | 2
>
> -[ RECORD 2 ]-+-
>
> image_guid| 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> creation_date | 2020-07-06 20:38:38+02
>
> size  | 161061273600
>
> it_guid   | ----
>
> parentid  | 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> imagestatus   | 1
>
> lastmodified  | 1970-01-01 01:00:00+01
>
> vm_snapshot_id| fd5193ac-dfbc-4ed2-b86c-21caa8009bb2
>
> volume_type   | 2
>
> volume_format | 4
>
> image_group_id| d7bd480d-2c51-4141-a386-113abf75219e
>
> _create_date  | 2020-07-06 20:38:36.093788+02
>
> _update_date  | 2020-07-06 20:38:52.139003+02
>
> active| t
>
> volume_classification | 0
>
> qcow_compat   | 2
>
>
> psql -d engine -c "\x on" -c "SELECT s.* FROM snapshots s, images i where 
> i.vm_snapshot_id = s.snapshot_id and i.image_guid = 
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
>
> Expanded display is on.
>
> -[ RECORD 1 
> ]---+--
>
> snapshot_id | fd5193ac-dfbc-4ed2-b86c-21caa8009bb2
>
> vm_id   | b5534254-660f-44b1-bc83-d616c98ba0ba
>
> snapshot_type   | ACTIVE
>
> status  | OK
>
> description | Active VM
>
> creation_date   | 2020-04-23 14:59:20.171+02
>
> app_list| 
> kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt-guest-agent-common-1.0.14-1.el7
>
> vm_configuration|
>
> _create_date| 2020-04-23 14:59:20.154023+02
>
> _update_date| 2020-07-03 17:33:17.483215+02
>
> memory_metadata_disk_id |
>
> memory_dump_disk_id |
>
> vm_configuration_broken | f
>
>
> Thanks.
>
>
>
> On Tue, 2020-07-21 at 13:45 +0300, Benny Zlotnik wrote:
>
> I forgot to add the `\x on` to make the output readable, can you run it
> with:
> $ psql -U engine -d engine -c "\x on" -c ""
>
> On Mon, Jul 20, 2020 at 2:50 PM Arsène Gschwind 
> wrote:
>
> Hi,
>
> Please find the output:
>
> select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';
>
>
>   image_guid  | creation_date  | size 
> |   it_guid|   parentid   
> | imagestatus |lastmodified|vm_snapshot_id
> | volume_type | volume_for
>
> mat |image_group_id| _create_date  |  
>_update_date  | active | volume_classification | qcow_compat
>
> --++--+--+--+

[ovirt-users] Re: Problem with "ceph-common" pkg for oVirt Node 4.4.1

2020-08-19 Thread Benny Zlotnik
I think it would be easier to get an answer for this on a ceph mailing
list, but why do you need specifically 12.2.7?

On Wed, Aug 19, 2020 at 4:08 PM  wrote:
>
> Hi!
> I have a problem installing the ceph-common package (needed for cinderlib
> Managed Block Storage) on oVirt Node 4.4.1. The oVirt doc says "$ yum install
> -y ceph-common", but there is no repo with ceph-common 12.2.7 for CentOS 8 -
> official CentOS has only "ceph-common-10.2.5-4.el7.x86_64.rpm", and Ceph has
> only ceph-common 14.2 for EL8.
> How can I install ceph-common 12.2.7?
>
> BR
> Mike
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJ4UFLRMLS7GMTTMUGUM4QHSVNX5CZRV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OKUXQ4DM3FNO77BF236C3PRIMLVDCGP/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-29 Thread Benny Zlotnik
The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the OpenStack version should be updated to
Train (it is likely Ussuri works fine too, but I haven't tried it), and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip; the same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
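
For example, on CentOS 8 something like the following should pull both in
(the release package name and python3-os-brick are assumptions based on the
repo above; python3-cinderlib is the RPM mentioned earlier):

$ dnf install -y centos-release-openstack-ussuri
$ dnf install -y python3-cinderlib python3-os-brick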

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
>
> I'm looking for the latest documentation for setting up a Managed Block
> Device storage domain so that I can move some of my VM images to ceph rbd.
>
> I found this:
>
> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> ...but it has a big note at the top that it is "...not user
> documentation and should not be treated as such."
>
> The oVirt administration guide[1] does not talk about managed block devices.
>
> I've found a few mailing list threads that discuss people setting up a
> Managed Block Device with ceph, but didn't see any links to
> documentation steps that folks were following.
>
> Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
> and if so, where is the documentation for using it?
>
> --Mike
> [1]ovirt.org/documentation/administration_guide/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHCLXVOCELHOR3G7SH3GDPGRKITCW7UY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VQ7QQOP5T6UBFRXGWHNUN2SYN2CBPIZZ/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`


On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
>
> Hi Benny,
>
> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> Octopus.  Then I tried using these instructions, as well as the deep
> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
>
> I've done this a couple of times, and each time the engine fails when I
> try to add the new managed block storage domain.  The error on the
> screen indicates that it can't connect to the cinder database.  The
> error in the engine log is:
>
> 2020-09-29 17:02:11,859-05 WARN
> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> action 'AddManagedBlockStorageDomain' failed for user
> admin@internal-authz. Reasons:
> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
>
> I had created the db on the engine with this command:
>
> su - postgres -c "psql -d template1 -c \"create database cinder owner
> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> lc_ctype 'en_US.UTF-8';\""
>
> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
>
>  host    cinder  engine  ::0/0       md5
>  host    cinder  engine  0.0.0.0/0   md5
>
> Is there anywhere else I should look to find out what may have gone wrong?
>
> --Mike
>
> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> > The feature is currently in tech preview, but it's being worked on.
> > The feature page is outdated,  but I believe this is what most users
> > in the mailing list were using. We held off on updating it because the
> > installation instructions have been a moving target, but it is more
> > stable now and I will update it soon.
> >
> > Specifically speaking, the openstack version should be updated to
> > train (it is likely ussuri works fine too, but I haven't tried it) and
> > cinderlib has an RPM now (python3-cinderlib)[1], so it can be
> > installed instead of using pip, same goes for os-brick. The rest of
> > the information is valid.
> >
> >
> > [1] 
> > http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
> >
> > On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
> >>
> >> I'm looking for the latest documentation for setting up a Managed Block
> >> Device storage domain so that I can move some of my VM images to ceph rbd.
> >>
> >> I found this:
> >>
> >> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
> >>
> >> ...but it has a big note at the top that it is "...not user
> >> documentation and should not be treated as such."
> >>
> >> The oVirt administration guide[1] does not talk about managed block 
> >> devices.
> >>
> >> I've found a few mailing list threads that discuss people setting up a
> >> Managed Block Device with ceph, but didn't see any links to
> >> documentation steps that folks were following.
> >>
> >> Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
> >> and if so, where is the documentation for using it?
> >>
> >> --Mike
> >> [1]ovirt.org/documentation/administration_guide/
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct: 
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives: 
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHCLXVOCELHOR3G7SH3GDPGRKITCW7UY/
> >
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHHOMSMDUWBHXZC77SQE4R3MAK7M4ZCN/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
Not sure about this, adding +Yedidyah Bar David

On Wed, Sep 30, 2020 at 3:04 PM Michael Thomas  wrote:
>
> I hadn't installed the necessary packages when the engine was first
> installed.
>
> However, running 'engine-setup --reconfigure-optional-components'
> doesn't work at the moment because (by design) my engine does not have a
> network route outside of the cluster.  It fails with:
>
> [ INFO  ] DNF Errors during downloading metadata for repository 'AppStream':
> - Curl error (7): Couldn't connect to server for
> http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=$infra
> [Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]
> [ ERROR ] DNF Failed to download metadata for repo 'AppStream': Cannot
> prepare internal mirrorlist: Curl error (7): Couldn't connect to server
> for
> http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=$infra
> [Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]
>
>
> I have a proxy set in the engine's /etc/dnf/dnf.conf, but it doesn't
> seem to be obeyed when running engine-setup.  Is there another way that
> I can get engine-setup to use a proxy?
>
> --Mike
>
>
> On 9/30/20 2:19 AM, Benny Zlotnik wrote:
> > When you ran `engine-setup` did you enable cinderlib preview (it will
> > not be enabled by default)?
> > It should handle the creation of the database automatically, if you
> > didn't you can enable it by running:
> > `engine-setup --reconfigure-optional-components`
> >
> >
> > On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
> >>
> >> Hi Benny,
> >>
> >> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> >> Octopus.  Then I tried using these instructions, as well as the deep
> >> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
> >>
> >> I've done this a couple of times, and each time the engine fails when I
> >> try to add the new managed block storage domain.  The error on the
> >> screen indicates that it can't connect to the cinder database.  The
> >> error in the engine log is:
> >>
> >> 2020-09-29 17:02:11,859-05 WARN
> >> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> >> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> >> action 'AddManagedBlockStorageDomain' failed for user
> >> admin@internal-authz. Reasons:
> >> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
> >>
> >> I had created the db on the engine with this command:
> >>
> >> su - postgres -c "psql -d template1 -c \"create database cinder owner
> >> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> >> lc_ctype 'en_US.UTF-8';\""
> >>
> >> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
> >>
> >>   hostcinder  engine  ::0/0   md5
> >>   hostcinder  engine  0.0.0.0/0   md5
> >>
> >> Is there anywhere else I should look to find out what may have gone wrong?
> >>
> >> --Mike
> >>
> >> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> >>> The feature is currently in tech preview, but it's being worked on.
> >>> The feature page is outdated,  but I believe this is what most users
> >>> in the mailing list were using. We held off on updating it because the
> >>> installation instructions have been a moving target, but it is more
> >>> stable now and I will update it soon.
> >>>
> >>> Specifically speaking, the openstack version should be updated to
> >>> train (it is likely ussuri works fine too, but I haven't tried it) and
> >>> cinderlib has an RPM now (python3-cinderlib)[1], so it can be
> >>> installed instead of using pip, same goes for os-brick. The rest of
> >>> the information is valid.
> >>>
> >>>
> >>> [1] 
> >>> http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
> >>>
> >>> On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
> >>>>
> >>>> I'm looking for the latest documentation for setting up a Managed Block

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
g id: 660ebc9e
> 2020-10-13 15:15:26,012-05 ERROR
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Command
> 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed:
> EngineException: java.lang.NullPointerException (Failed with error
> ENGINE and code 5001)
> 2020-10-13 15:15:26,013-05 ERROR
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Transaction rolled-back for command
> 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand'.
> 2020-10-13 15:15:26,021-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-13) [7cb262cc] EVENT_ID:
> USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk testvm_disk
> to VM grafana (User: michael.thomas@internal-authz).
> 2020-10-13 15:15:26,021-05 INFO
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Lock freed to object
> 'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]',
> sharedLocks=''}'
>
> The /var/log/cinder/ directory on the ovirt node is empty, and doesn't
> exist on the engine itself.
>
> To verify that it's not a cephx permission issue, I tried accessing the
> block storage from both the engine and the ovirt node using the
> credentials I set up in the ManagedBlockStorage setup page:
>
> [root@ovirt4]# rbd --id ovirt ls rbd.ovirt.data
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> [root@ovirt4]# rbd --id ovirt info
> rbd.ovirt.data/volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> rbd image 'volume-5419640e-445f-4b3f-a29d-b316ad031b7a':
>  size 100 GiB in 25600 objects
>  order 22 (4 MiB objects)
>  snapshot_count: 0
>  id: 68a7cd6aeb3924
>  block_name_prefix: rbd_data.68a7cd6aeb3924
>  format: 2
>  features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>  op_features:
>  flags:
>  create_timestamp: Tue Oct 13 06:53:55 2020
>  access_timestamp: Tue Oct 13 06:53:55 2020
>  modify_timestamp: Tue Oct 13 06:53:55 2020
>
> Where else can I look to see where it's failing?
>
> --Mike
>
> On 9/30/20 2:19 AM, Benny Zlotnik wrote:
> > When you ran `engine-setup` did you enable cinderlib preview (it will
> > not be enabled by default)?
> > It should handle the creation of the database automatically, if you
> > didn't you can enable it by running:
> > `engine-setup --reconfigure-optional-components`
> >
> >
> > On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
> >>
> >> Hi Benny,
> >>
> >> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> >> Octopus.  Then I tried using these instructions, as well as the deep
> >> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
> >>
> >> I've done this a couple of times, and each time the engine fails when I
> >> try to add the new managed block storage domain.  The error on the
> >> screen indicates that it can't connect to the cinder database.  The
> >> error in the engine log is:
> >>
> >> 2020-09-29 17:02:11,859-05 WARN
> >> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> >> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> >> action 'AddManagedBlockStorageDomain' failed for user
> >> admin@internal-authz. Reasons:
> >> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
> >>
> >> I had created the db on the engine with this command:
> >>
> >> su - postgres -c "psql -d template1 -c \"create database cinder owner
> >> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> >> lc_ctype 'en_US.UTF-8';\""
> >>
> >> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
> >>
> >>   hostcinder  engine  ::0/0   md5
> >>   hostcinder  engine  0.0.0.0/0   md5
> >>
> >> Is there anywhere else I should look to find out what may have gone wrong?
> >>
> >> --Mike
> >>
> >> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> >>> The feature is currently in tech preview, but it's being worked on.
> >>> The feature page is outdat

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Did you attempt to start a VM with this disk and it failed, or did you
not try at all? If it's the latter then the error is strange...
If it's the former, there is a known issue with multipath at the
moment, see [1] for a workaround: you might have issues detaching
volumes later, because multipath grabs the rbd devices, which makes
`rbd unmap` fail. It will be fixed soon by automatically
blacklisting rbd in the multipath configuration.
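
Until then, a minimal sketch of that workaround on the host (the drop-in file
name here is arbitrary and the exact stanza should be taken from [1]; the
blacklist block itself is standard multipath.conf syntax):

$ cat > /etc/multipath/conf.d/rbd.conf <<'EOF'
blacklist {
    devnode "^rbd[0-9]*"
}
EOF
$ multipathd reconfigure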

Regarding editing, you can submit an RFE for this, but it is currently
not possible. The options are indeed to either recreate the storage
domain or edit the database table


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8




On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
>
> On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > to add `rbd default features = 3` to the configuration. I think there
> > are plans to support rbd-nbd in cinderlib which would allow using
> > additional features, but I'm not aware of anything concrete.
> >
> > Additionally, the path for the cinderlib log is
> > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > would appear in the vdsm.log on the relevant host, and would look
> > something like "RBD image feature set mismatch. You can disable
> > features unsupported by the kernel with 'rbd feature disable'"
>
> Thanks for the pointer!  Indeed,
> /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> looking for.  In this case, it was a user error entering the RBDDriver
> options:
>
>
> 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> option use_multipath_for_xfer
>
> ...it should have been 'use_multipath_for_image_xfer'.
>
> Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> Domains -> Manage Domain', all driver options are unedittable except for
> 'Name'.
>
> Then I thought that maybe I can't edit the driver options while a disk
> still exists, so I tried removing the one disk in this domain.  But even
> after multiple attempts, it still fails with:
>
> 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> update or delete on table "volumes" violates foreign key constraint
> "volume_attachment_volume_id_fkey" on table "volume_attachment"
> DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> referenced from table "volume_attachment".
>
> See https://pastebin.com/KwN1Vzsp for the full log entries related to
> this removal.
>
> It's not lying, the volume no longer exists in the rbd pool, but the
> cinder database still thinks it's attached, even though I was never able
> to get it to attach to a VM.
>
> What are my options for cleaning up this stale disk in the cinder database?
>
> How can I update the driver options in my storage domain (deleting and
> recreating the domain is acceptable, if possible)?
>
> --Mike
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q5IC4SDS5AS64RIOKHBFNQDWCOBKKDJW/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Sorry, I accidentally hit send prematurely. The database table is
driver_options; the options are stored as JSON under driver_options.
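
For reference, a minimal way to look at that table before hand-editing it
(back up the database first; only the table name and the JSON column are
given above, so selecting everything avoids guessing other column names):

$ psql -U engine -d engine -c "\x on" -c "SELECT * FROM driver_options;"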

On Wed, Oct 14, 2020 at 5:32 PM Benny Zlotnik  wrote:
>
> Did you attempt to start a VM with this disk and it failed, or you
> didn't try at all? If it's the latter then the error is strange...
> If it's the former there is a known issue with multipath at the
> moment, see[1] for a workaround, since you might have issues with
> detaching volumes which later, because multipath grabs the rbd devices
> which would fail `rbd unmap`, it will be fixed soon by automatically
> blacklisting rbd in multipath configuration.
>
> Regarding editing, you can submit an RFE for this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
>
>
>
>
> On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
> >
> > On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > > to add `rbd default features = 3` to the configuration. I think there
> > > are plans to support rbd-nbd in cinderlib which would allow using
> > > additional features, but I'm not aware of anything concrete.
> > >
> > > Additionally, the path for the cinderlib log is
> > > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > > would appear in the vdsm.log on the relevant host, and would look
> > > something like "RBD image feature set mismatch. You can disable
> > > features unsupported by the kernel with 'rbd feature disable'"
> >
> > Thanks for the pointer!  Indeed,
> > /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> > looking for.  In this case, it was a user error entering the RBDDriver
> > options:
> >
> >
> > 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> > option use_multipath_for_xfer
> >
> > ...it should have been 'use_multipath_for_image_xfer'.
> >
> > Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> > Domains -> Manage Domain', all driver options are unedittable except for
> > 'Name'.
> >
> > Then I thought that maybe I can't edit the driver options while a disk
> > still exists, so I tried removing the one disk in this domain.  But even
> > after multiple attempts, it still fails with:
> >
> > 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> > volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> > 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> > when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> > update or delete on table "volumes" violates foreign key constraint
> > "volume_attachment_volume_id_fkey" on table "volume_attachment"
> > DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> > referenced from table "volume_attachment".
> >
> > See https://pastebin.com/KwN1Vzsp for the full log entries related to
> > this removal.
> >
> > It's not lying, the volume no longer exists in the rbd pool, but the
> > cinder database still thinks it's attached, even though I was never able
> > to get it to attach to a VM.
> >
> > What are my options for cleaning up this stale disk in the cinder database?
> >
> > How can I update the driver options in my storage domain (deleting and
> > recreating the domain is acceptable, if possible)?
> >
> > --Mike
> >
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BQPXZYCE5GWKSHDN5FU7I5L4VP75QPEJ/


[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-03 Thread Benny Zlotnik
Do you know why it was stuck?

You can use unlock_entity.sh [1] to unlock the disk.


[1]
https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
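
For a locked disk, the invocation looks like the snapshot example earlier in
this archive (the disk id placeholder below is whatever ID the UI or the
images table reports for the stuck disk):

$ PGPASSWORD=X /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh \
      -t disk -u engine <disk-id>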

On Tue, Nov 3, 2020 at 1:38 PM  wrote:

> I have a VM that has two disks, one active and one disabled. When trying to
> migrate a disk to another storage domain, the task went into a loop creating
> several snapshots. I turned off the VM and the loop stopped; after several
> hours the task disappeared, but the VM disk was left locked, making it
> impossible to delete. When trying to remove the VM, it cannot be deleted and
> shows the following message: locked disk making it impossible to remove vm
>
>
> How can I solve this?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5CTA74EKK337ROAS4HT5HU5YYOVSHDB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4RXBSMPBMACO3HMWTJQ2WNXKOIZJ7MQ/


[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-05 Thread Benny Zlotnik
You mean the disk physically resides on one storage domain, but the
engine sees it on another?
Which version did this happen on?
Do you have the logs from this failure?

On Tue, Nov 3, 2020 at 5:51 PM  wrote:

>
>
> I used it but it didn't work. The disk is still in locked status.
>
> When I run the unlock_entity.sh script it doesn't show that the disk is
> locked.
>
> But I was able to identify that the disk was moved to the other storage
> domain, while the engine still shows it on the old storage.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFLKANS6H2KVOTIJBZ7E2OB4FD3NMYEO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2R77MFXD27LHAABWH343VSJTAAEFUOVZ/


[ovirt-users] Re: LiveStorageMigration fail

2020-11-09 Thread Benny Zlotnik
Which version are you using?
Did this happen more than once for the same disk?
A similar bug was fixed in 4.3.10.1 [1].
There is another bug with a similar symptom which occurs very rarely and which
we were unable to reproduce.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1758048
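
If you are not sure which version you are on, a plain rpm query on the engine
machine shows it:

$ rpm -q ovirt-engine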

On Mon, Nov 9, 2020 at 3:57 PM Christoph Köhler <
koeh...@luis.uni-hannover.de> wrote:

> Hello experts,
>
> perhaps someone has an idea about this error. It appears when I try to
> migrate a disk to another storage domain, live. Generally this works
> well, but here is the log snippet:
>
> HSMGetAllTasksStatusesVDS failed: Error during destination image
> manipulation: u"image=02240cf3-65b6-487c-b5af-c266a1dd18f8, dest
> domain=3c4fbbfe-6796-4007-87ab-d7f205b7fae3: msg=Invalid parameter:
> 'capacity=134217728000'"
>
> There is surely enough space on the target domain for this operation (~4 TB).
>
> Any ideas..?
>
> Greetings from
> Chris
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KT5HQXEFB477O7GW5KP4BJJUR5YBL6Q/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LFDCQPAPIVHC2Z7MMZMORZTOP3O5RGXF/


  1   2   3   4   >