[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
It should be in the images table; there is an it_guid column which indicates which template the image is based on. On Wed, Dec 2, 2020 at 2:16 PM <3c.moni...@gruppofilippetti.it> wrote: > Hi, > if I can ask some other info, probably I find a "ghost disk" related to > previous problem. > >
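For reference, the lookup being described would look roughly like this against the engine database (the template image ID is a placeholder):

$ psql -U engine -d engine -c "SELECT image_guid, image_group_id, imagestatus FROM images WHERE it_guid = '<template-image-id>';"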

[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
These are the available statuses[1]; you can change it to 0 (Down), assuming the VM is down. [1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/common/src/main/java/org/ovirt/engine/core/common/businessentities/VMStatus.java#L10 On Wed, Dec 2, 2020 at 12:57 PM
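A sketch of the suggested change, assuming the VM is down and the database has been backed up first (status 0 is Down in the linked enum; the VM ID is a placeholder):

$ psql -U engine -d engine -c "UPDATE vm_dynamic SET status = 0 WHERE vm_guid = '<vm-id>';"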

[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
I am not sure what is locked? If everything in the images table is 1, then the disks are not locked. If the VM is in status 15, which is "Images Locked" status, then this status is set in the vm_dynamic table On Wed, Dec 2, 2020 at 12:43 PM <3c.moni...@gruppofilippetti.it> wrote: > Hi, > in this

[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
imagestatus is in the images table, not vms On Wed, Dec 2, 2020 at 11:30 AM <3c.moni...@gruppofilippetti.it> wrote: > Hi. > I did a full select on "vms" and field "imagestatus" there isn't! > May be this the reason for which the tool is unable to manage it? > Follows full field list: > > >
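For example, the field can be checked with something like the following (the disk ID is a placeholder):

$ psql -U engine -d engine -c "SELECT image_guid, imagestatus FROM images WHERE image_group_id = '<disk-id>';"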

[ovirt-users] Re: Check multipath status using API

2020-11-26 Thread Benny Zlotnik
It is implemented, there is no special API for this, using the events endpoint (ovirt-engine/api/events) is the way to access this information On Thu, Nov 26, 2020 at 3:00 PM Paulo Silva wrote: > Hi, > > Is it possible to check the multipath status using the current REST API on > ovirt? > >
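A minimal sketch of querying that endpoint with curl; the FQDN and credentials are placeholders, and the search filter is only an assumption about how to narrow the output:

$ curl -s -k -u 'admin@internal:<password>' "https://<engine-fqdn>/ovirt-engine/api/events?search=multipath"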

[ovirt-users] Re: Backporting of Fixes

2020-11-15 Thread Benny Zlotnik
Hi, 4.3 is no longer maintained. Regardless, this bug was never reproduced and has no fixes attached to it, so there is nothing to backport. The related bugs and their fixes are all related to changes that were introduced in 4.4, so it is unlikely you hit the same issue. If you can share more

[ovirt-users] Re: LiveStorageMigration fail

2020-11-09 Thread Benny Zlotnik
Which version are you using? Did this happen more than once for the same disk? A similar bug was fixed in 4.3.10.1[1] There is another bug with a similar symptom which occurs very rarely and we were unable to reproduce it [1] https://bugzilla.redhat.com/show_bug.cgi?id=1758048 On Mon, Nov 9,

[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-05 Thread Benny Zlotnik
You mean the disk physically resides on one storage domain, but the engine sees it on another? Which version did this happen on? Do you have the logs from this failure? On Tue, Nov 3, 2020 at 5:51 PM wrote: > > > I used it but it didn't work. The disk is still in locked status > > when I run

[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-03 Thread Benny Zlotnik
Do you know why it was stuck? You can use unlock_entity.sh[1] to unlock the disk [1] https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html On Tue, Nov 3, 2020 at 1:38 PM wrote: > I have a vm that has two disks one active and another disabling when > trying to migrate
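A hedged example of the helper being referenced; the install path and flags can vary between versions, so check unlock_entity.sh -h first:

$ /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t all            # list locked entities
$ /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -u engine <disk-id>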

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
sorry, accidentally hit send prematurely, the database table is driver_options, the options are json under driver_options On Wed, Oct 14, 2020 at 5:32 PM Benny Zlotnik wrote: > > Did you attempt to start a VM with this disk and it failed, or you > didn't try at all? If it's t
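A cautious way to inspect that table on the engine machine (the table name comes from this reply; selecting * avoids guessing column names):

$ psql -U engine -d engine -c "SELECT * FROM driver_options;"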

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
or edit the database table [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8 On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas wrote: > > On 10/14/20 3:30 AM, Benny Zlotnik wrote: > > Jeff is right, it's a limitation of kernel rbd, the recommendation is > > to add `rb

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
order 22 (4 MiB objects) > snapshot_count: 0 > id: 68a7cd6aeb3924 > block_name_prefix: rbd_data.68a7cd6aeb3924 > format: 2 > features: layering, exclusive-lock, object-map, fast-diff, > deep-flatten > op_features: > fl

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
can get engine-setup to use a proxy? > > --Mike > > > On 9/30/20 2:19 AM, Benny Zlotnik wrote: > > When you ran `engine-setup` did you enable cinderlib preview (it will > > not be enabled by default)? > > It should handle the creation of the database automatically,

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
; lc_ctype 'en_US.UTF-8';\"" > > ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf: > > host cinder engine ::0/0 md5 > host cinder engine 0.0.0.0/0 md5 > > Is there a

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-29 Thread Benny Zlotnik
The feature is currently in tech preview, but it's being worked on. The feature page is outdated, but I believe this is what most users in the mailing list were using. We held off on updating it because the installation instructions have been a moving target, but it is more stable now and I will

[ovirt-users] Re: Problem with "ceph-common" pkg for oVirt Node 4.4.1

2020-08-19 Thread Benny Zlotnik
I think it would be easier to get an answer for this on a ceph mailing list, but why do you need specifically 12.2.7? On Wed, Aug 19, 2020 at 4:08 PM wrote: > > Hi! > I have a problem with install ceph-common package(needed for cinderlib > Managed Block Storage) in oVirt Node 4.4.1 - oVirt doc

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-23 Thread Benny Zlotnik
> creation_date | 2020-04-23 14:59:20.171+02 > > app_list| > kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt-guest-agent-common-1.0.14-1.el7 > > vm_configuration| > > _

[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-23 Thread Benny Zlotnik
it was fixed[1], you need to upgrade to libvirt 6+ and qemu 4.2+ [1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939 On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot wrote: > > > > > Hi all, > > I've got 2 two node setup, image based installs. > When doing ova exports or generic snapshots,

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-21 Thread Benny Zlotnik
.el7 | | 2020-04-23 > 14:59:20.154023+02 | 2020-07-03 17:33:17.483215+02 | > | | f > > (1 row) > > > Thanks, > Arsene > > On Sun, 2020-07-19 at 16:34 +0300, Benny Zlotnik wrote: > > Sorry, I on

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
from images where image_group_id = "; As well as $ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i where i.vm_snapshot_id = s.snapshot_id and i.image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';" On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik wrote: > >

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
It can be done by deleting from the images table: $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b'"; of course the database should be backed up before doing this On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer wrote: > > On Thu, Jul 16,

[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Benny Zlotnik
looks like https://bugzilla.redhat.com/show_bug.cgi?id=1785939 On Mon, Jun 15, 2020 at 2:37 PM Yedidyah Bar David wrote: > > On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com > wrote: > > > > Hi, > > > > I tried to send the log to you by email, but it fails. So I have sent them > > to

[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-08 Thread Benny Zlotnik
Yes, that's because cinderlib uses KRBD, so it has fewer features; I should add this to the documentation. I was told cinderlib has plans to add support for rbd-nbd, which would eventually allow use of newer features. On Mon, Jun 8, 2020 at 9:40 PM Mathias Schwenke wrote: > > > It looks like a

[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-07 Thread Benny Zlotnik
yes, it looks like a configuration issue, you can use plain `rbd` to check connectivity. regarding starting vms and live migration, are there bug reports for these? there is an issue we're aware of with live migration[1], it can be worked around by blacklisting rbd devices in the multipath.conf

[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-04 Thread Benny Zlotnik
I've successfully used Rocky with 4.3 in the past; the main caveat with 4.3 currently is that cinderlib has to be pinned to 0.9.0 (pip install cinderlib==0.9.0). Let me know if you have any issues. Hopefully during 4.4 we will have the repositories with the RPMs and installation will be much

[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-06-01 Thread Benny Zlotnik
egards, >> Strahil Nikolov >> >> На 27 май 2020 г. 17:39:36 GMT+03:00, Benny Zlotnik >> написа: >> >Sorry, by overloaded I meant in terms of I/O, because this is an >> >active layer merge, the active layer >> >(aabf3788-8e47-4f8b-84

[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
M / 1 Tb disk) yet not overloaded. We > have multiple servers with the same specs with no issues. > > Regards, > > On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik wrote: >> >> Can you share the VM's xml? >> Can be obtained with `virsh -r dumpxml ` >> Is the VM overload

[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Can you share the VM's xml? Can be obtained with `virsh -r dumpxml ` Is the VM overloaded? I suspect it has trouble converging taskcleaner only cleans up the database, I don't think it will help here ___ Users mailing list -- users@ovirt.org To

[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
You can't see it because it is not a task; tasks only run on the SPM. It is a VM job and the data about it is stored in the VM's XML; it's also stored in the vm_jobs table. You can see the status of the job in libvirt with `virsh blockjob sda --info` (if it's still running). On Wed, May 27, 2020
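A sketch of the checks described here; virsh runs on the host where the VM lives, psql on the engine machine, and the domain name and disk alias are placeholders:

$ virsh -r list
$ virsh -r blockjob <domain> sda --info
$ psql -U engine -d engine -c "SELECT * FROM vm_jobs;"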

[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Live merge (snapshot removal) is running on the host where the VM is running, you can look for the job id (f694590a-1577-4dce-bf0c-3a8d74adf341) on the relevant host On Wed, May 27, 2020 at 9:02 AM David Sekne wrote: > > Hello, > > I'm running oVirt version 4.3.9.4-1.el7. > > After a failed live

[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
> 1. The engine didn't clean it up itself - after all, no matter the reason, > the operation has failed? I can't really answer without looking at the logs; the engine should clean up in case of a failure, but there can be numerous reasons for cleanup to fail (connectivity issues, bug, etc.) > 2. Why the

[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
anything in the logs (engine,vdsm)? if there's nothing on the storage, removing from the database should be safe, but it's best to check why it failed On Mon, Apr 20, 2020 at 5:39 PM Strahil Nikolov wrote: > > Hello All, > > did anyone observe the following behaviour: > > 1. Create a new disk

[ovirt-users] Re: does SPM still exist?

2020-03-24 Thread Benny Zlotnik
it hasn't disappeared, there has been work done to move operations that used to run only on SPM to run on regular hosts as well (copy/move disk) Currently the main operations performed by SPM are create/delete/extend volume and more[1] [1]

[ovirt-users] Re: oVirt behavior with thin provision/deduplicated block storage

2020-02-24 Thread Benny Zlotnik
We use the stats API in the engine, currently only to check if the backend is accessible; we have plans to use it for monitoring and validations, but that is not implemented yet. On Mon, Feb 24, 2020 at 3:35 PM Nir Soffer wrote: > > On Mon, Feb 24, 2020 at 3:03 PM Gorka Eguileor wrote: > > > > On

[ovirt-users] Re: iSCSI Domain Addition Fails

2020-02-23 Thread Benny Zlotnik
anything in the vdsm or engine logs? On Sun, Feb 23, 2020 at 4:23 PM Robert Webb wrote: > > Also, I did do the “Login” to connect to the target without issue, from what > I can tell. > > > > From: Robert Webb > Sent: Sunday, February 23, 2020 9:06 AM > To: users@ovirt.org > Subject: iSCSI

[ovirt-users] Re: disk snapshot status is Illegal

2020-02-05 Thread Benny Zlotnik
is the VM with the issue) Changing the snapshot status with unlock_entity will likely work only if the chain is fine on the storage On Tue, Feb 4, 2020 at 7:40 PM Crazy Ayansh wrote: > please find the attached the logs. > > On Tue, Feb 4, 2020 at 10:23 PM Benny Zlotnik wrote: > &

[ovirt-users] Re: Recover VM if engine down

2020-02-04 Thread Benny Zlotnik
you need to go to the "import vm" tab on the storage domain and import them On Tue, Feb 4, 2020 at 7:30 PM matteo fedeli wrote: > > it does automatically when I attach or should I execute particular operations? > ___ > Users mailing list --

[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
e help. > > Thanks > Shashank > > > > On Tue, Feb 4, 2020 at 8:54 PM Benny Zlotnik wrote: > >> Is the VM running? Can you remove it when the VM is down? >> Can you find the reason for illegal status in the logs? >> >> On Tue, Feb 4, 2020 at

[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
Is the VM running? Can you remove it when the VM is down? Can you find the reason for illegal status in the logs? On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh wrote: > Hey Guys, > > Any help on it ? > > Thanks > > On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh > wrote: > >> >> Hi Team, >> >> I

[ovirt-users] Re: Recover VM if engine down

2020-02-03 Thread Benny Zlotnik
you can attach the storage domain to another engine and import it On Mon, Feb 3, 2020 at 11:45 PM matteo fedeli wrote: > > Hi, It's possibile recover a VM if the engine is damaged? the vm is on a data > storage domain. > ___ > Users mailing list --

[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread Benny Zlotnik
Did you change the volume metadata to LEGAL on the storage as well? On Thu, Jan 9, 2020 at 2:19 PM David Johnson wrote: > We had a drive in our NAS fail, but afterwards one of our VM's will not > start. > > The boot drive on the VM is (so near as I can tell) the only drive > affected. > > I
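On file-based storage the legality lives in the volume's .meta file next to the image; a sketch with the mount path and IDs as placeholders (back the file up before editing):

$ grep LEGALITY /rhev/data-center/mnt/<server:_export>/<sd-id>/images/<img-id>/<vol-id>.meta
$ sed -i.bak 's/^LEGALITY=ILLEGAL$/LEGALITY=LEGAL/' /rhev/data-center/mnt/<server:_export>/<sd-id>/images/<img-id>/<vol-id>.meta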

[ovirt-users] Re: what is "use host" field in storage domain creation??

2019-12-30 Thread Benny Zlotnik
One host has to connect and set up the storage (mount the path, create the files, etc.), so you are given the choice of which host to use for this. On Mon, Dec 30, 2019 at 11:07 AM wrote: > > hello and happy new year~ > > I am wondering the role of "use host" field in storage domain creation. > >

[ovirt-users] Re: VM Import Fails

2019-12-23 Thread Benny Zlotnik
Please attach engine and vdsm logs and specify the versions On Mon, Dec 23, 2019 at 10:08 AM Vijay Sachdeva wrote: > > Hi All, > > > > I am trying to import a VM from export domain, but import fails. > > > > Setup: > > > > Source DC has a NFS shared storage with two Hosts > Destination DC has a

[ovirt-users] Re: Current status of Ceph support in oVirt (2019)?

2019-12-03 Thread Benny Zlotnik
> We are using Ceph with oVirt (via standalone Cinder) extensively in a > production environment. > I tested oVirt cinderlib integration in our dev environment, gave some > feedback here on the list and am currently waiting for the future > development. IMHO cinderlib in oVirt is currently not fit

[ovirt-users] Re: oVirt Admin Portal unaccessible via chrome (firefox works)

2019-11-24 Thread Benny Zlotnik
Works fine for me, anything interesting in the browser console? On Sat, Nov 23, 2019 at 7:04 PM Strahil Nikolov wrote: > > Hello Community, > > I have a constantly loading chrome on my openSuSE 15.1 (and my android > phone), while firefox has no issues . > Can someone test accessing the oVirt

[ovirt-users] Re: Current status of Ceph support in oVirt (2019)?

2019-11-24 Thread Benny Zlotnik
The current plan to integrate ceph is via cinderlib integration[1] (currently in tech preview mode) because we still have no packaging ready, there are some manual installation steps required, but there is no need to install and configure openstack/cinder >1. Does this require you to install

[ovirt-users] Re: Low disk space on Storage

2019-11-12 Thread Benny Zlotnik
This was fixed in 4.3.6, I suggest upgrading On Tue, Nov 12, 2019 at 12:45 PM wrote: > > Hi, > > I'm running ovirt Version:4.3.4.3-1.el7 > My filesystem disk has 30 GB free space. > Cannot start a VM due to an I/O error storage. > When tryng to move the disk to another storage domain get this

[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-07 Thread Benny Zlotnik
We support it as part of the cinderlib integration (Managed Block Storage); each rbd device is represented as a single oVirt disk when used. The integration is still in tech preview and still has a long way to go, but any early feedback is highly appreciated. On Mon, Oct 7, 2019 at 2:20 PM Strahil

[ovirt-users] Re: Cannot enable maintenance mode

2019-10-02 Thread Benny Zlotnik
Did you try the "Confirm Host has been rebooted" button? On Wed, Oct 2, 2019 at 9:17 PM Bruno Martins wrote: > > Hello guys, > > No ideas for this issue? > > Thanks for your cooperation! > > Kind regards, > > -Original Message- > From: Bruno Martins > Sent: 29 de setembro de 2019 16:16

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Benny Zlotnik
This might be a bug, can you share the full vdsm and engine logs? On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski wrote: > > On ovirt 4.3.5 we are seeing various problems related to the rbd device > staying mapped after a guest has been live migrated. This causes problems > migrating the guest

[ovirt-users] Re: How to delete obsolete Data Centers with no hosts, but with domains inside

2019-09-25 Thread Benny Zlotnik
6 in the postgres database like you > suggested. > > After that i could remove the data center from the ovirt management interface. > > Thanks again for your help > > Claudio > > Il 24/09/19 13:50, Benny Zlotnik ha scritto: > > ah yes, it's generally a good idea t

[ovirt-users] Re: How to delete obsolete Data Centers with no hosts, but with domains inside

2019-09-24 Thread Benny Zlotnik
ttachable or activable, so i don't know what to do. > > Claudio > > Il 24/09/19 12:19, Benny Zlotnik ha scritto: >> >> Did you try to force remove the DC? >> You have the option in the UI >> >> On Tue, Sep 24, 2019 at 1:07 PM Claudio Soprano >> wrot

[ovirt-users] Re: How to delete obsolete Data Centers with no hosts, but with domains inside

2019-09-24 Thread Benny Zlotnik
Did you try to force remove the DC? You have the option in the UI On Tue, Sep 24, 2019 at 1:07 PM Claudio Soprano wrote: > > Hi to all, > > We are using ovirt to manage 6 Data Centers, 3 of them are old Data > Centers with no hosts inside, but with domains, storage and VMs not running. > > We

[ovirt-users] Re: Disk locked after backup

2019-09-19 Thread Benny Zlotnik
it's probably[1] [1] https://bugzilla.redhat.com/show_bug.cgi?id=1749944 On Thu, Sep 19, 2019 at 12:03 PM Fabio Cesar Hansen wrote: > Hi. > > I am using the > https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py > script to backup my vms. > > The

[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-09-15 Thread Benny Zlotnik
>* Would ovirt have been able to deal with clearing the rbd locks, or did I miss a trick somewhere to resolve this situation with manually going through each device and clering the lock? Unfortunately there is no trick on ovirt's side >* Might it be possible for ovirt to detect when the rbd

[ovirt-users] Re: How to change Master Data Storage Domain

2019-09-11 Thread Benny Zlotnik
You don't need to remove it; the VM data will be available on the OVF_STORE disks on the other SD, where you copied the disks to. Once you put the domain in maintenance, a new master SD will be elected. On Wed, Sep 11, 2019 at 12:13 PM Mark Steele wrote: > Good morning, > > I have a Storage

[ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

2019-08-08 Thread Benny Zlotnik
this means vdsm lost connectivity to the storage, but it also looks like it recovered eventually On Thu, Aug 8, 2019 at 12:26 PM Vrgotic, Marko wrote: > Another one that seem to be related: > > > > 2019-08-07 14:43:59,069-0700 ERROR (check/loop) [storage.Monitor] Error > checking path >

[ovirt-users] Re: SPM and Task error ...

2019-07-26 Thread Benny Zlotnik
taskcleaner.sh only clears tasks from the engine database. Did you check your engine logs to see if this task is running? It's a task that is executed during a snapshot merge (removal of a snapshot); do you have any running snapshot removals? If not, you can stop and clear the task using
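The preview cuts off here, but the idea is presumably along these lines on the SPM host (the task ID is a placeholder; make sure nothing is actually using the task first):

$ vdsm-client Task getStatus taskID=<task-id>
$ vdsm-client Task stop taskID=<task-id>
$ vdsm-client Task clear taskID=<task-id>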

[ovirt-users] Re: SPM and Task error ...

2019-07-25 Thread Benny Zlotnik
25/07/19 16:45, Benny Zlotnik ha scritto: > > Do you have vdsm logs? > > I'M not sure because this task is very old > > Is this task still running? > > I made this : > > # vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e > { > "mes

[ovirt-users] Re: SPM and Task error ...

2019-07-25 Thread Benny Zlotnik
Do you have vdsm logs? Is this task still running? On Thu, Jul 25, 2019 at 5:00 PM Enrico wrote: > Hi all, > my ovirt cluster has got 3 Hypervisors runnig Centos 7.5.1804 vdsm is > 4.20.39.1-1.el7, > ovirt engine is 4.2.4.5-1.el7, the storage systems are HP MSA P2000 and > 2050 (fibre

[ovirt-users] Re: Storage domain 'Inactive' but still functional

2019-07-24 Thread Benny Zlotnik
We have seen something similar in the past and patches were posted to deal with this issue, but it's still in progress[1] [1] https://bugzilla.redhat.com/show_bug.cgi?id=1553133 On Mon, Jul 22, 2019 at 8:07 PM Strahil wrote: > I have a theory... But after all without any proof it will remain

[ovirt-users] Re: Cinderlib managed block storage, ceph jewel

2019-07-22 Thread Benny Zlotnik
> I had to copy them to all the hosts to start a virtual machine with an attached cinderlib ceph block device. That is strange, you shouldn't need to do this; cinderlib passes them to the hosts itself. Do you have cinderlib.log to look at? (/var/log/ovirt-engine/cinderlib/cinderlib.log) On Mon, Jul

[ovirt-users] Re: LiveStoreageMigration failed

2019-07-18 Thread Benny Zlotnik
r > return func(inst, *args, **kwargs) >File "/usr/lib64/python2.7/site-packages/libvirt.py", line 729, in > blockCopy > if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', > dom=self) > libvirtError: argument unsupported: non-file destination

[ovirt-users] Re: LiveStoreageMigration failed

2019-07-18 Thread Benny Zlotnik
It should work, what is the engine and vdsm versions? Can you add vdsm logs as well? On Thu, Jul 18, 2019 at 11:16 AM Christoph Köhler < koeh...@luis.uni-hannover.de> wrote: > Hello, > > I try to migrate a disk of a running vm from gluster 3.12.15 to gluster > 3.12.15 but it fails. libGfApi set

[ovirt-users] Re: Cinderlib managed block storage, ceph jewel

2019-07-17 Thread Benny Zlotnik
Starting a VM should definitely work, I see in the error message: "RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable" Adding "rbd default features = 3" to ceph.conf might help with that. The other issue looks like a bug and it would be
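For example, the suggested setting would look like this in /etc/ceph/ceph.conf on the hosts mapping the RBD devices (the [global] section placement is an assumption):

[global]
rbd default features = 3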

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
>Also is it possible to host the hosted engine on this storage? Unfortunately no. On Tue, Jul 9, 2019 at 4:57 PM Dan Poltawski wrote: > On Tue, 2019-07-09 at 11:12 +0300, Benny Zlotnik wrote: > > VM live migration is supported and should work > > Can you add engine and cinder

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
VM live migration is supported and should work Can you add engine and cinderlib logs? On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski wrote: > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote: > > I've now managed to succesfully create/mount/delete volumes! > > However, I'm seeing live

[ovirt-users] Re: ISO Upload "Paused by System"

2019-07-09 Thread Benny Zlotnik
What does it say in the engine logs? On Tue, Jul 9, 2019 at 11:03 AM Ron Baduach -X (rbaduach - SQLINK LTD at Cisco) wrote: > Hi Guys, > > I tried to upload an ISO, and from "chrome" it's just "paused by system" > from the beginning > > From "Firefox", it started to upload, but after 0.5 hour

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
wrote: > On Mon, 2019-07-08 at 16:49 +0300, Benny Zlotnik wrote: > > Not too useful unfortunately :\ > > Can you try py-list instead of py-bt? Perhaps it will provide better > > results > > (gdb) py-list > 57if get_errno(ex) != errno.EEXIST: > 5

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Not too useful unfortunately :\ Can you try py-list instead of py-bt? Perhaps it will provide better results On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski wrote: > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote: > > Hi, > > > > You have a typo, it's py-bt and I ju

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Hi, You have a typo, it's py-bt and I just tried it myself, I only had to install: $ yum install -y python-devel (in addition to the packages specified in the link) On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski wrote: > Hi, > > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:

[ovirt-users] Re: Managed Block Storage

2019-07-07 Thread Benny Zlotnik
Hi, Any chance you can setup gdb[1] so we can find out where it's stuck exactly? Also, which version of ovirt are you using? Can you also check the ceph logs for anything suspicious? [1] - https://wiki.python.org/moin/DebuggingWithGdb $ gdb python then `py-bt` On Thu, Jul 4, 2019 at 7:00 PM

[ovirt-users] Re: Managed Block Storage

2019-07-04 Thread Benny Zlotnik
On Thu, Jul 4, 2019 at 1:03 PM wrote: > I'm testing out the managed storage to connect to ceph and I have a few > questions: * Would I be correct in assuming that the hosted engine VM needs > connectivity to the storage and not just the underlying hosts themselves? > It seems like the cinderlib

[ovirt-users] Re: iso files

2019-06-24 Thread Benny Zlotnik
yes, you can use ovirt-imageio[1] [1] - https://ovirt.org/develop/release-management/features/storage/image-upload.html On Mon, Jun 24, 2019 at 4:34 PM wrote: > Hi, > > is possible to install a VM without an ISO domain? for version 4.3.4.3 ? > > Thanks > > > -- > --

[ovirt-users] Re: Attaching/Detaching Export Domain from CLI

2019-06-23 Thread Benny Zlotnik
you can use the remove action[1], notice you need to send a DELETE request http://ovirt.github.io/ovirt-engine-api-model/4.3/#services/attached_storage_domain/methods/remove On Sun, Jun 23, 2019 at 4:05 PM wrote: > Hello, > > Thanks Benny! I was able to attach and detach using the links you
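A minimal curl sketch of that DELETE call; the FQDN, credentials and IDs are placeholders:

$ curl -s -k -u 'admin@internal:<password>' -X DELETE "https://<engine-fqdn>/ovirt-engine/api/datacenters/<dc-id>/storagedomains/<sd-id>"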

[ovirt-users] Re: Attaching/Detaching Export Domain from CLI

2019-06-23 Thread Benny Zlotnik
You can do this using the SDK/REST API[1][2] [1] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/attach_nfs_iso_storage_domain.py [2] http://ovirt.github.io/ovirt-engine-api-model/4.3/#_attach_storage_domains_to_data_center On Sun, Jun 23, 2019 at 11:24 AM Alexander Stockel |

[ovirt-users] Re: Can't import some VMs after storage domain detach and reattach to new datacenter.

2019-06-23 Thread Benny Zlotnik
Can you attach engine and vdsm logs? On Sun, Jun 23, 2019 at 11:29 AM m black wrote: > Hi. > > I have a problem with importing some VMs after importing storage domain in > new datacenter. > > I have 5 servers with oVirt version 4.1.7, hosted-engine setup and > datacenter with iscsi, fc and nfs

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Benny Zlotnik
Also, what is the storage domain type? Block or File? On Thu, Jun 13, 2019 at 2:46 PM Benny Zlotnik wrote: > > Can you attach vdsm and engine logs? > Does this happen for new VMs as well? > > On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter wrote: > > > > after upgra

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Benny Zlotnik
Can you attach vdsm and engine logs? Does this happen for new VMs as well? On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter wrote: > > after upgrading from 4.2 to 4.3, after a vm live migrates it's disk > images are become owned by root:root. Live migration succeeds and the vm > stays up, but

[ovirt-users] Re: Nvme over fabric array support through OVirt MANAGED_BLOCK_STORAGE Domain

2019-05-30 Thread Benny Zlotnik
If there is a backend driver available it should work. We did not test this though, so it would be great to get bug reports if you had any trouble. Upon VM migration the disk should be automatically connected to the target host (and disconnected from the origin). On Thu, May 30, 2019 at 10:35 AM

[ovirt-users] Re: Old mailing list SPAM

2019-05-15 Thread Benny Zlotnik
yes, I reported it and it's being worked on[1] [1] https://ovirt-jira.atlassian.net/browse/OVIRT-2728 On Wed, May 15, 2019 at 4:05 PM Markus Stockhausen wrote: > > Hi, > > does anyone currently get old mails of 2016 from the mailing list? > We are spammed with something like this from

[ovirt-users] Re: Template Disk Corruption

2019-04-29 Thread Benny Zlotnik
; that VM. If you make a server VM there are no issues. > > > On 2019-04-24 09:30, Benny Zlotnik wrote: > > Does it happen all the time? For every template you create? > > Or is it for a specific template? > > > > On Wed, Apr 24, 2019 at 12:59 PM Alex McWhirter >

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
engine.log. After you delete the desktop VM, and create > another based on the template the new VM still boots, it just reports > disk read errors and fails boot. > > On 2019-04-24 05:01, Benny Zlotnik wrote: > > can you provide more info (logs, versions)? > > > >

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
can you provide more info (logs, versions)? On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter wrote: > > 1. Create server template from server VM (so it's a full copy of the > disk) > > 2. From template create a VM, override server to desktop, so that it > become a qcow2 overlay to the template

[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-22 Thread Benny Zlotnik
eparing volume > 0e01f014-530b-4067-aa1d-4e9378626a9d/a1157ad0-44a8-4073-a20c-468978973f4f > (volume:567) > > I tried to filter the usual noise out of VDSM.log so hopefully this is the > relevant bit you need - let me know if the full thing would help. > > Regards, > Ca

[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-20 Thread Benny Zlotnik
can I downgrade to 4.2, and is there a fix coming > in 4.3.3 for this? > > Regards, > Callum > > -- > > Callum Smith > Research Computing Core > Wellcome Trust Centre for Human Genetics > University of Oxford > e. cal...@well.ox.ac.uk > > On 10 Apr 2019, at

[ovirt-users] Re: oVirtNode 4.3.3 - Missing os-brick

2019-04-16 Thread Benny Zlotnik
This is just an info message, if you don't use managed block storage[1] you can ignore it [1] - https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html On Tue, Apr 16, 2019 at 7:09 PM Stefano Danzi wrote: > > Hello, > > I've just upgrade one node host to v.

[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Benny Zlotnik
2019-04-12 10:39:25,643+0200 ERROR (jsonrpc/0) [virt.vm] (vmId='71f27df0-f54f-4a2e-a51c-e61aa26b370d') Unable to start replication for vda to {'domainID': '244dfdfb-2662-4103-9d39-2b13153f2047', 'volumeInfo': {'path':

[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-10 Thread Benny Zlotnik
Can you run: $ gdb -p $(pidof qemu-img convert) -batch -ex "t a a bt" On Wed, Apr 10, 2019 at 11:26 AM Callum Smith wrote: > > Dear All, > > Further to this, I can't migrate a disk to different storage using the GUI. > Both disks are configured identically and on the same physical NFS

[ovirt-users] Re: cinderlib: VM migration fails

2019-04-08 Thread Benny Zlotnik
Please open a bug for this, with vdsm and supervdsm logs On Mon, Apr 8, 2019 at 2:13 PM Matthias Leopold wrote: > > Hi, > > after I successfully started my first VM with a cinderlib attached disk > in oVirt 4.3.2 I now want to test basic operations. I immediately > learned that migrating this VM

[ovirt-users] Re: UI bug viewing/editing host

2019-04-04 Thread Benny Zlotnik
Looks like it was fixed[1] [1] - https://bugzilla.redhat.com/show_bug.cgi?id=1690268 On Thu, Apr 4, 2019 at 1:47 PM Callum Smith wrote: > > 2019-04-04 10:43:35,383Z ERROR > [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default > task-15) [] Permutation name:

[ovirt-users] Re: vdsClient in oVirt 4.3

2019-04-03 Thread Benny Zlotnik
, > >> > >> Thanks for the help. > >> > >> Could you please tell me what job_uuid and vol_gen should be replaced > >> by? Should I just put any UUID for the job? > >> > >> Thanks. > >> > >> El 2019-04-03 09:52

[ovirt-users] Re: vdsClient in oVirt 4.3

2019-04-03 Thread Benny Zlotnik
any UUID for the job? > > Thanks. > > El 2019-04-03 09:52, Benny Zlotnik escribió: > > it should be something like this: > > $ cat update.json > > { > > "job_id":"", > > "vol_info": { > >

[ovirt-users] Re: vdsClient in oVirt 4.3

2019-04-03 Thread Benny Zlotnik
it should be something like this: $ cat update.json { "job_id": "", "vol_info": { "sd_id": "", "img_id": "", "vol_id": "", "generation": "" }, "legality": "LEGAL" } $ vdsm-client SDM update_volume -f update.json On

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-02 Thread Benny Zlotnik
and dealing with the rbd feature issues I could > proudly start my first VM with a cinderlib provisioned disk :-) > > Thanks for help! > I'll keep posting my experiences concerning cinderlib to this list. > > Matthias > > Am 01.04.19 um 16:24 schrieb Benny Zlotnik: > > D

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
I added an example for ceph[1] [1] - https://github.com/oVirt/ovirt-site/blob/468c79a05358e20289e7403d9dd24732ab453a13/source/develop/release-management/features/storage/cinderlib-integration.html.md#create-storage-domain On Mon, Apr 1, 2019 at 5:24 PM Benny Zlotnik wrote: > > Did yo
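Roughly, the linked example amounts to passing the Cinder RBD driver options as the Managed Block Storage domain's driver options; the keys and values below are illustrative assumptions and should be checked against the linked page:

{
  "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
  "rbd_pool": "ovirt-volumes",
  "rbd_ceph_conf": "/etc/ceph/ceph.conf",
  "rbd_user": "ovirt",
  "rbd_keyring_conf": "/etc/ceph/ceph.client.ovirt.keyring",
  "use_multipath_for_image_xfer": "true"
}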

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
Did you pass the rbd_user when creating the storage domain? On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold wrote: > > > Am 01.04.19 um 13:17 schrieb Benny Zlotnik: > >> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says: > >> > >> 2019-04-01 11:14:54,925

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says: > > 2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error > connecting to ceph cluster. > Traceback (most recent call last): >File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", > line 337, in _do_conn >

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
Hi, Thanks for trying this out! We added a separate log file for cinderlib in 4.3.2, it should be available under /var/log/ovirt-engine/cinderlib/cinderlib.log They are not perfect yet, and more improvements are coming, but it might provide some insight about the issue >Although I don't think

[ovirt-users] Re: Cancel storage migration task?

2019-03-18 Thread Benny Zlotnik
> > On Mon, 18 Mar 2019 12:36:13 + *Benny Zlotnik > >* wrote > > is this live or cold migration? > which version? > > currently the best way (and probably the only one we have) is to kill the > qemu-img convert process (if you are doing cold migration), un
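The preview is truncated, but for a cold migration the idea is roughly the following on the host running the copy (double-check the process before killing it):

$ pgrep -af 'qemu-img convert'
$ kill <pid>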
