[ovirt-users] Re: image upload on Managed Block Storage

2021-01-13 Thread Benny Zlotnik
The workaround I tried with ceph is to use `rbd import` and replace
the volume created by oVirt. The complete steps are:
1. Create an MBS disk in oVirt and find its ID
2. rbd import <source image> --dest-pool <pool>
3. rbd rm volume-<disk ID> --pool <pool>
4. rbd mv <imported image> volume-<disk ID> --pool <pool>

I only tried it with raw images
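
As a concrete illustration (the pool name, image file, and disk ID below are
made-up placeholders, not values from this thread - double-check everything
and back up before replacing a volume):

# 1. Create the MBS disk in oVirt and copy its ID from the Disks tab
# 2. Import the raw image under an explicit name
$ rbd import my-image.raw imported-image --dest-pool ovirt-pool
# 3. Remove the empty volume oVirt created for the disk
$ rbd rm volume-11111111-2222-3333-4444-555555555555 --pool ovirt-pool
# 4. Rename the imported image so it takes the removed volume's place
$ rbd mv imported-image volume-11111111-2222-3333-4444-555555555555 --pool ovirt-pool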



On Wed, Jan 13, 2021 at 10:12 AM Henry lol  wrote:
>
> yeah, I'm using ceph as a backend,
> then can oVirt discover/import existing volumes in ceph?
>
> On Wed, Jan 13, 2021 at 5:00 PM, Benny Zlotnik wrote:
>>
>> It's not implemented yet. There are ways to work around it with either
>> backend-specific tools (like rbd) or by attaching the volume. Are you
>> using ceph?
>>
>> On Wed, Jan 13, 2021 at 4:13 AM Henry lol  
>> wrote:
>> >
>> > Hello,
>> >
>> > I've just checked I can't upload an image into the MBS block through 
>> > either UI or restAPI.
>> >
>> > So, is there any method to do that?
>> >


[ovirt-users] Re: image upload on Managed Block Storage

2021-01-13 Thread Benny Zlotnik
It's not implemented yet. There are ways to work around it with either
backend-specific tools (like rbd) or by attaching the volume. Are you
using ceph?

On Wed, Jan 13, 2021 at 4:13 AM Henry lol  wrote:
>
> Hello,
>
> I've just checked I can't upload an image into the MBS block through either 
> UI or restAPI.
>
> So, is there any method to do that?
>


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-28 Thread Benny Zlotnik
On Tue, Dec 22, 2020 at 6:33 PM Konstantin Shalygin  wrote:
>
> Sandro, FYI we are not against cinderlib integration, more than we are 
> upgrade 4.3 to 4.4 due movement to cinderlib.
>
> But (!) the current Managed Block Storage implementation supports only the
> krbd (kernel RBD) driver - that is not an option for us, because the kernel
> client always lags behind librbd, and for every update/bugfix we would have
> to reboot the whole host instead of simply migrating all VMs away and back.
> Also, with krbd the host uses the kernel page cache, and the mapping is not
> released if a VM crashes (qemu with librbd is a single userland process).
>

There was rbd-nbd support at some point in cinderlib[1], which
addresses your concerns, but it was removed because of some issues.

+Gorka, are there any plans to pick it up again?

[1] 
https://github.com/Akrog/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd1399e4



> So for me current situation look like this:
>
> 1. We update deprecated OpenStack code? Why, Its for delete?.. Nevermind, 
> just update this code...
>
> 2. Hmm... auth tests doesn't work, to pass test just disable any OpenStack 
> project_id related things... and... Done...
>
> 3. I don't care how current cinder + qemu code works, just write new one for 
> linux kernel, it's optimal to use userland apps, just add wrappers (no, it's 
> not);
>
> 4. Current Cinder integration require zero configuration on oVirt hosts. It's 
> lazy, why oVirt administrator do nothing? just write manual how-to install 
> packages - oVirt administrators love anything except "reinstall" from engine 
> (no, it's not);
>
> 5. We broke old code. New features is "Cinderlib is a Technology Preview 
> feature only. Technology Preview features are not supported with Red Hat 
> production service level agreements (SLAs), might not be functionally 
> complete, and Red Hat does not recommend to use them for production".
>
> 6. Oh, we broke old code. Let's deprecate them and close PRODUCTION issues 
> (we didn't see anything).
>
>
> And again, we do not hate the new cinderlib integration. We just want the new
> technology not to break PRODUCTION clusters. Almost two years ago I wrote on
> this issue https://bugzilla.redhat.com/show_bug.cgi?id=1539837#c6 about
> "before deprecating, let's help to migrate". For now I see that oVirt will
> completely disable QEMU RBD support and wants to use the kernel RBD module +
> python os-brick + userland mappers + shell wrappers.
>
>
> Thanks, I hope I am writing this for a reason and it will help build bridges 
> between the community and the developers. We have been with oVirt for almost 
> 10 years and now it is a crossroads towards a different virtualization 
> manager.
>
> k
>
>
> So I see only regressions for now; I hope we'll find a code owner who can
> catch these oVirt 4.4-only bugs.
>

I looked at the bugs and I see you've already identified the problem
and have patches attached. If you can submit the patches and verify
them, perhaps we can merge the fixes.


[ovirt-users] Re: illegal disk status

2020-12-15 Thread Benny Zlotnik
You can use:
$ vdsm-client Volume delete (use --help to see the parameters)

After this you'll need to remove the corresponding image manually from
the database images table, mark the parent image as active, remove the
snapshot from the snapshots table, and fix the parent snapshot.

Be sure to back up before trying this.
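
A rough sketch of the procedure (all IDs are placeholders; the vdsm-client
parameter names are as I recall them from the Volume API, so confirm with
--help, and the SQL is only an illustration of the cleanup described above):

# On the host (SPM), delete the broken volume
$ vdsm-client Volume delete storagepoolID=<pool-id> storagedomainID=<domain-id> \
      imageID=<image-id> volumeID=<illegal-volume-id> postZero=false force=false

# On the engine, after taking a database backup:
$ sudo -u postgres psql engine -c \
    "DELETE FROM images WHERE image_guid = '<illegal-volume-id>';"
$ sudo -u postgres psql engine -c \
    "UPDATE images SET active = true WHERE image_guid = '<parent-volume-id>';"
$ sudo -u postgres psql engine -c \
    "DELETE FROM snapshots WHERE snapshot_id = '<orphaned-snapshot-id>';"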


On Sun, Dec 13, 2020 at 5:00 PM Daniel Menzel
 wrote:
>
> Hi,
>
> we have a problem with some VMs which cannot be started anymore due to an 
> illegal disk status of a snapshot.
>
> What happened (most likely)? We tried to snapshot those VMs some days ago, but
> the storage domain didn't have enough free space left. Yesterday we shut
> those VMs down - and from then on they didn't start anymore.
>
> What have I tried so far?
>
> Via the web interface I tried to remove the snapshot - didn't work.
> Searched the internet. Found (among other stuff) this: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1649129
> via vdsm-tool dump-volume-chains I managed to list those 5 snapshots (see 
> below).
>
> The output for one machine was:
>
>image:2d707743-4a9e-40bb-b223-83e3be672dfe
>
>  - 9ae6ea73-94b4-4588-9a6b-ea7a58ef93c9
>status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, 
> type: PREALLOCATED, capacity: 32212254720, truesize: 32212254720
>
>  - f7d2c014-e8f5-4413-bfc5-4aa1426cb1e2
>status: ILLEGAL, voltype: LEAF, format: COW, legality: 
> ILLEGAL, type: SPARSE, capacity: 32212254720, truesize: 29073408
>
> So my idea was to follow the said bugzilla thread and update the volume - but 
> I didn't manage to find input for the job_id and generation.
>
> So my question is: Does anyone have an idea on how to (force) remove a given
> snapshot via vdsm-{tool|client}?
>
> Thanks in advance!
> Daniel
>
> --
> Daniel Menzel
> Geschäftsführer
>
> Menzel IT GmbH
> Charlottenburger Str. 33a
> 13086 Berlin
>
> +49 (0) 30 / 5130 444 - 00
> daniel.men...@menzel-it.net
> https://menzel-it.net
>
> Geschäftsführer: Daniel Menzel, Josefin Menzel
> Unternehmenssitz: Berlin
> Handelsregister: Amtsgericht Charlottenburg
> Handelsregister-Nummer: HRB 149835 B
> USt-ID: DE 309 226 751
>


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-10 Thread Benny Zlotnik
Yes, the VM looks fine... to investigate this further I'd need the
full vdsm log with the error, please share it.

On Wed, Dec 9, 2020 at 3:01 PM Joseph Goldman  wrote:
>
> Attached XML dump.
>
> Looks like it's let me run a 'reboot', but I'm afraid to do a shutdown at
> this point.
>
> I have taken just a raw copy of the whole image group folder in the hope
> that if worst came to worst I'd be able to recreate the disk with the actual
> files.
>
> All existing files seem to be referenced in the xmldump.
>
> On 9/12/2020 11:54 pm, Benny Zlotnik wrote:
> > The VM is running, right?
> > Can you run:
> > $ virsh -r dumpxml 
> >
> > On Wed, Dec 9, 2020 at 2:01 PM Joseph Goldman  wrote:
> >> Looks like the physical files dont exist:
> >>
> >> 2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] START
> >> merge(drive={u'imageID': u'23710238-07c2-46f3-96c0-9061fe1c3e0d',
> >> u'volumeID': u'4b6f7ca1-b70d-4893-b473-d8d30138bb6b', u'domainID':
> >> u'74c06ce1-94e6-4064-9d7d-69e1d956645b', u'poolID':
> >> u'e2540c6a-33c7-4ac7-b2a2-175cf51994c2'},
> >> baseVolUUID=u'c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1',
> >> topVolUUID=u'a6d4533b-b0b0-475d-a436-26ce99a38d94', bandwidth=u'0',
> >> jobUUID=u'ff193892-356b-4db8-b525-e543e8e69d6a')
> >> from=:::192.168.5.10,56030,
> >> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> >> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:48)
> >>
> >> 2020-12-09 22:01:00,122+1000 INFO  (jsonrpc/4) [api.virt] FINISH merge
> >> return={'status': {'message': 'Drive image file could not be found',
> >> 'code': 13}} from=:::192.168.5.10,56030,
> >> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> >> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:54)
> >>
> >> Although looking on the physical file system they seem to exist:
> >>
> >> [root@ov-node1 23710238-07c2-46f3-96c0-9061fe1c3e0d]# ll
> >> total 56637572
> >> -rw-rw. 1 vdsm kvm  15936061440 Dec  9 21:51
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
> >> -rw-rw. 1 vdsm kvm  1048576 Dec  8 01:11
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.lease
> >> -rw-r--r--. 1 vdsm kvm  252 Dec  9 21:37
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.meta
> >> -rw-rw. 1 vdsm kvm  21521825792 Dec  8 01:47
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94
> >> -rw-rw. 1 vdsm kvm  1048576 May 17  2020
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94.lease
> >> -rw-r--r--. 1 vdsm kvm  256 Dec  8 01:49
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94.meta
> >> -rw-rw. 1 vdsm kvm 107374182400 Dec  9 01:13
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1
> >> -rw-rw. 1 vdsm kvm  1048576 Feb 24  2020
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.lease
> >> -rw-r--r--. 1 vdsm kvm  320 May 17  2020
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.meta
> >>
> >> The UUID's match the UUID's in the snapshot list.
> >>
> >> So much stuff happens in vdsm.log its hard to pinpoint whats going on
> >> but grepping 'c149117a-1080-424c-85d8-3de2103ac4ae' (flow-id) shows
> >> pretty much those 2 calls and then XML dump.
> >>
> >> Still a bit lost on the most comfortable way forward unfortunately.
> >>
> >> On 8/12/2020 11:15 pm, Benny Zlotnik wrote:
> >>>> [root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
> >>> grepping error is ok, but it does not show the reason for the failure,
> >>> which will probably be on the vdsm host (you can use flow_id
> >>> 9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file
> >>> Need to see the underlying error causing: VDSGenericException:
> >>> VDSErrorException: Failed to SnapshotVDS, error =
> >>> Snapshot failed, code = 48
> >>>
> >>>> Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
> >>>> DB) and then trying to create does not change it back to illegal, but
> >>>> trying to delete that snapshot fails and sets it back to 4.
> >>> I see, can you share the removal failure log (similar information as
> >>> requested above)
> >>>
> >>> regarding backup, I don't have a good answer, hopefully someone else
> >>> has suggestions


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-09 Thread Benny Zlotnik
The VM is running, right?
Can you run:
$ virsh -r dumpxml <vm name>

On Wed, Dec 9, 2020 at 2:01 PM Joseph Goldman  wrote:
>
> Looks like the physical files don't exist:
>
> 2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] START
> merge(drive={u'imageID': u'23710238-07c2-46f3-96c0-9061fe1c3e0d',
> u'volumeID': u'4b6f7ca1-b70d-4893-b473-d8d30138bb6b', u'domainID':
> u'74c06ce1-94e6-4064-9d7d-69e1d956645b', u'poolID':
> u'e2540c6a-33c7-4ac7-b2a2-175cf51994c2'},
> baseVolUUID=u'c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1',
> topVolUUID=u'a6d4533b-b0b0-475d-a436-26ce99a38d94', bandwidth=u'0',
> jobUUID=u'ff193892-356b-4db8-b525-e543e8e69d6a')
> from=:::192.168.5.10,56030,
> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:48)
>
> 2020-12-09 22:01:00,122+1000 INFO  (jsonrpc/4) [api.virt] FINISH merge
> return={'status': {'message': 'Drive image file could not be found',
> 'code': 13}} from=:::192.168.5.10,56030,
> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:54)
>
> Although looking on the physical file system they seem to exist:
>
> [root@ov-node1 23710238-07c2-46f3-96c0-9061fe1c3e0d]# ll
> total 56637572
> -rw-rw. 1 vdsm kvm  15936061440 Dec  9 21:51
> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
> -rw-rw. 1 vdsm kvm  1048576 Dec  8 01:11
> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.lease
> -rw-r--r--. 1 vdsm kvm  252 Dec  9 21:37
> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.meta
> -rw-rw. 1 vdsm kvm  21521825792 Dec  8 01:47
> a6d4533b-b0b0-475d-a436-26ce99a38d94
> -rw-rw. 1 vdsm kvm  1048576 May 17  2020
> a6d4533b-b0b0-475d-a436-26ce99a38d94.lease
> -rw-r--r--. 1 vdsm kvm  256 Dec  8 01:49
> a6d4533b-b0b0-475d-a436-26ce99a38d94.meta
> -rw-rw. 1 vdsm kvm 107374182400 Dec  9 01:13
> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1
> -rw-rw. 1 vdsm kvm  1048576 Feb 24  2020
> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.lease
> -rw-r--r--. 1 vdsm kvm  320 May 17  2020
> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.meta
>
> The UUIDs match the UUIDs in the snapshot list.
>
> So much stuff happens in vdsm.log that it's hard to pinpoint what's going on,
> but grepping 'c149117a-1080-424c-85d8-3de2103ac4ae' (flow-id) shows
> pretty much those 2 calls and then the XML dump.
>
> Still a bit lost on the most comfortable way forward, unfortunately.
>
> On 8/12/2020 11:15 pm, Benny Zlotnik wrote:
> >> [root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
> > grepping error is ok, but it does not show the reason for the failure,
> > which will probably be on the vdsm host (you can use flow_id
> > 9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file
> > Need to see the underlying error causing: VDSGenericException:
> > VDSErrorException: Failed to SnapshotVDS, error =
> > Snapshot failed, code = 48
> >
> >> Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
> >> DB) and then trying to create does not change it back to illegal, but
> >> trying to delete that snapshot fails and sets it back to 4.
> > I see, can you share the removal failure log (similar information as
> > requested above)
> >
> > regarding backup, I don't have a good answer, hopefully someone else
> > has suggestions


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-08 Thread Benny Zlotnik
>[root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
Grepping for ERROR is OK, but it does not show the reason for the failure,
which will probably be on the vdsm host (you can use flow_id
9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file).
We need to see the underlying error causing: VDSGenericException:
VDSErrorException: Failed to SnapshotVDS, error =
Snapshot failed, code = 48
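
For example, something along these lines on the host should locate the error
(assuming the default vdsm log location; rotated logs may be compressed, so
use xzgrep for those):

$ grep -l 9b2283fe-37cc-436c-89df-37c81abcb2e1 /var/log/vdsm/vdsm.log*
$ grep -B 5 -A 20 9b2283fe-37cc-436c-89df-37c81abcb2e1 /var/log/vdsm/vdsm.log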

>Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
>DB) and then trying to create does not change it back to illegal, but
>trying to delete that snapshot fails and sets it back to 4.
I see, can you share the removal failure log (similar information as
requested above)?

Regarding backup, I don't have a good answer; hopefully someone else
has suggestions.


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-08 Thread Benny Zlotnik
Do you know why your snapshot creation failed? Do you have logs with the error?

On paper the situation does not look too bad, as the only discrepancy
between the database and vdsm is the status of the image, and since
it's legal on vdsm, changing it to legal in the database should work (image
status 1).

>Active Image is not the same image that has a parentid of all 0
Can you elaborate on this? The image with the empty parent is usually
the base image (the first active image); the active image will usually
be the leaf (unless the VM is in preview or something similar).

Of course do not make any changes without backing up first
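
Purely as an illustration of that change (hypothetical disk image ID; back up
the engine database first):

# 1 = OK/legal, 4 = illegal in the engine's image status values
$ sudo -u postgres psql engine -c \
    "UPDATE images SET imagestatus = 1 WHERE image_guid = '<disk image id>';"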


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
It should be in the images table; there is an it_guid column which
indicates which template the image is based on.
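
For example, a query along these lines (hypothetical template ID; the extra
column names are from memory of the engine schema) should show which images
still reference the template:

$ sudo -u postgres psql engine -c \
    "SELECT image_guid, image_group_id, it_guid FROM images WHERE it_guid = '<template id>';"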

On Wed, Dec 2, 2020 at 2:16 PM <3c.moni...@gruppofilippetti.it> wrote:

> Hi,
> If I may ask for some other info: I probably found a "ghost disk" related to
> the previous problem.
>
> In fact, I still cannot remove the broken template, because its disk is
> still registered somewhere; could you please suggest where to search for
> it?
>
> Thanks a lot.


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
These are the available statuses[1]; you can change it to 0, assuming the
VM is down.


[1]
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/common/src/main/java/org/ovirt/engine/core/common/businessentities/VMStatus.java#L10
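
A minimal sketch of that change (hypothetical VM ID; 0 is the Down status per
the enum linked above - back up the engine database first):

$ sudo -u postgres psql engine -c \
    "UPDATE vm_dynamic SET status = 0 WHERE vm_guid = '<vm id>';"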

On Wed, Dec 2, 2020 at 12:57 PM <3c.moni...@gruppofilippetti.it> wrote:

> Hi.
> It's correct.
> But how do I unlock / change / remove it?
> In the same table, a lot of fields are empty, "0" or NULL.
> Only vm_guid and status have a value.
> Thanks,
> M.


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
I am not sure what is locked. If everything in the images table has imagestatus 1,
then the disks are not locked. If the VM is in status 15, which is the "Images
Locked" status, then this status is set in the vm_dynamic table.

On Wed, Dec 2, 2020 at 12:43 PM <3c.moni...@gruppofilippetti.it> wrote:

> Hi,
> in this table all imagestatus = 1
>
> Any other ideas?
>
> Thanks a lot,
> M.


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
imagestatus is in the images table, not vms

On Wed, Dec 2, 2020 at 11:30 AM <3c.moni...@gruppofilippetti.it> wrote:

> Hi.
> I did a full select on "vms" and the field "imagestatus" isn't there!
> Maybe this is the reason why the tool is unable to manage it?
> The full field list follows:
>
>
> "vm_name","mem_size_mb","max_memory_size_mb","num_of_io_threads","nice_level","cpu_shares","vmt_guid","os","description","free_text_comment","cluster_id","creation_date","auto_startup","lease_sd_id","lease_info","is_stateless","is_smartcard_enabled","is_delete_protected","sso_method","dedicated_vm_for_vds","default_boot_sequence","vm_type","vm_pool_spice_proxy","cluster_name","transparent_hugepages","trusted_service","storage_pool_id","storage_pool_name","cluster_spice_proxy","vmt_name","status","vm_ip","vm_ip_inet_array","vm_host","last_start_time","boot_time","downtime","guest_cur_user_name","console_cur_user_name","runtime_name","guest_os","console_user_id","guest_agent_nics_hash","run_on_vds","migrating_to_vds","app_list","vm_pool_name","vm_pool_id","vm_guid","num_of_monitors","single_qxl_pci","allow_console_reconnect","is_initialized","num_of_sockets","cpu_per_socket","threads_per_cpu","usb_policy","acpi_enable","session","num_of_cpus","quota_id","quota_name","quota_enforcement_
>
>  
> type","boot_sequence","utc_diff","client_ip","guest_requested_memory","time_zone","cpu_user","cpu_sys","elapsed_time","usage_network_percent","disks_usage","usage_mem_percent","usage_cpu_percent","run_on_vds_name","cluster_cpu_name","default_display_type","priority","iso_path","origin","cluster_compatibility_version","initrd_url","kernel_url","kernel_params","pause_status","exit_message","exit_status","migration_support","predefined_properties","userdefined_properties","min_allocated_mem","hash","cpu_pinning","db_generation","host_cpu_flags","tunnel_migration","vnc_keyboard_layout","is_run_and_pause","created_by_user_id","last_watchdog_event","last_watchdog_action","is_run_once","volatile_run","vm_fqdn","cpu_name","emulated_machine","current_cd","reason","exit_reason","instance_type_id","image_type_id","architecture","original_template_id","original_template_name","last_stop_time","migration_downtime","template_version_number","serial_number_policy","custom_serial_number","is_boot_m
>
>  
> enu_enabled","guest_cpu_count","next_run_config_exists","is_previewing_snapshot","numatune_mode","is_spice_file_transfer_enabled","is_spice_copy_paste_enabled","cpu_profile_id","is_auto_converge","is_migrate_compressed","custom_emulated_machine","bios_type","custom_cpu_name","spice_port","spice_tls_port","spice_ip","vnc_port","vnc_ip","ovirt_guest_agent_status","qemu_guest_agent_status","guest_mem_buffered","guest_mem_cached","small_icon_id","large_icon_id","migration_policy_id","provider_id","console_disconnect_action","resume_behavior","guest_timezone_offset","guest_timezone_name","guestos_arch","guestos_codename","guestos_distribution","guestos_kernel_version","guestos_type","guestos_version","custom_compatibility_version","guest_containers","has_illegal_images","multi_queues_enabled"
>
> And just to let you know, its "status = 15"
>
> Please let me know.
> Thanks,
> M.


[ovirt-users] Re: Check multipath status using API

2020-11-26 Thread Benny Zlotnik
It is implemented; there is no special API for this. Using the events
endpoint (ovirt-engine/api/events) is the way to access this information.
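
For example (hypothetical engine URL and credentials), the events collection
can be polled and filtered for multipath-related entries:

$ curl -s -k -u admin@internal:PASSWORD \
    "https://engine.example.com/ovirt-engine/api/events" | grep -i -B 2 -A 2 multipath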

On Thu, Nov 26, 2020 at 3:00 PM Paulo Silva  wrote:

> Hi,
>
> Is it possible to check the multipath status using the current REST API on
> ovirt?
>
> There is an old page that hints at this but I'm not sure if this has been
> implemented:
>
>
> https://www.ovirt.org/develop/release-management/features/storage/multipath-events.html
>
> Thanks
> --
> Paulo Silva 


[ovirt-users] Re: Backporting of Fixes

2020-11-15 Thread Benny Zlotnik
Hi,

4.3 is no longer maintained.
Regardless, this bug was never reproduced and has no fixes attached to it,
so there is nothing to backport. The related bugs and their fixes are all
related to changes that were introduced in 4.4, so it is unlikely you hit
the same issue.
If you can share more details and attach logs we may know more.

On Thu, Nov 12, 2020 at 11:24 PM Gillingham, Eric J (US 393D) via Users <
users@ovirt.org> wrote:

> I'm still running on oVirt 4.3 due to some hardware that will require some
> extra effort to move to 4.4 we're not quite ready to do yet, and am
> currently hitting what I believe to be
> https://bugzilla.redhat.com/show_bug.cgi?id=1820998 which is fixed in
> 4.4. I'm wondering if there's a process to request a backport, or should I
> just open a new bug against 4.3?
>
> Thank You
> - Eric
>


[ovirt-users] Re: LiveStorageMigration fail

2020-11-09 Thread Benny Zlotnik
Which version are you using?
Did this happen more than once for the same disk?
A similar bug was fixed in 4.3.10.1[1].
There is another bug with a similar symptom which occurs very rarely and which we
were unable to reproduce.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1758048

On Mon, Nov 9, 2020 at 3:57 PM Christoph Köhler <
koeh...@luis.uni-hannover.de> wrote:

> Hello experts,
>
> perhaps someone has an idea about this error. It appears when I try to
> migrate a disk to another storage domain, live. Generally it works
> well, but - this is the log snippet:
>
> HSMGetAllTasksStatusesVDS failed: Error during destination image
> manipulation: u"image=02240cf3-65b6-487c-b5af-c266a1dd18f8, dest
> domain=3c4fbbfe-6796-4007-87ab-d7f205b7fae3: msg=Invalid parameter:
> 'capacity=134217728000'"
>
> Surely there is enough space on the target domain for this operation (~4TB).
>
> Any ideas..?
>
> Greetings from
> Chris


[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-05 Thread Benny Zlotnik
You mean the disk physically resides on one storage domain, but the
engine sees it on another?
Which version did this happen on?
Do you have the logs from this failure?

On Tue, Nov 3, 2020 at 5:51 PM  wrote:

>
>
> I used it but it didn't work. The disk is still in locked status.
>
> When I run the unlock_entity.sh script it doesn't show that the disk is
> locked,
>
> but I was able to determine that the disk was moved to the other
> storage domain, while the engine still shows it on the old storage.


[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-03 Thread Benny Zlotnik
Do you know why it was stuck?

You can use unlock_entity.sh[1] to unlock the disk


[1]
https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
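
For example, on the engine machine (hypothetical disk ID; the usual script
location is shown, and the -q/positional-ID usage should be confirmed with -h):

$ /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -q
$ /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk <disk id>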

On Tue, Nov 3, 2020 at 1:38 PM  wrote:

> I have a VM that has two disks, one active and the other deactivated. When
> trying to migrate the disk to another storage domain, the task got stuck in a
> loop creating several snapshots. I turned off the VM and the loop stopped, and
> after several hours the task disappeared, but the VM disk was left locked,
> making it impossible to delete it; when trying to remove the VM it fails
> with the following message: locked disk making it impossible to
> remove vm
>
>
> How can I solve this?


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Sorry, I accidentally hit send prematurely. The database table is
driver_options; the options are stored as JSON under driver_options.
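
A sketch of what the manual edit could look like (the column names other than
driver_options are my assumption, not from this thread - inspect the table
first and back up the engine database):

$ sudo -u postgres psql engine -c "SELECT * FROM driver_options;"
# Rewrite the JSON for the relevant domain; include ALL of the domain's
# existing options, not just the corrected one (illustrative WHERE clause)
$ sudo -u postgres psql engine -c \
    "UPDATE driver_options SET driver_options = '{\"volume_driver\": \"cinder.volume.drivers.rbd.RBDDriver\", \"use_multipath_for_image_xfer\": \"true\"}' WHERE storage_domain_id = '<domain id>';"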

On Wed, Oct 14, 2020 at 5:32 PM Benny Zlotnik  wrote:
>
> Did you attempt to start a VM with this disk and it failed, or you
> didn't try at all? If it's the latter then the error is strange...
> If it's the former there is a known issue with multipath at the
> moment, see[1] for a workaround, since you might have issues with
> detaching volumes which later, because multipath grabs the rbd devices
> which would fail `rbd unmap`, it will be fixed soon by automatically
> blacklisting rbd in multipath configuration.
>
> Regarding editing, you can submit an RFE for this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
>
>
>
>
> On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
> >
> > On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > > to add `rbd default features = 3` to the configuration. I think there
> > > are plans to support rbd-nbd in cinderlib which would allow using
> > > additional features, but I'm not aware of anything concrete.
> > >
> > > Additionally, the path for the cinderlib log is
> > > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > > would appear in the vdsm.log on the relevant host, and would look
> > > something like "RBD image feature set mismatch. You can disable
> > > features unsupported by the kernel with 'rbd feature disable'"
> >
> > Thanks for the pointer!  Indeed,
> > /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> > looking for.  In this case, it was a user error entering the RBDDriver
> > options:
> >
> >
> > 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> > option use_multipath_for_xfer
> >
> > ...it should have been 'use_multipath_for_image_xfer'.
> >
> > Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> > Domains -> Manage Domain', all driver options are unedittable except for
> > 'Name'.
> >
> > Then I thought that maybe I can't edit the driver options while a disk
> > still exists, so I tried removing the one disk in this domain.  But even
> > after multiple attempts, it still fails with:
> >
> > 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> > volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> > 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> > when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> > update or delete on table "volumes" violates foreign key constraint
> > "volume_attachment_volume_id_fkey" on table "volume_attachment"
> > DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> > referenced from table "volume_attachment".
> >
> > See https://pastebin.com/KwN1Vzsp for the full log entries related to
> > this removal.
> >
> > It's not lying, the volume no longer exists in the rbd pool, but the
> > cinder database still thinks it's attached, even though I was never able
> > to get it to attach to a VM.
> >
> > What are my options for cleaning up this stale disk in the cinder database?
> >
> > How can I update the driver options in my storage domain (deleting and
> > recreating the domain is acceptable, if possible)?
> >
> > --Mike
> >


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Did you attempt to start a VM with this disk and it failed, or did you
not try at all? If it's the latter then the error is strange...
If it's the former, there is a known issue with multipath at the
moment, see [1] for a workaround: you might have issues with
detaching volumes later, because multipath grabs the rbd devices,
which would make `rbd unmap` fail. It will be fixed soon by automatically
blacklisting rbd in the multipath configuration.

Regarding editing, you can submit an RFE for this, but it is currently
not possible. The options are indeed to either recreate the storage
domain or edit the database table


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
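
Until that lands, a possible interim workaround (my own sketch, not necessarily
the exact configuration from the bug above) is to blacklist rbd devices in
multipath on each host:

$ cat > /etc/multipath/conf.d/rbd-blacklist.conf <<'EOF'
blacklist {
    devnode "^rbd[0-9]*"
}
EOF
$ systemctl reload multipathd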




On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
>
> On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > to add `rbd default features = 3` to the configuration. I think there
> > are plans to support rbd-nbd in cinderlib which would allow using
> > additional features, but I'm not aware of anything concrete.
> >
> > Additionally, the path for the cinderlib log is
> > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > would appear in the vdsm.log on the relevant host, and would look
> > something like "RBD image feature set mismatch. You can disable
> > features unsupported by the kernel with 'rbd feature disable'"
>
> Thanks for the pointer!  Indeed,
> /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> looking for.  In this case, it was a user error entering the RBDDriver
> options:
>
>
> 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> option use_multipath_for_xfer
>
> ...it should have been 'use_multipath_for_image_xfer'.
>
> Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> Domains -> Manage Domain', all driver options are unedittable except for
> 'Name'.
>
> Then I thought that maybe I can't edit the driver options while a disk
> still exists, so I tried removing the one disk in this domain.  But even
> after multiple attempts, it still fails with:
>
> 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> update or delete on table "volumes" violates foreign key constraint
> "volume_attachment_volume_id_fkey" on table "volume_attachment"
> DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> referenced from table "volume_attachment".
>
> See https://pastebin.com/KwN1Vzsp for the full log entries related to
> this removal.
>
> It's not lying, the volume no longer exists in the rbd pool, but the
> cinder database still thinks it's attached, even though I was never able
> to get it to attach to a VM.
>
> What are my options for cleaning up this stale disk in the cinder database?
>
> How can I update the driver options in my storage domain (deleting and
> recreating the domain is acceptable, if possible)?
>
> --Mike
>


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
tachDiskToVmCommand' failed:
> EngineException: java.lang.NullPointerException (Failed with error
> ENGINE and code 5001)
> 2020-10-13 15:15:26,013-05 ERROR
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Transaction rolled-back for command
> 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand'.
> 2020-10-13 15:15:26,021-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-13) [7cb262cc] EVENT_ID:
> USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk testvm_disk
> to VM grafana (User: michael.thomas@internal-authz).
> 2020-10-13 15:15:26,021-05 INFO
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Lock freed to object
> 'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]',
> sharedLocks=''}'
>
> The /var/log/cinder/ directory on the ovirt node is empty, and doesn't
> exist on the engine itself.
>
> To verify that it's not a cephx permission issue, I tried accessing the
> block storage from both the engine and the ovirt node using the
> credentials I set up in the ManagedBlockStorage setup page:
>
> [root@ovirt4]# rbd --id ovirt ls rbd.ovirt.data
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> [root@ovirt4]# rbd --id ovirt info
> rbd.ovirt.data/volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> rbd image 'volume-5419640e-445f-4b3f-a29d-b316ad031b7a':
>  size 100 GiB in 25600 objects
>  order 22 (4 MiB objects)
>  snapshot_count: 0
>  id: 68a7cd6aeb3924
>  block_name_prefix: rbd_data.68a7cd6aeb3924
>  format: 2
>  features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>  op_features:
>  flags:
>  create_timestamp: Tue Oct 13 06:53:55 2020
>  access_timestamp: Tue Oct 13 06:53:55 2020
>  modify_timestamp: Tue Oct 13 06:53:55 2020
>
> Where else can I look to see where it's failing?
>
> --Mike
>
> On 9/30/20 2:19 AM, Benny Zlotnik wrote:
> > When you ran `engine-setup` did you enable cinderlib preview (it will
> > not be enabled by default)?
> > It should handle the creation of the database automatically, if you
> > didn't you can enable it by running:
> > `engine-setup --reconfigure-optional-components`
> >
> >
> > On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
> >>
> >> Hi Benny,
> >>
> >> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> >> Octopus.  Then I tried using these instructions, as well as the deep
> >> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
> >>
> >> I've done this a couple of times, and each time the engine fails when I
> >> try to add the new managed block storage domain.  The error on the
> >> screen indicates that it can't connect to the cinder database.  The
> >> error in the engine log is:
> >>
> >> 2020-09-29 17:02:11,859-05 WARN
> >> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> >> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> >> action 'AddManagedBlockStorageDomain' failed for user
> >> admin@internal-authz. Reasons:
> >> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
> >>
> >> I had created the db on the engine with this command:
> >>
> >> su - postgres -c "psql -d template1 -c \"create database cinder owner
> >> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> >> lc_ctype 'en_US.UTF-8';\""
> >>
> >> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
> >>
> >>   hostcinder  engine  ::0/0   md5
> >>   hostcinder  engine  0.0.0.0/0   md5
> >>
> >> Is there anywhere else I should look to find out what may have gone wrong?
> >>
> >> --Mike
> >>
> >> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> >>> The feature is currently in tech preview, but it's being worked on.
> >>> The feature page is outdated,  but I believe this is what most users
> >>> in the mailing list were using. We held off on updating it because the
> >>> installation instructions have been a moving target, but it is more
> >>> stable now and I will update it soon.
> >>>
> >>> Specifically speaking, the openstack ver

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
Not sure about this, adding +Yedidyah Bar David

On Wed, Sep 30, 2020 at 3:04 PM Michael Thomas  wrote:
>
> I hadn't installed the necessary packages when the engine was first
> installed.
>
> However, running 'engine-setup --reconfigure-optional-components'
> doesn't work at the moment because (by design) my engine does not have a
> network route outside of the cluster.  It fails with:
>
> [ INFO  ] DNF Errors during downloading metadata for repository 'AppStream':
> - Curl error (7): Couldn't connect to server for
> http://mirrorlist.centos.org/?release=8=x86_64=AppStream=$infra
> [Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]
> [ ERROR ] DNF Failed to download metadata for repo 'AppStream': Cannot
> prepare internal mirrorlist: Curl error (7): Couldn't connect to server
> for
> http://mirrorlist.centos.org/?release=8=x86_64=AppStream=$infra
> [Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]
>
>
> I have a proxy set in the engine's /etc/dnf/dnf.conf, but it doesn't
> seem to be obeyed when running engine-setup.  Is there another way that
> I can get engine-setup to use a proxy?
>
> --Mike
>
>
> On 9/30/20 2:19 AM, Benny Zlotnik wrote:
> > When you ran `engine-setup` did you enable cinderlib preview (it will
> > not be enabled by default)?
> > It should handle the creation of the database automatically, if you
> > didn't you can enable it by running:
> > `engine-setup --reconfigure-optional-components`
> >
> >
> > On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
> >>
> >> Hi Benny,
> >>
> >> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> >> Octopus.  Then I tried using these instructions, as well as the deep
> >> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
> >>
> >> I've done this a couple of times, and each time the engine fails when I
> >> try to add the new managed block storage domain.  The error on the
> >> screen indicates that it can't connect to the cinder database.  The
> >> error in the engine log is:
> >>
> >> 2020-09-29 17:02:11,859-05 WARN
> >> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> >> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> >> action 'AddManagedBlockStorageDomain' failed for user
> >> admin@internal-authz. Reasons:
> >> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
> >>
> >> I had created the db on the engine with this command:
> >>
> >> su - postgres -c "psql -d template1 -c \"create database cinder owner
> >> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> >> lc_ctype 'en_US.UTF-8';\""
> >>
> >> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
> >>
> >>   hostcinder  engine  ::0/0   md5
> >>   hostcinder  engine  0.0.0.0/0   md5
> >>
> >> Is there anywhere else I should look to find out what may have gone wrong?
> >>
> >> --Mike
> >>
> >> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> >>> The feature is currently in tech preview, but it's being worked on.
> >>> The feature page is outdated,  but I believe this is what most users
> >>> in the mailing list were using. We held off on updating it because the
> >>> installation instructions have been a moving target, but it is more
> >>> stable now and I will update it soon.
> >>>
> >>> Specifically speaking, the openstack version should be updated to
> >>> train (it is likely ussuri works fine too, but I haven't tried it) and
> >>> cinderlib has an RPM now (python3-cinderlib)[1], so it can be
> >>> installed instead of using pip, same goes for os-brick. The rest of
> >>> the information is valid.
> >>>
> >>>
> >>> [1] 
> >>> http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
> >>>
> >>> On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
> >>>>
> >>>> I'm looking for the latest documentation for setting up a Managed Block
> >>>> Device storage domain so that I can move some of my VM images to ceph 
> >>>> rbd.
> >>>>
> >>>> I found this:
&

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`


On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
>
> Hi Benny,
>
> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> Octopus.  Then I tried using these instructions, as well as the deep
> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
>
> I've done this a couple of times, and each time the engine fails when I
> try to add the new managed block storage domain.  The error on the
> screen indicates that it can't connect to the cinder database.  The
> error in the engine log is:
>
> 2020-09-29 17:02:11,859-05 WARN
> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> action 'AddManagedBlockStorageDomain' failed for user
> admin@internal-authz. Reasons:
> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
>
> I had created the db on the engine with this command:
>
> su - postgres -c "psql -d template1 -c \"create database cinder owner
> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> lc_ctype 'en_US.UTF-8';\""
>
> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
>
>  hostcinder  engine  ::0/0   md5
>  hostcinder  engine  0.0.0.0/0   md5
>
> Is there anywhere else I should look to find out what may have gone wrong?
>
> --Mike
>
> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> > The feature is currently in tech preview, but it's being worked on.
> > The feature page is outdated,  but I believe this is what most users
> > in the mailing list were using. We held off on updating it because the
> > installation instructions have been a moving target, but it is more
> > stable now and I will update it soon.
> >
> > Specifically speaking, the openstack version should be updated to
> > train (it is likely ussuri works fine too, but I haven't tried it) and
> > cinderlib has an RPM now (python3-cinderlib)[1], so it can be
> > installed instead of using pip, same goes for os-brick. The rest of
> > the information is valid.
> >
> >
> > [1] 
> > http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
> >
> > On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
> >>
> >> I'm looking for the latest documentation for setting up a Managed Block
> >> Device storage domain so that I can move some of my VM images to ceph rbd.
> >>
> >> I found this:
> >>
> >> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
> >>
> >> ...but it has a big note at the top that it is "...not user
> >> documentation and should not be treated as such."
> >>
> >> The oVirt administration guide[1] does not talk about managed block 
> >> devices.
> >>
> >> I've found a few mailing list threads that discuss people setting up a
> >> Managed Block Device with ceph, but didn't see any links to
> >> documentation steps that folks were following.
> >>
> >> Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
> >> and if so, where is the documentation for using it?
> >>
> >> --Mike
> >> [1]ovirt.org/documentation/administration_guide/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-29 Thread Benny Zlotnik
The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the openstack version should be updated to
train (it is likely ussuri works fine too, but I haven't tried it), and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip; the same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
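
For reference, a rough sketch of the install on CentOS 8 (the repo and package
names are my assumption based on the RDO repositories, so verify them for your
release):

$ dnf install -y centos-release-openstack-ussuri   # or -train
$ dnf install -y python3-cinderlib                 # on the engine
$ dnf install -y python3-os-brick ceph-common      # on the hosts; ceph-common only if the backend is ceph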

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
>
> I'm looking for the latest documentation for setting up a Managed Block
> Device storage domain so that I can move some of my VM images to ceph rbd.
>
> I found this:
>
> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> ...but it has a big note at the top that it is "...not user
> documentation and should not be treated as such."
>
> The oVirt administration guide[1] does not talk about managed block devices.
>
> I've found a few mailing list threads that discuss people setting up a
> Managed Block Device with ceph, but didn't see any links to
> documentation steps that folks were following.
>
> Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
> and if so, where is the documentation for using it?
>
> --Mike
> [1]ovirt.org/documentation/administration_guide/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHCLXVOCELHOR3G7SH3GDPGRKITCW7UY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VQ7QQOP5T6UBFRXGWHNUN2SYN2CBPIZZ/


[ovirt-users] Re: Problem with "ceph-common" pkg for oVirt Node 4.4.1

2020-08-19 Thread Benny Zlotnik
I think it would be easier to get an answer for this on a ceph mailing
list, but why do you need specifically 12.2.7?

On Wed, Aug 19, 2020 at 4:08 PM  wrote:
>
> Hi!
> I have a problem installing the ceph-common package (needed for cinderlib 
> Managed Block Storage) on oVirt Node 4.4.1. The oVirt doc says "$ yum install 
> -y ceph-common", but there is no repo with ceph-common ver. 12.2.7 for CentOS 8 - 
> official CentOS has only "ceph-common-10.2.5-4.el7.x86_64.rpm" and Ceph has 
> only ceph-common ver. 14.2 for EL8.
> How can I install ceph-common ver. 12.2.7?
>
> BR
> Mike
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJ4UFLRMLS7GMTTMUGUM4QHSVNX5CZRV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OKUXQ4DM3FNO77BF236C3PRIMLVDCGP/


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-23 Thread Benny Zlotnik
I think you can remove 6197b30d-0732-4cc7-aef0-12f9f6e9565b from the images
table, remove the corresponding snapshot row, set the parent,
8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8, as active (the active = 't' field), and
change its snapshot to be the active snapshot. That is, if I correctly
understand the current layout: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
was removed from the storage and 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 is
now the only volume for the disk.
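
A sketch of the corresponding statements, using the IDs from your output below;
which snapshot row to drop (6bc03db7..., the one the parent image used to
reference) and the volume_classification value are my reading of the layout,
so back up the engine database first and double-check the IDs:

$ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
$ psql -U engine -d engine -c "UPDATE images SET active = 't', volume_classification = 0, vm_snapshot_id = 'fd5193ac-dfbc-4ed2-b86c-21caa8009bb2' WHERE image_guid = '8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8';"
$ psql -U engine -d engine -c "DELETE FROM snapshots WHERE snapshot_id = '6bc03db7-82a3-4b7e-9674-0bdd76933eb8';"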

On Wed, Jul 22, 2020 at 1:32 PM Arsène Gschwind 
wrote:

> Please find the result:
>
> psql -d engine -c "\x on" -c "select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';"
>
> Expanded display is on.
>
> -[ RECORD 1 ]-+-
>
> image_guid| 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> creation_date | 2020-04-23 14:59:23+02
>
> size  | 161061273600
>
> it_guid   | ----
>
> parentid  | ----
>
> imagestatus   | 1
>
> lastmodified  | 2020-07-06 20:38:36.093+02
>
> vm_snapshot_id| 6bc03db7-82a3-4b7e-9674-0bdd76933eb8
>
> volume_type   | 2
>
> volume_format | 4
>
> image_group_id| d7bd480d-2c51-4141-a386-113abf75219e
>
> _create_date  | 2020-04-23 14:59:20.919344+02
>
> _update_date  | 2020-07-06 20:38:36.093788+02
>
> active| f
>
> volume_classification | 1
>
> qcow_compat   | 2
>
> -[ RECORD 2 ]-+-
>
> image_guid| 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> creation_date | 2020-07-06 20:38:38+02
>
> size  | 161061273600
>
> it_guid   | ----
>
> parentid  | 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> imagestatus   | 1
>
> lastmodified  | 1970-01-01 01:00:00+01
>
> vm_snapshot_id| fd5193ac-dfbc-4ed2-b86c-21caa8009bb2
>
> volume_type   | 2
>
> volume_format | 4
>
> image_group_id| d7bd480d-2c51-4141-a386-113abf75219e
>
> _create_date  | 2020-07-06 20:38:36.093788+02
>
> _update_date  | 2020-07-06 20:38:52.139003+02
>
> active| t
>
> volume_classification | 0
>
> qcow_compat   | 2
>
>
> psql -d engine -c "\x on" -c "SELECT s.* FROM snapshots s, images i where 
> i.vm_snapshot_id = s.snapshot_id and i.image_guid = 
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
>
> Expanded display is on.
>
> -[ RECORD 1 
> ]---+--
>
> snapshot_id | fd5193ac-dfbc-4ed2-b86c-21caa8009bb2
>
> vm_id   | b5534254-660f-44b1-bc83-d616c98ba0ba
>
> snapshot_type   | ACTIVE
>
> status  | OK
>
> description | Active VM
>
> creation_date   | 2020-04-23 14:59:20.171+02
>
> app_list| 
> kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt-guest-agent-common-1.0.14-1.el7
>
> vm_configuration|
>
> _create_date| 2020-04-23 14:59:20.154023+02
>
> _update_date| 2020-07-03 17:33:17.483215+02
>
> memory_metadata_disk_id |
>
> memory_dump_disk_id |
>
> vm_configuration_broken | f
>
>
> Thanks.
>
>
>
> On Tue, 2020-07-21 at 13:45 +0300, Benny Zlotnik wrote:
>
> I forgot to add the `\x on` to make the output readable, can you run it
> with:
> $ psql -U engine -d engine -c "\x on" -c ""
>
> On Mon, Jul 20, 2020 at 2:50 PM Arsène Gschwind 
> wrote:
>
> Hi,
>
> Please find the output:
>
> select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';
>
>
>   image_guid  | creation_date  | size 
> |   it_guid|   parentid   
> | imagestatus |lastmodified|vm_snapshot_id
> | volume_type | volume_for
>
> mat |image_group_id| _create_date  |  
>_update_date  | active | volume_classification | qcow_compat
>
> --++--+--+--+-++-

[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-23 Thread Benny Zlotnik
It was fixed [1]; you need to upgrade to libvirt 6+ and qemu 4.2+.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939
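
To check what is currently installed on the host, something along these lines
should do (package names may differ slightly between distributions):

$ rpm -q libvirt-daemon qemu-kvm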


On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot  wrote:

>
>
>
>
> Hi all,
>
> I've got 2 two node setup, image based installs.
> When doing ova exports or generic snapshots, things seem in order.
> Removing snapshots shows warning 'disk in illegal state'
>
> Mouse hover shows .. please do not shutdown before succesfully remove
> snapshot
>
>
> ovirt-engine log
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM node2.lab command MergeVDS failed:
> Merge failed
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command 'MergeVDSCommand(HostName =
> node2.lab,
> MergeVDSCommandParameters:{hostId='02df5213-1243-4671-a1c6-6489d7146319',
> vmId='64c25543-bef7-4fdd-8204-6507046f5a34',
> storagePoolId='5a4ea80c-b3b2-11ea-a890-00163e3cb866',
> storageDomainId='9a12f1b2-5378-46cc-964d-3575695e823f',
> imageGroupId='3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8',
> imageId='c757e740-9013-4ae0-901d-316932f4af0e',
> baseImageId='ebe50730-dec3-4f29-8a38-9ae7c59f2aef',
> topImageId='c757e740-9013-4ae0-901d-316932f4af0e', bandwidth='0'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> MergeVDS, error = Merge failed, code = 52
> 2020-07-22 16:40:37,549+02 ERROR [org.ovirt.engine.core.bll.MergeCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Engine exception thrown while
> sending merge command: org.ovirt.engine.core.common.errors.EngineException:
> EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52 (Failed with error mergeErr and code 52)
> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52
>   
>io='threads'/>
> 2020-07-22 16:40:39,659+02 ERROR
> [org.ovirt.engine.core.bll.MergeStatusCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-3)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Failed to live merge. Top volume
> c757e740-9013-4ae0-901d-316932f4af0e is still in qemu chain
> [ebe50730-dec3-4f29-8a38-9ae7c59f2aef, c757e740-9013-4ae0-901d-316932f4af0e]
> 2020-07-22 16:40:41,524+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command id:
> 'e0b2bce7-afe0-4955-ae46-38bcb8719852 failed child command status for step
> 'MERGE_STATUS'
> 2020-07-22 16:40:42,597+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Merging of snapshot
> 'ef8f7e06-e48c-4a8c-983c-64e3d4ebfcf9' images
> 'ebe50730-dec3-4f29-8a38-9ae7c59f2aef'..'c757e740-9013-4ae0-901d-316932f4af0e'
> failed. Images have been marked illegal and can no longer be previewed or
> reverted to. Please retry Live Merge on the snapshot to complete the
> operation.
> 2020-07-22 16:40:42,603+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
> with failure.
> 2020-07-22 16:40:43,679+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
> 2020-07-22 16:40:43,774+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot
> 'Auto-generated for Export To OVA' for VM 'Adhoc'.
>
>
> VDSM on hypervisor
> 2020-07-22 14:14:30,220+0200 ERROR (jsonrpc/5) [virt.vm]
> (vmId='14283e6d-c3f0-4011-b90f-a1272f0fbc10') Live merge failed (job:
> e59c54d9-b8d3-44d0-9147-9dd40dff57b9) (vm:5381)
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed',
> dom=self)
> libvirt.libvirtError: internal error: qemu block name 'json:{"backing":
> {"driver": "qcow2", "file": {"driver": "file", "filename":
> 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-21 Thread Benny Zlotnik
I forgot to add the `\x on` to make the output readable, can you run it
with:
$ psql -U engine -d engine -c "\x on" -c ""

On Mon, Jul 20, 2020 at 2:50 PM Arsène Gschwind 
wrote:

> Hi,
>
> Please find the output:
>
> select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';
>
>
>   image_guid  | creation_date  | size 
> |   it_guid|   parentid   
> | imagestatus |lastmodified|vm_snapshot_id
> | volume_type | volume_for
>
> mat |image_group_id| _create_date  |  
>_update_date  | active | volume_classification | qcow_compat
>
> --++--+--+--+-++--+-+---
>
> +--+---+---++---+-
>
>  8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 | 2020-04-23 14:59:23+02 | 161061273600 
> | ---- | ---- 
> |   1 | 2020-07-06 20:38:36.093+02 | 
> 6bc03db7-82a3-4b7e-9674-0bdd76933eb8 |   2 |
>
>   4 | d7bd480d-2c51-4141-a386-113abf75219e | 2020-04-23 14:59:20.919344+02 | 
> 2020-07-06 20:38:36.093788+02 | f  | 1 |   2
>
>  6197b30d-0732-4cc7-aef0-12f9f6e9565b | 2020-07-06 20:38:38+02 | 161061273600 
> | ---- | 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 
> |   1 | 1970-01-01 01:00:00+01 | 
> fd5193ac-dfbc-4ed2-b86c-21caa8009bb2 |   2 |
>
>   4 | d7bd480d-2c51-4141-a386-113abf75219e | 2020-07-06 20:38:36.093788+02 | 
> 2020-07-06 20:38:52.139003+02 | t  | 0 |   2
>
> (2 rows)
>
>
>
> SELECT s.* FROM snapshots s, images i where i.vm_snapshot_id = s.snapshot_id 
> and i.image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';
>
>  snapshot_id  |vm_id 
> | snapshot_type | status | description |   creation_date| 
>   app_list
>
>  | vm_configuration | _create_date
>   | _update_date  | memory_metadata_disk_id | 
> memory_dump_disk_id | vm_configuration_broken
>
> --+--+---++-++--
>
> -+--+---+---+-+-+-
>
>  fd5193ac-dfbc-4ed2-b86c-21caa8009bb2 | b5534254-660f-44b1-bc83-d616c98ba0ba 
> | ACTIVE| OK | Active VM   | 2020-04-23 14:59:20.171+02 | 
> kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt
>
> -guest-agent-common-1.0.14-1.el7 |  | 2020-04-23 
> 14:59:20.154023+02 | 2020-07-03 17:33:17.483215+02 | 
> | | f
>
> (1 row)
>
>
> Thanks,
> Arsene
>
> On Sun, 2020-07-19 at 16:34 +0300, Benny Zlotnik wrote:
>
> Sorry, I only replied to the question, in addition to removing the
>
> image from the images table, you may also need to set the parent as
>
> the active image and remove the snapshot referenced by this image from
>
> the database. Can you provide the output of:
>
> $ psql -U engine -d engine -c "select * from images where
>
> image_group_id = ";
>
>
> As well as
>
> $ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
>
> where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
>
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
>
>
> On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik <
>
> bzlot...@redhat.com
>
> > wrote:
>
>
> It can be done by deleting from the images table:
>
> $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
>
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
>
>
> of course the database should be backed up before doing this
>
>
>
>
> On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer <
>
> nsof...@red

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
Sorry, I only replied to the question, in addition to removing the
image from the images table, you may also need to set the parent as
the active image and remove the snapshot referenced by this image from
the database. Can you provide the output of:
$ psql -U engine -d engine -c "select * from images where
image_group_id = ";

As well as
$ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b';"

On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik  wrote:
>
> It can be done by deleting from the images table:
> $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
>
> of course the database should be backed up before doing this
>
>
>
> On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
> >
> > On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
> >  wrote:
> >
> > > It looks like the Pivot completed successfully, see attached vdsm.log.
> > > Is there a way to recover that VM?
> > > Or would it be better to recover the VM from Backup?
> >
> > This what we see in the log:
> >
> > 1. Merge request recevied
> >
> > 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> > merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> > u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> > u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> > u'0002-0002-0002-0002-0289'},
> > baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> > topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> > from=:::10.34.38.31,39226,
> > flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> > vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
> >
> > To track this job, we can use the jobUUID: 
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> > and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
> >
> > 2. Starting the merge
> >
> > 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> > chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> > top=None, bandwidth=0, flags=12 (vm:5945)
> >
> > We see the original chain:
> > 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
> >
> > 3. The merge was completed, ready for pivot
> >
> > 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > is ready (vm:5847)
> >
> > At this point parent volume contains all the data in top volume and we can 
> > pivot
> > to the parent volume.
> >
> > 4. Vdsm detect that the merge is ready, and start the clean thread
> > that will complete the merge
> >
> > 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> > for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
> >
> > 5. Requesting pivot to parent volume:
> >
> > 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> > complete active layer commit (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
> >
> > 6. Pivot was successful
> >
> > 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > has completed (vm:5838)
> >
> > 7. Vdsm wait until libvirt updates the xml:
> >
> > 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)
> >
> > 8. Syncronizing vdsm metadata
> >
> > 2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
> > imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
> > img

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
It can be done by deleting from the images table:
$ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b'";

of course the database should be backed up before doing this
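
For example, a minimal backup before touching the database could look like
this (the file names are just placeholders):

$ engine-backup --mode=backup --scope=all --file=engine-db-backup.tar.gz --log=engine-db-backup.log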



On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
>
> On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
>  wrote:
>
> > It looks like the Pivot completed successfully, see attached vdsm.log.
> > Is there a way to recover that VM?
> > Or would it be better to recover the VM from Backup?
>
> This what we see in the log:
>
> 1. Merge request recevied
>
> 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> u'0002-0002-0002-0002-0289'},
> baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> from=:::10.34.38.31,39226,
> flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
>
> To track this job, we can use the jobUUID: 
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> 2. Starting the merge
>
> 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> top=None, bandwidth=0, flags=12 (vm:5945)
>
> We see the original chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 3. The merge was completed, ready for pivot
>
> 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> is ready (vm:5847)
>
> At this point parent volume contains all the data in top volume and we can 
> pivot
> to the parent volume.
>
> 4. Vdsm detect that the merge is ready, and start the clean thread
> that will complete the merge
>
> 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
>
> 5. Requesting pivot to parent volume:
>
> 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> complete active layer commit (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
>
> 6. Pivot was successful
>
> 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> has completed (vm:5838)
>
> 7. Vdsm wait until libvirt updates the xml:
>
> 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)
>
> 8. Syncronizing vdsm metadata
>
> 2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
> imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
> imgUUID='d7bd480d-2c51-4141-a386-113abf75219e',
> volUUID='6197b30d-0732-4cc7-aef0-12f9f6e9565b',
> newChain=['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']) from=internal,
> task_id=b8f605bd-8549-4983-8fc5-f2ebbe6c4666 (api:48)
>
> We can see the new chain:
> ['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']
>
> 2020-07-13 11:19:07,005+0200 INFO  (merge/720410c3) [storage.Image]
> Current chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)  (image:1221)
>
> The old chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 2020-07-13 11:19:07,006+0200 INFO  (merge/720410c3) [storage.Image]
> Unlinking subchain: ['6197b30d-0732-4cc7-aef0-12f9f6e9565b']
> (image:1231)
> 2020-07-13 11:19:07,017+0200 INFO  (merge/720410c3) [storage.Image]
> Leaf volume 6197b30d-0732-4cc7-aef0-12f9f6e9565b is being removed from
> the chain. Marking it ILLEGAL to prevent data corruption (image:1239)
>
> This matches what we see on storage.
>
> 9. Merge job is untracked
>
> 2020-07-13 11:19:21,134+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Cleanup thread
> 
> successfully completed, untracking job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> (base=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8,
> top=6197b30d-0732-4cc7-aef0-12f9f6e9565b) (vm:5752)
>
> This was a successful 

[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Benny Zlotnik
looks like https://bugzilla.redhat.com/show_bug.cgi?id=1785939

On Mon, Jun 15, 2020 at 2:37 PM Yedidyah Bar David  wrote:
>
> On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
>  wrote:
> >
> > Hi,
> >
> > I tried to send the log to you by email, but it fails. So I have sent them 
> > to Google Drive. Please go to the link below to get them:
> >
> > https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
> > https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
>
> I did get them, but not engine logs. Can you please attach them as well? 
> Thanks.
>
> vdsm.log.61 has:
>
> 2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
> (vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
> 1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in merge
> bandwidth, flags)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, 
> in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
> line 94, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in 
> blockCommit
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
> dom=self)
> libvirt.libvirtError: internal error: qemu block name
> 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
> "filename": 
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
> "driver": "qcow2", "file": {"driver": "file", "filename":
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
> doesn't match expected
> '/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'
>
> Adding Eyal. Eyal, can you please have a look? Thanks.
>
> >
> > Best regards,
> >
> > Minnie Du--Presales & Business Development
> >
> > Mob  : +86-15244932162
> > Tel: +86-28-85530156
> > Skype :minnie...@vinchin.com
> > Email: minnie...@vinchin.com
> > Website: www.vinchin.com
> >
> > F5, Building 8, National Information Security Industry Park, No.333 YunHua 
> > Road, Hi-Tech Zone, Chengdu, China
> >
> >
> > From: Yedidyah Bar David
> > Date: 2020-06-15 15:42
> > To: minnie.du
> > CC: users
> > Subject: Re: [ovirt-users] Problem with oVirt 4.4
> > On Mon, Jun 15, 2020 at 10:39 AM  wrote:
> > >
> > > We have met a problem when testing oVirt 4.4.
> > >
> > > Our VM is on NFS storage. When testing the snapshot function of oVirt 
> > > 4.4, we created snapshot 1 and then snapshot 2, but after clicking the 
> > > delete button of snapshot 1, snapshot 1 failed to be deleted and the 
> > > state of corresponding disk became illegal. Removing the snapshot in this 
> > > state requires a lot of risky work in the background, leading to the 
> > > inability to free up snapshot space. Long-term backups will cause the 
> > > target VM to create a large number of unrecoverable snapshots, thus 
> > > taking up a large amount of production storage. So we need your help.
> >
> > Can you please share relevant parts of engine and vdsm logs? Perhaps
> > open a bug and attach all of them, just in case.
> >
> > Thanks!
> > --
> > Didi
> >
> >
>
>
>
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4SBKJTS4OSWVZB2UYEZEOM7TV2AWPXB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HYVFRUWNYE2NFRZAYSIL2WQN72TYROT3/


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-08 Thread Benny Zlotnik
Yes, that's because cinderlib uses KRBD, so it has fewer features; I
should add this to the documentation.
I was told cinderlib has plans to add support for rbd-nbd, which would
eventually allow use of newer features.

On Mon, Jun 8, 2020 at 9:40 PM Mathias Schwenke
 wrote:
>
> > It looks like a configuration issue, you can use plain `rbd` to check 
> > connectivity.
> Yes, it was a configuration error. I fixed it.
> Also, I had to adapt different rbd feature sets between ovirt nodes and ceph 
> images. Now it seems to work.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/72OOSCUSTZAGYIDTEDIINDO47EBL2GLM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2JHFAZNGY3OM2EIAMISABNOVBRGUDS4H/


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-07 Thread Benny Zlotnik
yes, it looks like a configuration issue, you can use plain `rbd` to
check connectivity.
regarding starting vms and live migration, are there bug reports for these?
there is an issue we're aware of with live migration[1], it can be
worked around by blacklisting rbd devices in the multipath.conf

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1755801
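
If it helps, the workaround looks roughly like this (the conf.d path and the
devnode regex are from memory, so treat it as a sketch and adjust to your
setup); putting it in a drop-in file keeps vdsm's own multipath.conf untouched:

$ cat > /etc/multipath/conf.d/rbd.conf << 'EOF'
blacklist {
    devnode "^rbd[0-9]*"
}
EOF
$ multipathd reconfigure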


On Thu, Jun 4, 2020 at 11:49 PM Mathias Schwenke
 wrote:
>
> Thanks for your reply.
> Yes, I have some issues. In some cases starting or migrating a virtual 
> machine failed.
>
> At the moment it seems that I have a misconfiguration of my ceph connection:
> 2020-06-04 22:44:07,685+02 ERROR 
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedThreadFactory-engine-Thread-2771) [6e1b74c4] cinderlib execution 
> failed: Traceback (most recent call last):
>   File "./cinderlib-client.py", line 179, in main
> args.command(args)
>   File "./cinderlib-client.py", line 232, in connect_volume
> backend = load_backend(args)
>   File "./cinderlib-client.py", line 210, in load_backend
> return cl.Backend(**json.loads(args.driver))
>   File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 88, in 
> __init__
> self.driver.check_for_setup_error()
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 295, in check_for_setup_error
> with RADOSClient(self):
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 177, in __init__
> self.cluster, self.ioctx = driver._connect_to_rados(pool)
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 353, in _connect_to_rados
> return _do_conn(pool, remote, timeout)
>   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 818, in 
> _wrapper
> return r.call(f, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
> raise attempt.get()
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
> six.reraise(self.value[0], self.value[1], self.value[2])
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
> attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 351, in _do_conn
> raise exception.VolumeBackendAPIException(data=msg)
> VolumeBackendAPIException: Bad or unexpected response from the storage volume 
> backend API: Error connecting to ceph cluster.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I4BMALG7MPMPS3JJU23OCQUMOCSO2D27/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5YZPGW7IAUZMTNWY5FP5KOEWAGVBPVFE/


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-04 Thread Benny Zlotnik
I've successfully used Rocky with 4.3 in the past; the main caveat
with 4.3 currently is that cinderlib has to be pinned to 0.9.0 (pip
install cinderlib==0.9.0).
Let me know if you have any issues.

Hopefully during 4.4 we will have the repositories with the RPMs and
installation will be much easier


On Thu, Jun 4, 2020 at 10:00 PM Mathias Schwenke
 wrote:
>
> At 
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>  ist described the cinderlib integration into oVirt:
> Installation:
> - install centos-release-openstack-pike on engine and all hosts
> - install openstack-cinder and python-pip on engine
> - pip install cinderlib on engine
> - install python2-os-brick on all hosts
> - install ceph-common on engine and on all hosts
>
> Which software versions do you use on CentOS 7 with oVirt 4.3.10?
> The package centos-release-openstack-pike, as described at the 
> above-mentioned Managed Block Storage feature page, doesn't exist anymore in 
> the CentOS repositories, so I have to switch to 
> centos-release-openstack-queens or newer (rocky, stein, train). So I get (for 
> using with ceph luminous 12):
> - openstack-cinder 12.0.10
> - cinderlib 1.0.1
> - ceph-common 12.2.11
> - python2-os-brick 2.3.9
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5BRKSYAHJBLI65G6JEDZIWSQ72OCF3S/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FELJ2X2N74Q3SM2ZC3MV4ERWZWUM5ZUO/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-06-01 Thread Benny Zlotnik
Sorry for the late reply, but you may have hit this bug [1]; I forgot about it.
The bug happens when you live migrate a VM in post-copy mode: vdsm
stops monitoring the VM's jobs.
The root cause is an issue in libvirt, so it depends on which libvirt
version you have

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1774230

On Fri, May 29, 2020 at 3:54 PM David Sekne  wrote:
>
> Hello,
>
> I tried the live migration as well and it didn't help (it failed).
>
> The VM disks were in an illegal state, so I ended up restoring the VM from 
> backup (it was the least complex solution for my case).
>
> Thank you both for the help.
>
> Regards,
>
> On Thu, May 28, 2020 at 5:01 PM Strahil Nikolov  wrote:
>>
>> I used to have a similar issue, and when I live migrated (from one host to
>> another) it automatically completed.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 27 май 2020 г. 17:39:36 GMT+03:00, Benny Zlotnik  
>> написа:
>> >Sorry, by overloaded I meant in terms of I/O, because this is an
>> >active layer merge, the active layer
>> >(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
>> >(a78c7505-a949-43f3-b3d0-9d17bdb41af5), before the VM switches to use
>> >it as the active layer. So if there is constantly additional data
>> >written to the current active layer, vdsm may have trouble finishing
>> >the synchronization
>> >
>> >
>> >On Wed, May 27, 2020 at 4:55 PM David Sekne 
>> >wrote:
>> >>
>> >> Hello,
>> >>
>> >> Yes, no problem. XML is attached (I ommited the hostname and IP).
>> >>
>> >> Server is quite big (8 CPU / 32 Gb RAM / 1 Tb disk) yet not
>> >overloaded. We have multiple servers with the same specs with no
>> >issues.
>> >>
>> >> Regards,
>> >>
>> >> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik 
>> >wrote:
>> >>>
>> >>> Can you share the VM's xml?
>> >>> Can be obtained with `virsh -r dumpxml `
>> >>> Is the VM overloaded? I suspect it has trouble converging
>> >>>
>> >>> taskcleaner only cleans up the database, I don't think it will help
>> >here
>> >>>
>> >___
>> >Users mailing list -- users@ovirt.org
>> >To unsubscribe send an email to users-le...@ovirt.org
>> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >oVirt Code of Conduct:
>> >https://www.ovirt.org/community/about/community-guidelines/
>> >List Archives:
>> >https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX4QZDIKXH7ETWPDNI3SKZ535WHBXE2V/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UQWZXFW622OIZLB27AHULO52CWYTVL2S/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Sorry, by overloaded I meant in terms of I/O. Because this is an
active layer merge, the active layer
(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
(a78c7505-a949-43f3-b3d0-9d17bdb41af5) before the VM switches to use
the base as the active layer. So if additional data is constantly being
written to the current active layer, vdsm may have trouble finishing
the synchronization.


On Wed, May 27, 2020 at 4:55 PM David Sekne  wrote:
>
> Hello,
>
> Yes, no problem. XML is attached (I omitted the hostname and IP).
>
> Server is quite big (8 CPU / 32 Gb RAM / 1 Tb disk) yet not overloaded. We 
> have multiple servers with the same specs with no issues.
>
> Regards,
>
> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik  wrote:
>>
>> Can you share the VM's xml?
>> Can be obtained with `virsh -r dumpxml `
>> Is the VM overloaded? I suspect it has trouble converging
>>
>> taskcleaner only cleans up the database, I don't think it will help here
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX4QZDIKXH7ETWPDNI3SKZ535WHBXE2V/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Can you share the VM's xml?
Can be obtained with `virsh -r dumpxml `
Is the VM overloaded? I suspect it has trouble converging

taskcleaner only cleans up the database, I don't think it will help here
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LCPJ2C2MW76MKVFBC4QAMRPSRRQQDC3U/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
You can't see it because it is not a task; tasks only run on the SPM. It
is a VM job, and the data about it is stored in the VM's XML; it's also
stored in the vm_jobs table.
You can see the status of the job in libvirt with `virsh blockjob
 sda --info` (if it's still running)
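
If you want to check it on the engine side as well, the vm_jobs table can be
queried directly, for example:

$ psql -U engine -d engine -c "SELECT * FROM vm_jobs;"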




On Wed, May 27, 2020 at 2:03 PM David Sekne  wrote:
>
> Hello,
>
> Thank you for the reply.
>
> Unfortunately I can't see the task on any of the hosts:
>
> vdsm-client Task getInfo taskID=f694590a-1577-4dce-bf0c-3a8d74adf341
> vdsm-client: Command Task.getInfo with args {'taskID': 
> 'f694590a-1577-4dce-bf0c-3a8d74adf341'} failed:
> (code=401, message=Task id unknown: 
> (u'f694590a-1577-4dce-bf0c-3a8d74adf341',))
>
> I can see it starting in the VDSM log on the host running the VM:
>
> /var/log/vdsm/vdsm.log.2:2020-05-26 12:15:09,349+0200 INFO  (jsonrpc/6) 
> [virt.vm] (vmId='e113ff18-5687-4e03-8a27-b12c82ad6d6b') Starting merge with 
> jobUUID=u'f694590a-1577-4dce-bf0c-3a8d74adf341', original 
> chain=a78c7505-a949-43f3-b3d0-9d17bdb41af5 < 
> aabf3788-8e47-4f8b-84ad-a7eb311659fa (top), disk='sda', base='sda[1]', 
> top=None, bandwidth=0, flags=12 (vm:5945)
>
> Also running vdsm-client Host getAllTasks I don't see any running tasks (on 
> any host).
>
> Am I missing something?
>
> Regards,
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBTD3HLXPK7F7MBJCQEQV6E2KA3H7FZK/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4HOFIS26PTTT56HNOUCG4MTOFFFAXSK/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Live merge (snapshot removal) runs on the host where the VM is
running; you can look for the job id
(f694590a-1577-4dce-bf0c-3a8d74adf341) on the relevant host.
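
For example, something like this on the host running the VM (using the
standard vdsm log location):

$ grep f694590a-1577-4dce-bf0c-3a8d74adf341 /var/log/vdsm/vdsm.log*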

On Wed, May 27, 2020 at 9:02 AM David Sekne  wrote:
>
> Hello,
>
> I'm running oVirt version 4.3.9.4-1.el7.
>
> After a failed live storage migration a VM got stuck with snapshot. Checking 
> the engine logs I can see that the snapshot removal task is waiting for Merge 
> to complete and vice versa.
>
> 2020-05-26 18:34:04,826+02 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '60ce36c1-bf74-40a9-9fb0-7fcf7eb95f40') waiting on child command id: 
> 'f7d1de7b-9e87-47ba-9ba0-ee04301ba3b1' type:'Merge' to complete
> 2020-05-26 18:34:04,827+02 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Waiting on merge command to complete 
> (jobId = f694590a-1577-4dce-bf0c-3a8d74adf341)
> 2020-05-26 18:34:04,845+02 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Command 'RemoveSnapshot' (id: 
> '47c9a847-5b4b-4256-9264-a760acde8275') waiting on child command id: 
> '60ce36c1-bf74-40a9-9fb0-7fcf7eb95f40' type:'RemoveSnapshotSingleDiskLive' to 
> complete
> 2020-05-26 18:34:14,277+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmJobsMonitoring] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] VM Job 
> [f694590a-1577-4dce-bf0c-3a8d74adf341]: In progress (no change)
>
> I cannot see any running tasks on the SPM (vdsm-client Host getAllTasksInfo). 
> I also cannot find the task ID in any of the other node's logs.
>
> I already tried restarting the Engine (didn't help).
>
> To start I'm puzzled as to where this task is queueing?
>
> Any Ideas on how I could resolve this?
>
> Thank you.
> Regards,
> David
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJBI3SMVXTPSGGJ66P55MU2ERN3HBCTH/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZILERZCGSPOGPOSPM3GHVURC5CVVBVZU/


[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
> 1. The engine didn't clean it up itself - after all, no matter the reason, 
> the operation has failed?
I can't really answer without looking at the logs; the engine should clean up
in case of a failure, but there can be numerous reasons for cleanup to
fail (connectivity issues, a bug, etc.).
> 2. Why did the query fail to see the disk, but I managed to unlock it?
Could be a bug, but it would need some way to reproduce.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TV5LJU6URKS2D5FZ5BOFVYV2EAJRBJGN/


[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
anything in the logs (engine,vdsm)?
if there's nothing on the storage, removing from the database should
be safe, but it's best to check why it failed

On Mon, Apr 20, 2020 at 5:39 PM Strahil Nikolov  wrote:
>
> Hello All,
>
> did anyone observe the following behaviour:
>
> 1. Create a new disk from the VM -> disks UI tab
> 2. Disk creation fails, but the disk stays in a locked state
> 3. Gluster storage has no directory with that uuid
> 4. /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh doesn't find 
> anything:
> [root@engine ~]# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t 
> all
>
> Locked VMs
>
>
>
> Locked templates
>
>
>
> Locked disks
>
>
>
> Locked snapshots
>
>
>
> Illegal images
>
>
> Should I just delete the entry from the DB, or do I have another option?
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6E4RJM7I3BT33CU3CAB74C2Q4QNBS5BW/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/THYLVMX65VAZ2YTA5GL2SR2LKHF2KRJC/


[ovirt-users] Re: does SPM still exist?

2020-03-24 Thread Benny Zlotnik
It hasn't disappeared; there has been work done to move operations
that used to run only on the SPM to run on regular hosts as well
(copy/move disk).
Currently the main operations performed by the SPM are
create/delete/extend volume, and more [1].


[1] 
https://github.com/oVirt/ovirt-engine/tree/master/backend/manager/modules/vdsbroker/src/main/java/org/ovirt/engine/core/vdsbroker/irsbroker






On Tue, Mar 24, 2020 at 11:14 AM yam yam  wrote:
>
> Hello,
>
> I heard some say SPM disappeared as of 3.6.
> Nevertheless, SPM still exists in the oVirt admin portal and even in RHV's manual.
> So I am wondering whether SPM still exists now.
>
> Also, how could I get more detailed information about oVirt internals?
> Is reviewing the code the best way?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNZ4KGZTWHFSUNDDVVPBMYK3U7Y3QZPF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGB6C4OLF4SH3PJCR5F4TEAHN4LGHSPL/


[ovirt-users] Re: oVirt behavior with thin provision/deduplicated block storage

2020-02-24 Thread Benny Zlotnik
We use the stats API in the engine, currently only to check whether the
backend is accessible; we have plans to use it for monitoring and
validations, but that is not implemented yet.

On Mon, Feb 24, 2020 at 3:35 PM Nir Soffer  wrote:
>
> On Mon, Feb 24, 2020 at 3:03 PM Gorka Eguileor  wrote:
> >
> > On 22/02, Nir Soffer wrote:
> > > On Sat, Feb 22, 2020, 13:02 Alan G  wrote:
> > > >
> > > > I'm not really concerned about the reporting aspect, I can look in the 
> > > > storage vendor UI to see that. My concern is: will oVirt stop 
> > > > provisioning storage in the domain because it *thinks* the domain is 
> > > > full. De-dup is currently running at about 2.5:1 so I'm concerned that 
> > > > oVirt will think the domain is full way before it actually is.
> > > >
> > > > Not clear if this is handled natively in oVirt or by the underlying lvs?
> > >
> > > Because oVirt does not know about deduplication or actual allocation
> > > on the storage side,
> > > it will let you allocate up the size of the LUNs that you added to the
> > > storage domain, minus
> > > the size oVirt uses for its own metadata.
> > >
> > > oVirt uses about 5G for its own metadata on the first LUN in a storage
> > > domain. The rest of
> > > the space can be used by user disks. Disks are LVM logical volumes
> > > created in the VG created
> > > from the LUN.
> > >
> > > If you create a storage domain with 4T LUN, you will be able to
> > > allocate about 4091G on this
> > > storage domain. If you use preallocated disks, oVirt will stop when
> > > you allocated all the space
> > > in the VG. Actually it will stop earlier based on the minimal amount
> > > of free space configured for
> > > the storage domain when creating the storage domain.
> > >
> > > If you use thin disks, oVirt will allocate only 1G per disk (by
> > > default), so you can allocate
> > > more storage than you actually have, but when VMs will write to the
> > > disk, oVirt will extend
> > > the disks. Once you use all the available space in this VG, you will
> > > not be able to allocate
> > > more without extending the storage domain with new LUN, or resizing
> > > the  LUN on storage.
> > >
> > > If you use Managed Block Storage (cinderlib) every disk is a LUN with
> > > the exact size you
> > > ask when you create the disk. The actual allocation of this LUN
> > > depends on your storage.
> > >
> > > Nir
> > >
> >
> > Hi,
> >
> > I don't know anything about the oVirt's implementation, so I'm just
> > going to provide some information from cinderlib's point of view.
> >
> > Cinderlib was developed as a dumb library to abstract access to storage
> > backends, so all the "smart" functionality is pushed to the user of the
> > library, in this case oVirt.
> >
> > In practice this means that cinderlib will NOT limit the number of LUNs
> > or over-provisioning done in the backend.
> >
> > Cinderlib doesn't care if we are over-provisioning because we have dedup
> > and decompression or because we are using thin volumes where we don't
> > consume all the allocated space, it doesn't even care if we cannot do
> > over-provisioning because we are using thick volumes.  If it gets a
> > request to create a volume, it will try to do so.
> >
> > From oVirt's perspective this is dangerous if not controlled, because we
> > could end up consuming all free space in the backend and then running
> > VMs will crash (I think) when they could no longer write to disks.
> >
> > oVirt can query the stats of the backend [1] to see how much free space
> > is available (free_capacity_gb) at any given time in order to provide
> > over-provisioning limits to its users.  I don't know if oVirt is already
> > doing that or something similar.
> >
> > If is important to know that stats gathering is an expensive operation
> > for most drivers, and that's why we can request cached stats (cache is
> > lost as the process exits) to help users not overuse it.  It probably
> > shouldn't be gathered more than once a minute.
> >
> > I hope this helps.  I'll be happy to answer any cinderlib questions. :-)
>
> Thanks Gorka, good to know we already have API to get backend
> allocation info. Hopefully we will use this in future version.
>
> Nir
>
> >
> > Cheers,
> > Gorka.
> >
> > [1]: https://docs.openstack.org/cinderlib/latest/topics/backends.html#stats
> >
> > > >  On Fri, 21 Feb 2020 21:35:06 + Nir Soffer  
> > > > wrote 
> > > >
> > > >
> > > >
> > > > On Fri, Feb 21, 2020, 17:14 Alan G  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I have an oVirt cluster with a storage domain hosted on a FC storage 
> > > > array that utilises block de-duplication technology. oVirt reports the 
> > > > capacity of the domain as though the de-duplication factor was 1:1, 
> > > > which of course is not the case. So what I would like to understand is 
> > > > the likely behavior of oVirt when the used space approaches the 
> > > > reported capacity. Particularly around the critical action space 
> > > > blocker.
> > > >
> > > >

[ovirt-users] Re: iSCSI Domain Addition Fails

2020-02-23 Thread Benny Zlotnik
anything in the vdsm or engine logs?

On Sun, Feb 23, 2020 at 4:23 PM Robert Webb  wrote:
>
> Also, I did do the “Login” to connect to the target without issue, from what 
> I can tell.
>
>
>
> From: Robert Webb
> Sent: Sunday, February 23, 2020 9:06 AM
> To: users@ovirt.org
> Subject: iSCSI Domain Addition Fails
>
>
>
> So I am messing around with FreeNAS and iSCSI. FreeNAS has a target 
> configured and it is discoverable in oVirt, but when I click “OK” nothing 
> happens.
>
>
>
> I have a name for the domain defined and have expanded the advanced features, 
> but cannot find anything showing an error.
>
>
>
> oVirt 4.3.8
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMAXFDMNHVGTMJUGU5FK26K6PNBAW3FP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KSLKO6ZP55ZSFCSXRONAPVCEOMZTE24M/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-05 Thread Benny Zlotnik
The vdsm logs are not the correct ones.
I assume this is the failure:
2020-02-04 22:04:53,631+05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-9)
[1e9f5492-095c-48ed-9aa0-1a899eedeab7] Command 'MergeVDSCommand(HostName =
iondelsvr72.iontrading.com,
MergeVDSCommandParameters:{hostId='22502af7-f157-40dc-bd5c-6611951be729',
vmId='4957c5d4-ca5e-4db7-8c78-ae8f4b694646',
storagePoolId='c5e0f32e-0131-11ea-a48f-00163e0fe800',
storageDomainId='70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419',
imageGroupId='737b5628-e9fe-42ec-9bce-38db80981107',
imageId='31c5e807-91f1-4f73-8a60-f97a83c6f471',
baseImageId='e4160ffe-2734-4305-8bf9-a7217f3049b6',
topImageId='31c5e807-91f1-4f73-8a60-f97a83c6f471', bandwidth='0'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
MergeVDS, error = Drive image file could not be found, code = 13

please find the vdsm logs containing flow_id
1e9f5492-095c-48ed-9aa0-1a899eedeab7 and provide output for `vdsm-tool
dump-volume-chains 70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419` so we can see the
status of the chain on vdsm
As well as `virsh -r dumpxml ind-co-ora-ee-02` (assuming ind-co-ora-ee-02
is the VM with the issue)

Changing the snapshot status with unlock_entity will likely work only if
the chain is fine on the storage
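Collected in one place, the checks above would look roughly like this; the UUIDs and VM name are the ones from the log snippet, and the unlock_entity.sh line is only a sketch (check its -h output before running it):

# On a host that sees the storage domain:
vdsm-tool dump-volume-chains 70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419

# On the host currently running the VM:
virsh -r dumpxml ind-co-ora-ee-02

# On the engine machine, only after confirming the chain on storage is fine:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot <snapshot-id>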



On Tue, Feb 4, 2020 at 7:40 PM Crazy Ayansh 
wrote:

> please find the attached the logs.
>
> On Tue, Feb 4, 2020 at 10:23 PM Benny Zlotnik  wrote:
>
>> back to my question then, can you check what made the snapshot illegal?
>> and attach the vdsm and engine logs from the occurrence so we can assess
>> the damage
>>
>> also run `dump-volume-chains ` where the image resides so we can
>> see what's the status of the image on vdsm
>>
>> On Tue, Feb 4, 2020 at 6:46 PM Crazy Ayansh 
>> wrote:
>>
>>> Hi,
>>>
>>> Yes, the VM is running, but I am scared that if I shut down the VM it will
>>> not come back. I have also upgraded the engine from 4.3.6.6 to 4.3.8, but the
>>> issue still persists. I am also unable to take a snapshot of the same VM, as
>>> the new snapshot is failing. Please help.
>>>
>>> Thanks
>>> Shashank
>>>
>>>
>>>
>>> On Tue, Feb 4, 2020 at 8:54 PM Benny Zlotnik 
>>> wrote:
>>>
>>>> Is the VM running? Can you remove it when the VM is down?
>>>> Can you find the reason for illegal status in the logs?
>>>>
>>>> On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh <
>>>> shashank123rast...@gmail.com> wrote:
>>>>
>>>>> Hey Guys,
>>>>>
>>>>> Any help on it ?
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh <
>>>>> shashank123rast...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>   Hi Team,
>>>>>>
>>>>>> I am trying to delete an old snapshot of a virtual machine and getting
>>>>>> the below error:
>>>>>>
>>>>>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
>>>>>> índ-co-ora-ee-02'
>>>>>>
>>>>>>
>>>>>>
>>>>>> [image: image.png]
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>> ___
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>>>>>
>>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YF3J3K66N5HORUYZP3HZEJWOU64IDNAS/


[ovirt-users] Re: Recover VM if engine down

2020-02-04 Thread Benny Zlotnik
you need to go to the "import vm" tab on the storage domain and import them

On Tue, Feb 4, 2020 at 7:30 PM matteo fedeli  wrote:
>
> it does automatically when I attach or should I execute particular operations?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TVC674C7RF3JZXCOW4SRJL5OQRBE5RZD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L4O4YN5RDOQEGBGD4DEHXFY7R72WGQYB/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
back to my question then, can you check what made the snapshot illegal? and
attach the vdsm and engine logs from the occurrence so we can assess the
damage

also run `dump-volume-chains ` where the image resides so we can see
what's the status of the image on vdsm

On Tue, Feb 4, 2020 at 6:46 PM Crazy Ayansh 
wrote:

> Hi,
>
> Yes, the VM is running, but I am scared that if I shut down the VM it will
> not come back. I have also upgraded the engine from 4.3.6.6 to 4.3.8, but the
> issue still persists. I am also unable to take a snapshot of the same VM, as
> the new snapshot is failing. Please help.
>
> Thanks
> Shashank
>
>
>
> On Tue, Feb 4, 2020 at 8:54 PM Benny Zlotnik  wrote:
>
>> Is the VM running? Can you remove it when the VM is down?
>> Can you find the reason for illegal status in the logs?
>>
>> On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh 
>> wrote:
>>
>>> Hey Guys,
>>>
>>> Any help on it ?
>>>
>>> Thanks
>>>
>>> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh <
>>> shashank123rast...@gmail.com> wrote:
>>>
>>>>
>>>>   Hi Team,
>>>>
>>>> I am trying to delete an old snapshot of a virtual machine and getting
>>>> the below error:
>>>>
>>>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
>>>> índ-co-ora-ee-02'
>>>>
>>>>
>>>>
>>>> [image: image.png]
>>>>
>>>> Thanks
>>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/34YWQHVGTXSZZR6DKGE477AS7GDRHJ2Y/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
Is the VM running? Can you remove it when the VM is down?
Can you find the reason for illegal status in the logs?

On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh 
wrote:

> Hey Guys,
>
> Any help on it ?
>
> Thanks
>
> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh 
> wrote:
>
>>
>>   Hi Team,
>>
>> I am trying to delete an old snapshot of a virtual machine and getting
>> the below error:
>>
>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
>> índ-co-ora-ee-02'
>>
>>
>>
>> [image: image.png]
>>
>> Thanks
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V4IWIYIHGD3FEQ52Z4P5KHDDA424MIWK/


[ovirt-users] Re: Recover VM if engine down

2020-02-03 Thread Benny Zlotnik
you can attach the storage domain to another engine and import it

On Mon, Feb 3, 2020 at 11:45 PM matteo fedeli  wrote:
>
> Hi, It's possibile recover a VM if the engine is damaged? the vm is on a data 
> storage domain.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JVSJPYVBTQOQGGKT4HNETW453ZUPDL2R/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSUAEXSX3WP5XGI32NMD2RBOSA2ZWM6C/


[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread Benny Zlotnik
Did you change the volume metadata to LEGAL on the storage as well?
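On a file-based (NFS) domain the volume metadata lives in a .meta file next to the volume, so a rough way to check it, using the IDs from the quoted log below, would be (paths are assumptions based on that log):

# On a host that mounts the NFS domain (file-based storage):
cd /rhev/data-center/mnt/*/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6

# LEGALITY should read LEGAL; an ILLEGAL value here is what prepareImage rejects
grep LEGALITY f8066c56-6db1-4605-8d7c-0739335d30b8.meta

# Only after verifying the chain (qemu-img check / vdsm-tool dump-volume-chains):
# sed -i 's/^LEGALITY=ILLEGAL/LEGALITY=LEGAL/' f8066c56-6db1-4605-8d7c-0739335d30b8.meta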


On Thu, Jan 9, 2020 at 2:19 PM David Johnson 
wrote:

> We had a drive in our NAS fail, but afterwards one of our VM's will not
> start.
>
> The boot drive on the VM is (so near as I can tell) the only drive
> affected.
>
> I confirmed that the disk images (active and snapshot) are both valid with
> qemu.
>
> I followed the instructions at
> https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
> identify the snapshot images that were marked "invalid" and marked them as
> valid.
>
> update images set imagestatus=1 where imagestatus=4;
>
>
>
> Log excerpt from attempt to start VM:
> 2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
> prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
> imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
> leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
> from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
> 2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
> task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
> 2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
> in prepareImage
> raise se.prepareIllegalVolumeError(volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
> 2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> aborting: Task is aborted: "Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)" - code 227 (task:1181)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
> FINISH prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
> (vm:949)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
> _run
> self._devices = self._make_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
> _make_devices
> disk_objs = self._perform_host_local_adjustment()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
> _perform_host_local_adjustment
> self._preparePathsForDrives(disk_params)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
> _preparePathsForDrives
> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
> prepareVolumePath
> raise vm.VolumeError(drive)
> VolumeError: Bad volume specification {'address': {'bus': '0',
> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
> 'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
> '/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
> 'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
> 2020-01-09 02:18:44,934-0600 INFO  (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') Changed state to Down: Bad
> volume specification {'address': {'bus': '0', 'controller': '0', 'type':
> 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',

[ovirt-users] Re: what is "use host" field in storage domain creation??

2019-12-30 Thread Benny Zlotnik
One host has to connect and set up the storage (mount the path, create
the files, etc.), so you are given the choice of which host to use for this.

On Mon, Dec 30, 2019 at 11:07 AM  wrote:
>
> hello and happy new year~
>
> I am wondering the role of "use host" field in storage domain creation.
>
> https://www.ovirt.org/documentation/install-guide/chap-Configuring_Storage.html
>
> The above link says all communication to the storage domain is through the "use host".
> But I can't understand why everything would inefficiently pass through that "use host" even
> though every host can directly access all domains.
>
> it seems like i am misunderstanding something.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KKZHTE3CIV6VIZAN7762GCVHEG3VS2J6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGE4LI4NZVBOMWUOJW6JMQYKI5HRB54J/


[ovirt-users] Re: VM Import Fails

2019-12-23 Thread Benny Zlotnik
Please attach engine and vdsm logs and specify the versions

On Mon, Dec 23, 2019 at 10:08 AM Vijay Sachdeva
 wrote:
>
> Hi All,
>
>
>
> I am trying to import a VM from export domain, but import fails.
>
>
>
> Setup:
>
>
>
> Source DC has a NFS shared storage with two Hosts
> Destination DC has a local storage configured using LVM.
>
>
>
> Note: Used common export domain to export the VM.
>
>
>
>
>
> Anyone, please help me on this case to understand why it’s failing.
>
>
>
> Thanks
>
> Vijay Sachdeva
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A4NTGBIKTYGJVGNAYOFYRBTXFMV3GRQU/


[ovirt-users] Re: Current status of Ceph support in oVirt (2019)?

2019-12-03 Thread Benny Zlotnik
> We are using Ceph with oVirt (via standalone Cinder) extensively in a
> production environment.
> I tested oVirt cinderlib integration in our dev environment, gave some
> feedback here on the list and am currently waiting for the future
> development. IMHO cinderlib in oVirt is currently not fit for production
> use, I think this matches your assessment.

> What can we do to help advance Ceph integration in oVirt?
> What are the plans for oVirt 4.4?
> Will standalone Cinder storage domains still be supported in oVirt 4.4?
> Will there be a migration scenario from standalone Cinder to cinderlib?
TBH, to help, bug reports are probably the most useful. Having
feedback from users with "real world" setups
and usage will help us improve. As stated before, our biggest issue at
the moment is packaging; once it's handled
it will be significantly easier to test and develop.
Standalone cinder domains were never actually supported (they never left
tech preview). We do not have immediate plans
for an upgrade path, but you can definitely submit an RFE for this

> Accidentally just yesterday I had an incident in our test environment
> where migration of a VM with MANAGED_BLOCK_STORAGE (=cinderlib) disks
> failed (causes are known and unrelated to cinderlib). Restarting the VM
> failed because of leftover rbdmapped devices. This is similar to the
> case I reported in https://bugzilla.redhat.com/show_bug.cgi?id=1697496.
> I don't clearly see if this fixed or not. Shall I report my recent problem?
We added better cleanup on migration failure to handle the bug.
There is another issue, which was not known at the time, with multipath
preventing rbd devices from being unmapped [1], and it might be what you
experienced.
This can be fixed manually by blacklisting rbd devices in multipath
conf, but once the bug is fixed vdsm will handle configuring it.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1755801
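Until that fix lands, the manual workaround is to blacklist rbd devices in the multipath configuration, along these lines (a sketch; it assumes your multipath build reads /etc/multipath/conf.d/, otherwise add the same stanza to /etc/multipath.conf):

cat > /etc/multipath/conf.d/rbd.conf <<'EOF'
blacklist {
    devnode "^rbd[0-9]*"
}
EOF
systemctl reload multipathd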
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6RIHZCN3HT4Z3YUOFSF7A2SJHKQDHPYP/


[ovirt-users] Re: oVirt Admin Portal unaccessible via chrome (firefox works)

2019-11-24 Thread Benny Zlotnik
Works fine for me, anything interesting in the browser console?

On Sat, Nov 23, 2019 at 7:04 PM Strahil Nikolov  wrote:
>
> Hello Community,
>
> I have a constantly loading chrome on my openSuSE 15.1 (and my android 
> phone), while firefox has no issues .
> Can someone test accessing the oVirt Admin portal via chrome on x86_64 Linux ?
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S6ET7C74PFOCKIFXPXB4PQDA6LHMDEC4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L7T4R6HYQ4IZQUZUKND3RX4QN4I2HDAD/


[ovirt-users] Re: Current status of Ceph support in oVirt (2019)?

2019-11-24 Thread Benny Zlotnik
The current plan to integrate ceph is via cinderlib integration[1]
(currently in tech preview mode) because we still have no packaging
ready, there are some manual installation steps required, but there is
no need to install and configure openstack/cinder


>1. Does this require you to install OpenStack, or will a vanilla Ceph 
>installation work?
a vanilla installation will work with cinderlib

>2. Is it possible to deploy Ceph on the same nodes that run oVirt? (i.e. is a 
>3-node oVirt + Ceph cluster possible?)
I haven't tried it, but should be possible

>3. Is there any monitoring/management of Ceph from within oVirt? (Guessing no?)
No, cinderlib is storage agnostic

>4. Are all the normal VM features working yet, or is this planned?
Most features (starting/stopping/snapshots/live migration)  are
working, but not all are fully tested (specifically snapshots)

>5. Is making Ceph a first-class citizen (like Gluster) on oVirt on the roadmap?
Not at the moment, maybe once cinderlib integration matures and we
have more feedback and users for the feature

[1] 
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
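As a concrete illustration, the driver options entered when adding a Managed Block Storage domain for a plain Ceph cluster look roughly like the following key/value pairs (all values here are examples, not taken from this thread):

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=ovirt
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_keyring_conf=/etc/ceph/ceph.client.ovirt.keyring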


On Sun, Nov 24, 2019 at 12:41 PM  wrote:
>
> Hi,
>
> I currently have a 3-node HA cluster running Proxmox (with integrated Ceph). 
> oVirt looks pretty neat, however, and I'm excited to check it out.
>
> One of the things I love about Proxmox is the integrated Ceph support.
>
> I saw on the mailing lists that there is some talk of Ceph support earlier, 
> but it was via OpenStack/Cinder. What exactly does this mean?
>
> 1. Does this require you to install OpenStack, or will a vanilla Ceph 
> installation work?
> 2. Is it possible to deploy Ceph on the same nodes that run oVirt? (i.e. is a 
> 3-node oVirt + Ceph cluster possible?)
> 3. Is there any monitoring/management of Ceph from within oVirt? (Guessing 
> no?)
> 4. Are all the normal VM features working yet, or is this planned?
> 5. Is making Ceph a first-class citizen (like Gluster) on oVirt on the 
> roadmap?
>
> Thanks,
> Victor
>
> https://www.reddit.com/r/ovirt/comments/ci38zp/ceph_rbd_support_in_ovirt_for_storing_vm_disks/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QMPXXQQOMKEQJIJVXRUYKTSHQBRZPBQ6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QR5FBGAEHOSRV7AEL5HUFSR4JZW2P3I6/


[ovirt-users] Re: Low disk space on Storage

2019-11-12 Thread Benny Zlotnik
This was fixed in 4.3.6, I suggest upgrading

On Tue, Nov 12, 2019 at 12:45 PM  wrote:
>
> Hi,
>
> I'm running ovirt Version:4.3.4.3-1.el7
> My filesystem disk has 30 GB free space.
> Cannot start a VM due to an I/O storage error.
> When trying to move the disk to another storage domain I get this error:
> Error while executing action: Cannot move Virtual Disk. Low disk space on 
> Storage Domain DATA4.
>
> The sum of the pre-allocated disks equals the total size of the storage domain.
>
> Any idea what can I do to move a disk to other storage domain?
>
> Many thanks
>
> --
> 
> Jose Ferradeira
> http://www.logicworks.pt
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQFPGUPB43I7OO7FXEPLG4XSG5X2INLJ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q4TYQ3CBVTHITX7PSVHM6QBYIEGZKT6E/


[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-07 Thread Benny Zlotnik
We support it as part of the cinderlib integration (Managed Block
Storage); each rbd device is represented as a single oVirt disk when
used.
The integration is still in tech preview and still has a long way to
go, but any early feedback is highly appreciated


On Mon, Oct 7, 2019 at 2:20 PM Strahil  wrote:
>
> Hi Dan,
>
> As CEPH support is quite new, we need  DEV clarification.
>
> Hi Sandro,
>
> Who can help to clarify if Ovirt supports direct RBD LUNs presented on the 
> VMs?
> Are there any limitations in the current solution ?
>
> Best Regards,
> Strahil NikolovOn Oct 7, 2019 13:54, Dan Poltawski  
> wrote:
> >
> > On Mon, 2019-10-07 at 01:56 +0300, Strahil Nikolov wrote:
> > > I'm not very sure that you are supposed to use the CEPH by giving
> > > each VM direct access.
> > >
> > > Have you considered using an iSCSI gateway as an entry point for your
> > > storage domain ? This way oVirt will have no issues dealing with the
> > > rbd locks.
> > >
> > > Of course, oVirt might be able to deal with RBD locks , but that can
> > > be confirmed/denied by the devs.
> >
> > Thanks for your response - regarding the locks point, I realised later
> > that this was my own incorrect permissions given to the client. The
> > ceph client was detecting the broken locks when mounting the rbd device
> > and unable to blacklist it. I addressed this by swithcign to the
> > permissions 'profile rbd'.
> >
> > Regarding iSCSI, we are using this for the hosted engine. However, I am
> > attracted to the idea of managing block devices with individual rbd
> > devices to facilate individual block device level snapshotting and I
> > assume performance will be better.
> >
> > thanks,
> >
> > Dan
> >
> > 
> >
> > The Networking People (TNP) Limited. Registered office: Network House, 
> > Caton Rd, Lancaster, LA1 3PE. Registered in England & Wales with company 
> > number: 07667393
> >
> > This email and any files transmitted with it are confidential and intended 
> > solely for the use of the individual or entity to whom they are addressed. 
> > If you have received this email in error please notify the system manager. 
> > This message contains confidential information and is intended only for the 
> > individual named. If you are not the named addressee you should not 
> > disseminate, distribute or copy this e-mail. Please notify the sender 
> > immediately by e-mail if you have received this e-mail by mistake and 
> > delete this e-mail from your system. If you are not the intended recipient 
> > you are notified that disclosing, copying, distributing or taking any 
> > action in reliance on the contents of this information is strictly 
> > prohibited.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DLM6FRTGVDG232PQFHUA3IDOS5PT6WQ2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXXQ2KIW23Q62MYZYSXE4POVYS3JXX72/


[ovirt-users] Re: Cannot enable maintenance mode

2019-10-02 Thread Benny Zlotnik
Did you try the "Confirm Host has been rebooted" button?

On Wed, Oct 2, 2019 at 9:17 PM Bruno Martins  wrote:
>
> Hello guys,
>
> No ideas for this issue?
>
> Thanks for your cooperation!
>
> Kind regards,
>
> -Original Message-
> From: Bruno Martins 
> Sent: 29 de setembro de 2019 16:16
> To: users@ovirt.org
> Subject: [ovirt-users] Cannot enable maintenance mode
>
> Hello guys,
>
> I am unable to put a host from a two-node cluster into maintenance
> mode in order to remove it from the cluster afterwards.
>
> This is what I see in engine.log:
>
> 2019-09-27 16:20:58,364 INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-6-thread-45) [4cc251c9] Correlation ID: 4cc251c9, Job 
> ID: 65731fbb-db34-49a9-ab56-9fba59bc0ee0, Call Stack: null, Custom Event ID: 
> -1, Message: Host CentOS-H1 cannot change into maintenance mode - not all Vms 
> have been migrated successfully. Consider manual intervention: 
> stopping/migrating Vms: Non interactive user (User: admin).
>
> The host has been rebooted multiple times. vdsClient shows no VMs running.
>
> What else can I do?
>
> Kind regards,
>
> Bruno Martins
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
> https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X5FJWFW7GXNW6YWRPFWOKA6VU3RH4WD3/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DD5DW6KKOOHGL3WFEKIIIS57BN3VWMAQ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3PVYL73WUWMOR25R7IREIAPROPHPKF7Y/


[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Benny Zlotnik
This might be a bug, can you share the full vdsm and engine logs?


On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski  wrote:
>
> On ovirt 4.3.5 we are seeing various problems related to the rbd device 
> staying mapped after a guest has been live migrated. This causes problems 
> migrating the guest back, as well as rebooting the guest when it starts back 
> up on the original host. The error returned is ‘nrbd: unmap failed: (16) 
> Device or resource busy’. I’ve pasted the full vdsm log below.
>
>
>
> As far as I can tell this isn’t happening 100% of the time, and seems to be 
> more prevalent on busy guests.
>
>
>
> (Not sure if I should create a bug for this, so thought I’d start here first)
>
>
>
> Thanks,
>
>
>
> Dan
>
>
>
>
>
> Sep 24 19:26:18 mario vdsm[5485]: ERROR FINISH detach_volume error=Managed 
> Volume Helper failed.: ('Error executing helper: Command 
> [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\'] failed with rc=1 
> out=\'\' err=\'oslo.privsep.daemon: Running privsep helper: [\\\'sudo\\\', 
> \\\'privsep-helper\\\', \\\'--privsep_context\\\', 
> \\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\', 
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new 
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon 
> starting\\noslo.privsep.daemon: privsep process running with uid/gid: 
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities 
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon: 
> privsep daemon running as pid 76076\\nTraceback (most recent call last):\\n  
> File "/usr/libexec/vdsm/managedvolume-helper", line 154, in \\n
> sys.exit(main(sys.argv[1:]))\\n  File 
> "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 
> 149, in detach\\nignore_errors=False)\\n  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 121, in 
> disconnect_volume\\nrun_as_root=True)\\n  File 
> "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in 
> _execute\\nresult = self.__execute(*args, **kwargs)\\n  File 
> "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 169, 
> in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File 
> "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 241, in 
> _wrap\\nreturn self.channel.remote_call(name, args, kwargs)\\n  File 
> "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 203, in 
> remote_call\\nraise 
> exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError: 
> Unexpected error while running command.\\nCommand: rbd unmap 
> /dev/rbd/rbd/volume-0e8c1056-45d6-4740-934d-eb07a9f73160 --conf 
> /tmp/brickrbd_LCKezP --id ovirt --mon_host 172.16.10.13:3300 --mon_host 
> 172.16.10.14:3300 --mon_host 172.16.10.12:6789\\nExit code: 16\\nStdout: 
> u\\\'\\\'\\nStderr: u\\\'rbd: sysfs write failednrbd: unmap failed: (16) 
> Device or resource busyn\\\'\\n\'',)#012Traceback (most recent call 
> last):#012  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 
> 124, in method#012ret = func(*args, **kwargs)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1766, in 
> detach_volume#012return managedvolume.detach_volume(vol_id)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 67, in 
> wrapper#012return func(*args, **kwargs)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 135, 
> in detach_volume#012run_helper("detach", vol_info)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 179, 
> in run_helper#012sub_cmd, cmd_input=cmd_input)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in 
> __call__#012return callMethod()#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in 
> #012**kwargs)#012  File "", line 2, in 
> managedvolume_run_helper#012  File 
> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
> _callmethod#012raise convert_to_error(kind, 
> result)#012ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error 
> executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\', 
> \'detach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running 
> privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\', 
> \\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\', 
> \\\'--privsep_sock_path\\\', 
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new 
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon 
> starting\\noslo.privsep.daemon: privsep process running with uid/gid: 
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities 
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon: 
> privsep daemon running as 

[ovirt-users] Re: How to delete obsolete Data Centers with no hosts, but with domains inside

2019-09-25 Thread Benny Zlotnik
Generally, the idea is that without a host there is no way to tell
what the status of the storage domain actually is and it might be used
by some unknown host.
That said, feel free to submit an RFE[1] if you think this can be useful.

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine


On Wed, Sep 25, 2019 at 2:58 PM Claudio Soprano
 wrote:
>
> I'm glad that i solved the problem but i would like to know why in the GUI:
>
> 1) The Force Remove Data Center didn't work ? I can understand the Remove 
> Data Center is the standard way to do it and it checks all is working before 
> doing it, but if it doesn't work the Force Remove would remove it without 
> checks (Storage Domains in maintenance ? Host not configured ? SPM not 
> working ?) maybe it can show a message that it is not recommended but then it 
> would work always and remove always a Data Center
>
> 2) Is not there an option to Force Maintenance a Storage Domain without a SPM 
> host active or no Host configured ? Again an option that alert the 
> administrator about the possibile issues
>
> Or
>
> 3) Is not there an option under Administration Panel that would permit to 
> Force the Maintenace of a Storage Domain (it is just a simple change of a 
> value of a field of a table in a Database).
>
> Repeat i know how it would work, but then having a way to fix it if someone 
> has not followed the standard way to do it, would be easy (it is just a value 
> in a database that blocks removing the Data Center).
>
> Just my opinions
>
> Claudio
>
> On 24/09/19 17:23, Claudio Soprano wrote:
>
> Thanks, I solved it by putting all the Storage Domains in maintenance, manually
> changing the status to the value of 6 in the postgres database as you
> suggested.
>
> After that i could remove the data center from the ovirt management interface.
>
> Thanks again for your help
>
> Claudio
>
> Il 24/09/19 13:50, Benny Zlotnik ha scritto:
>
> ah yes, it's generally a good idea to move them to maintenance in the case 
> you describe
>
> you can probably change the status manually in the database, the table is
> storage_pool_iso_map, and the status code for maintenance is 6
>
> On Tuesday, September 24, 2019, Claudio Soprano  
> wrote:
> > Yes, i tried it and i got
> >
> > "Error while executing action: Cannot remove Data Center which contains 
> > Storage Domains that are not in Maintenance status.
> > -Please deactivate all domains and wait for tasks to finish before removing 
> > the Data Center."
> >
> > But the domains are only attachable or activable, so i don't know what to 
> > do.
> >
> > Claudio
> >
> > Il 24/09/19 12:19, Benny Zlotnik ha scritto:
> >>
> >> Did you try to force remove the DC?
> >> You have the option in the UI
> >>
> >> On Tue, Sep 24, 2019 at 1:07 PM Claudio Soprano
> >>  wrote:
> >>>
> >>> Hi to all,
> >>>
> >>> We are using ovirt to manage 6 Data Centers, 3 of them are old Data
> >>> Centers with no hosts inside, but with domains, storage and VMs not 
> >>> running.
> >>>
> >>> We left them because we wanted to have some backups in case of failure
> >>> of the new Data Centers created.
> >>>
> >>> Time pass and now we would like to remove these Data Centers, but we got
> >>> no way for now to remove them.
> >>>
> >>> If we try to remove the Storage Domains (using remove o destroy) we get
> >>>
> >>> "Error while executing action: Cannot destroy the master Storage Domain
> >>> from the Data Center without another active Storage Domain to take its
> >>> place.
> >>> -Either activate another Storage Domain in the Data Center, or remove
> >>> the Data Center.
> >>> -If you have problems with the master Data Domain, consider following
> >>> the recovery process described in the documentation, or contact your
> >>> system administrator."
> >>>
> >>> if we try to remove the Data Center directly we get
> >>>
> >>> "Error while executing action: Cannot remove Data Center. There is no
> >>> active Host in the Data Center."
> >>>
> >>> How can we solve the problem ?
> >>>
> >>> It can be done via ovirt-shell or using some script or via ovirt
> >>> management interface ?
> >>>
> >>> Thanks in advance
> >>>
> >>> Claudio
> >>> ___
>

[ovirt-users] Re: How to delete obsolete Data Centers with no hosts, but with domains inside

2019-09-24 Thread Benny Zlotnik
ah yes, it's generally a good idea to move them to maintenance in the case
you describe

you can probably change the status manually in the database, the table is
storage_pool_iso_map, and the status code for maintenance is 6
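For illustration, the manual change would look roughly like this on the engine database (a sketch; take an engine backup first and double-check the column names with \d storage_pool_iso_map):

# On the engine machine; 6 = Maintenance
sudo -u postgres psql engine -c \
  "UPDATE storage_pool_iso_map SET status = 6 WHERE storage_id = '<storage-domain-uuid>';"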

On Tuesday, September 24, 2019, Claudio Soprano 
wrote:
> Yes, i tried it and i got
>
> "Error while executing action: Cannot remove Data Center which contains
Storage Domains that are not in Maintenance status.
> -Please deactivate all domains and wait for tasks to finish before
removing the Data Center."
>
> But the domains are only attachable or activable, so i don't know what to
do.
>
> Claudio
>
> Il 24/09/19 12:19, Benny Zlotnik ha scritto:
>>
>> Did you try to force remove the DC?
>> You have the option in the UI
>>
>> On Tue, Sep 24, 2019 at 1:07 PM Claudio Soprano
>>  wrote:
>>>
>>> Hi to all,
>>>
>>> We are using ovirt to manage 6 Data Centers, 3 of them are old Data
>>> Centers with no hosts inside, but with domains, storage and VMs not
running.
>>>
>>> We left them because we wanted to have some backups in case of failure
>>> of the new Data Centers created.
>>>
>>> Time pass and now we would like to remove these Data Centers, but we got
>>> no way for now to remove them.
>>>
>>> If we try to remove the Storage Domains (using remove o destroy) we get
>>>
>>> "Error while executing action: Cannot destroy the master Storage Domain
>>> from the Data Center without another active Storage Domain to take its
>>> place.
>>> -Either activate another Storage Domain in the Data Center, or remove
>>> the Data Center.
>>> -If you have problems with the master Data Domain, consider following
>>> the recovery process described in the documentation, or contact your
>>> system administrator."
>>>
>>> if we try to remove the Data Center directly we get
>>>
>>> "Error while executing action: Cannot remove Data Center. There is no
>>> active Host in the Data Center."
>>>
>>> How can we solve the problem ?
>>>
>>> It can be done via ovirt-shell or using some script or via ovirt
>>> management interface ?
>>>
>>> Thanks in advance
>>>
>>> Claudio
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SADE4JVXJYKZZQ7M3EPB4FY7LWLJPKFK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JHRWX6WI5C6T3SAPHNHHNMUPMP2YLURT/


[ovirt-users] Re: How to delete obsolete Data Centers with no hosts, but with domains inside

2019-09-24 Thread Benny Zlotnik
Did you try to force remove the DC?
You have the option in the UI

On Tue, Sep 24, 2019 at 1:07 PM Claudio Soprano
 wrote:
>
> Hi to all,
>
> We are using ovirt to manage 6 Data Centers, 3 of them are old Data
> Centers with no hosts inside, but with domains, storage and VMs not running.
>
> We left them because we wanted to have some backups in case of failure
> of the new Data Centers created.
>
> Time pass and now we would like to remove these Data Centers, but we got
> no way for now to remove them.
>
> If we try to remove the Storage Domains (using remove o destroy) we get
>
> "Error while executing action: Cannot destroy the master Storage Domain
> from the Data Center without another active Storage Domain to take its
> place.
> -Either activate another Storage Domain in the Data Center, or remove
> the Data Center.
> -If you have problems with the master Data Domain, consider following
> the recovery process described in the documentation, or contact your
> system administrator."
>
> if we try to remove the Data Center directly we get
>
> "Error while executing action: Cannot remove Data Center. There is no
> active Host in the Data Center."
>
> How can we solve the problem ?
>
> It can be done via ovirt-shell or using some script or via ovirt
> management interface ?
>
> Thanks in advance
>
> Claudio
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SADE4JVXJYKZZQ7M3EPB4FY7LWLJPKFK/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/324PJ47YGQXNO76WI62QWMWAZBMSEZTD/


[ovirt-users] Re: Disk locked after backup

2019-09-19 Thread Benny Zlotnik
it's probably[1]

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1749944

On Thu, Sep 19, 2019 at 12:03 PM Fabio Cesar Hansen 
wrote:

> Hi.
>
> I am using the
> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py
>  script to backup my vms.
>
> The problem is that always after the backup finishes, the snapshot is
> locked.
>
>
>
> Has anyone ever experienced this?
>
>
>
> Node version:
>
> OS Version: RHEL - 7 - 6.1810.2.el7.centos
>
> OS Description: oVirt Node 4.3.5.2
>
> Kernel Version: 3.10.0 - 957.27.2.el7.x86_64
>
> KVM Version: 2.12.0 - 18.el7_6.7.1
>
> LIBVIRT Version: libvirt-4.5.0-10.el7_6.12
>
> VDSM Version: vdsm-4.30.24-1.el7
>
>
>
> Engine version: 4.3.5.5-1.el7
>
>
>
>
>
> Thanks
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZMPBHJG6SJYJJL4DXKYH7VBILDK3V3PL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSONOHVT2NIFXCCFNFRDB2KDLGXF263B/


[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-09-15 Thread Benny Zlotnik
>* Would ovirt have been able to deal with clearing the rbd locks, or
did I miss a trick somewhere to resolve this situation with manually
going through each device and clering the lock?

Unfortunately there is no trick on ovirt's side

>* Might it be possible for ovirt to detect when the rbd images are
locked for writing and prevent launching?

Since rbd paths are provided via cinderlib, a higher level interface, ovirt
does not have knowledge of implementation details like this
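For completeness, the manual cleanup on the Ceph side looks roughly like this (a sketch; the pool/volume names are examples, and the lock id and locker strings come from the `rbd lock list` output):

# List locks on the affected volume
rbd lock list rbd/volume-<disk-id>

# Remove the stale lock using the lock id and locker shown by the list command
rbd lock remove rbd/volume-<disk-id> 'auto <lock-id>' client.<locker-id>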

On Thu, Sep 12, 2019 at 11:27 PM Dan Poltawski 
wrote:

> Yesterday we had a catastrophic hardware failure with one of our nodes
> using ceph and the experimental cinderlib integration.
>
> Unfortunately the oVirt cluster did not recover the situation well, and it took
> some manual intervention to resolve. I thought I'd share what happened
> and how we resolved it in case there is any best practice to share/bugs
> which are worth creating to help others in similar situaiton. We are
> early in our use of ovirt, so its quite possible we have things
> incorreclty configured.
>
> Our setup: We have two nodes, hosted engine on iSCSI, about 40vms all
> using managed block storage mounting the rbd volumes directly. I hadn't
> configured power management (perhaps this is the fundamental problem).
>
> Yesterday a hardware fault caused one of the nodes to crash and stay
> down awaiting user input in POST screens, taking 20 vms with it.
>
> The hosted engine was fortunately on the 'good' node  and detected that
> the node had become unresponsive, but noted 'Host cannot be fenced
> automatically because power management for the host is disabled.'.
>
> At this point, knowing that one node was dead, I wanted to bring up the
> failed vms on the good node. However, the vms were appearing in an
> unknown state and I couldn't do any operations on them. It wasn't clear
> to me what the best course of action to do there would be. I am not
> sure if there is a way to mark the node as failed?
>
> In my urgency to try and resolve the situation I managed to get the
> failed node started back up, and shortly after it came up the
> engine detected that all the vms were down, I put the failed host into
> maintaince mode and tried to start the failed vms.
>
> Unfortunately the failed vms did not start up cleanly - it turned out
> that they still had rbd locks preventing writing from the failed node.
>
> To finally gets the vms to start I then manually went through every
> vm's managed block, found the id and found the lock and removed it:
> rbd lock list rbd/volume-{id}
> rbd lock remove rbd/volume-{id} 'auto {lockid}' {lockername}
>
> Some overall thoughts I had:
> * I'm not sure what the best course of action is to notify the engine
> about a catastrophic hardware failure? If power management was
> configured, I suppose it would've removed the power and marked them all
> down?
>
> * Would ovirt have been able to deal with clearing the rbd locks, or
> did I miss a trick somewhere to resolve this situation with manually
> going through each device and clering the lock?
>
> * Might it be possible for ovirt to detect when the rbd images are
> locked for writing and prevent launching?
>
> regards,
>
> Dan
>
> 
>
> The Networking People (TNP) Limited. Registered office: Network House,
> Caton Rd, Lancaster, LA1 3PE. Registered in England & Wales with company
> number: 07667393
>
> This email and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> If you have received this email in error please notify the system manager.
> This message contains confidential information and is intended only for the
> individual named. If you are not the named addressee you should not
> disseminate, distribute or copy this e-mail. Please notify the sender
> immediately by e-mail if you have received this e-mail by mistake and
> delete this e-mail from your system. If you are not the intended recipient
> you are notified that disclosing, copying, distributing or taking any
> action in reliance on the contents of this information is strictly
> prohibited.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZGGIT2KKBWCPXNB5JEQEA3KQP5ZBNXR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FJCBWWTWYTHHID3KYL67KEK63H6F2HWT/


[ovirt-users] Re: How to change Master Data Storage Domain

2019-09-11 Thread Benny Zlotnik
You don't need to remove it; the VM data will be available on the OVF_STORE
disks on the other SD, where you copied the disks to.
Once you put the domain in maintenance, a new master SD will be elected.

On Wed, Sep 11, 2019 at 12:13 PM Mark Steele  wrote:

> Good morning,
>
> I have a Storage Domain that I would like to retire and move that role to
> a new storage domain. Both Domains exist on my current Data Center and I
> have moved all disks from the existing Data (Master) domain to the new Data
> Domain.
>
> The only thing that is still associated with the Data (Master) domain are
> two OVF_STORE items:
>
Alias      Virtual Size  Actual Size  Allocation Policy  Storage Domain  Storage Type  Creation Date       Attached To  Alignment  Status  Description
OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00  -            Unknown    OK      OVF_STORE
OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00  -            Unknown    OK      OVF_STORE
>
> What is the procedure for moving / removing these items and 'promoting'
> the other Data Domain to Master?
>
> Our current version is:
>
> oVirt Engine Version: 3.5.0.1-1.el6 (it's old but reliable)
>
> Best regards,
>
> ***
> *Mark Steele*
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook:
> https://www.facebook.com/telvue
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDIZ3S7DDBZ5D4QUNOG5D7L3QTJI7YFU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XB7JNHS62JMEYZRPZQNX5YGQLPKABE76/


[ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

2019-08-08 Thread Benny Zlotnik
this means vdsm lost connectivity to the storage, but it also looks like it
recovered eventually
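If it recurs, a quick way to reproduce what the domain monitor does is to time a direct read of the domain metadata from the host (the path is taken from the log below; a result well above a second points at storage or network latency):

time dd if=/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata \
        of=/dev/null bs=4096 count=1 iflag=direct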

On Thu, Aug 8, 2019 at 12:26 PM Vrgotic, Marko 
wrote:

> Another one that seem to be related:
>
>
>
> 2019-08-07 14:43:59,069-0700 ERROR (check/loop) [storage.Monitor] Error
> checking path 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata
> (monitor:499)
>
> Traceback (most recent call last):
>
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 497, in _pathChecked
>
> delay = result.delay()
>
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/check.py", line 391,
> in delay
>
> raise exception.MiscFileReadException(self.path, self.rc, self.err)
>
> MiscFileReadException: Internal file read failure:
> (u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata',
> 1, 'Read timeout')
>
> 2019-08-07 14:43:59,116-0700 WARN  (monitor/6effda5) [storage.Monitor]
> Host id for domain 6effda5e-1a0d-4312-bf93-d97fa9eb5aee was released (id:
> 1) (monitor:445)
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Wednesday, 7 August 2019 at 09:50
> *To: *"users@ovirt.org" 
> *Subject: *Re: oVirt 4.3.5 potential issue with NFS storage
>
>
>
> Log line form VDSM:
>
>
>
> “[root@ovirt-sj-05 ~]# tail -f /var/log/vdsm/vdsm.log | grep WARN
>
> 2019-08-07 09:40:03,556-0700 WARN  (check/loop) [storage.check] Checker
> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
> is blocked for 20.00 seconds (check:282)
>
> 2019-08-07 09:40:47,132-0700 WARN  (monitor/bda9727) [storage.Monitor]
> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
> 5) (monitor:445)
>
> 2019-08-07 09:44:53,564-0700 WARN  (check/loop) [storage.check] Checker
> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
> is blocked for 20.00 seconds (check:282)
>
> 2019-08-07 09:46:38,604-0700 WARN  (monitor/bda9727) [storage.Monitor]
> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
> 5) (monitor:445)”
>
>
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Wednesday, 7 August 2019 at 09:09
> *To: *"users@ovirt.org" 
> *Subject: *oVirt 4.3.5 potential issue with NFS storage
>
>
>
> Dear oVIrt,
>
>
>
> This is my third oVirt platform in the company, but it is the first time I am seeing
> the following logs:
>
>
>
> “2019-08-07 16:00:16,099Z INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed
> to object
> 'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]',
> sharedLocks=''}'
>
> 2019-08-07 16:00:25,618Z WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37723) [] domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem
> 'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
>
> 2019-08-07 16:00:40,630Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37735) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
> problem. vds: 'ovirt-sj-05.ictv.com'
>
> 2019-08-07 16:00:40,652Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
> problem. vds: 'ovirt-sj-01.ictv.com'
>
> 2019-08-07 16:00:40,652Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from
> problem. No active host in the DC is reporting it as problematic, so
> clearing the domain recovery timer.”
>
>
>
> Can you help me understanding why is this being reported?
>
>
>
> This setup is:
>
>
>
> 5HOSTS, 3 in HA
>
> SelfHostedEngine
>
> Version 4.3.5
>
> NFS based Netapp storage, version 4.1
>
> “10.210.13.64:/ovirt_hosted_engine on 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine
> type nfs4
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>
>
>
> 10.210.13.64:/ovirt_production on 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__production
> type nfs4
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>
> tmpfs on /run/user/0 type tmpfs
> (rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”
>
>
>
> First mount is SHE dedicated storage.
>
> Second mount “ovirt_produciton” is for other VM Guests.
>
>
>
> Kindly awaiting your reply.
>
>
>
> Marko Vrgotic

[ovirt-users] Re: SPM and Task error ...

2019-07-26 Thread Benny Zlotnik
taskcleaner.sh only clears tasks from the engine database.
Did you check your engine logs to see if this task is running?
It is a task that is executed during a snapshot merge (removal of a
snapshot); do you have any running snapshot removals?
If not, you can stop and clear the task using vdsm-client.
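
For reference, on the SPM host that would look roughly like this (task ID
taken from your log; check the status first and only stop it if nothing is
actually merging):

# vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
# vdsm-client Task stop taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
# vdsm-client Task clear taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e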

On Fri, Jul 26, 2019 at 1:31 PM Enrico  wrote:

>Dear all,
>
> I try this:
>
> # /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -v -t
> fdcf4d1b-82fe-49a6-b233-323ebe568f8e
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
>  t
>  This will remove the given Task!!!
> Caution, this operation should be used with care. Please contact support
> prior to running this command
> Are you sure you want to proceed? [y/n]
> y
> SELECT Deleteasync_tasks('fdcf4d1b-82fe-49a6-b233-323ebe568f8e');
>  0
>
> # /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -v -R
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
>  t
>  This will remove all async_tasks table content!!!
> Caution, this operation should be used with care. Please contact support
> prior to running this command
> Are you sure you want to proceed? [y/n]
> y
> TRUNCATE TABLE async_tasks cascade;
> TRUNCATE TABLE
>
> but after these commands I see the same messages inside engine.log:
>
> 2019-07-26 12:25:19,727+02 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Task id
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period time
> and should be polled. Pre-polling period is 6 millis.
> 2019-07-26 12:25:19,727+02 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Task id
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period time
> and should be polled. Pre-polling period is 6 millis.
> 2019-07-26 12:25:19,779+02 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] Task id
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period time
> and should be polled. Pre-polling period is 6 millis.
> 2019-07-26 12:25:19,779+02 ERROR
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) []
> BaseAsyncTask::logEndTaskFailure: Task
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
> Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with
> failure:
> 2019-07-26 12:25:19,779+02 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) []
> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
> 2019-07-26 12:25:19,780+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] START,
> SPMClearTaskVDSCommand(
> SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
> ignoreFailoverLimit='false',
> taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 753de6fe
> 2019-07-26 12:25:19,781+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] START,
> HSMClearTaskVDSCommand(HostName = infn-vm05.management,
> HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
> taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: e5dc020
> 2019-07-26 12:25:19,786+02 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) []
> SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
> 'TaskStateError' and message was 'Operation is not allowed in this task
> state: ("can't clean in state running",)'. Task will not be cleaned
> 2019-07-26 12:25:19,786+02 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) []
> BaseAsyncTask::onTaskEndSuccess: Task
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
> Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
> successfully.
> 2019-07-26 12:25:19,786+02 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) []
> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
> 2019-07-26 12:25:19,787+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] START,
> SPMClearTaskVDSCommand(
> SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
> ignoreFailoverLimit='false',
> 

[ovirt-users] Re: SPM and Task error ...

2019-07-25 Thread Benny Zlotnik
Can you grep the vdsm logs to see if it is actually running?

You can use vdsm-client Task stop taskID=... and then vdsm-client Task clear
taskID=...
but if the task is actually running this can leave the system in an undesired
state, so be sure to check first.
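
Something like this on the SPM host should show whether the task is doing any
real work (task ID taken from your engine log):

# grep fdcf4d1b-82fe-49a6-b233-323ebe568f8e /var/log/vdsm/vdsm.log | tail -n 20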

On Thu, Jul 25, 2019 at 5:58 PM Enrico  wrote:

> Il 25/07/19 16:45, Benny Zlotnik ha scritto:
>
> Do you have vdsm logs?
>
> I'M not sure because this task is very old
>
> Is this task still running?
>
> I made this :
>
> # vdsm-client Task  getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
> {
> "message": "running job 1 of 1",
> "code": 0,
> "taskID": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e",
> "taskResult": "",
> "taskState": "running"
> }
>  are there any other tools to check it ?
>
> Thanks
> Enrico
>
>
> On Thu, Jul 25, 2019 at 5:00 PM Enrico 
> wrote:
>
>>  Hi all,
>> my ovirt cluster has got 3 Hypervisors runnig Centos 7.5.1804 vdsm is
>> 4.20.39.1-1.el7,
>> ovirt engine is 4.2.4.5-1.el7, the storage systems are HP MSA P2000 and
>> 2050 (fibre channel).
>>
>> I need to stop one of the hypervisors for maintenance but this system is
>> the storage pool manager.
>>
>> For this reason I decided to manually activate SPM in one of the other
>> nodes but this operation is not
>> successful.
>>
>> In the ovirt engine (engine.log) the error is this:
>>
>> 2019-07-25 12:39:16,744+02 INFO
>> [org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
>> ForceSelectSPMCommand internal: false. Entities affected :  ID:
>> 81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group MANIPULATE_HOST
>> with role type ADMIN
>> 2019-07-25 12:39:16,745+02 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
>> SpmStopOnIrsVDSCommand(
>> SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
>> ignoreFailoverLimit='false'}), log id: 37bf4639
>> 2019-07-25 12:39:16,747+02 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START, ResetIrsVDSCommand(
>> ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
>> ignoreFailoverLimit='false', vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
>> ignoreStopFailed='false'}), log id: 2522686f
>> 2019-07-25 12:39:16,749+02 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
>> SpmStopVDSCommand(HostName = infn-vm05.management,
>> SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
>> storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
>> 2019-07-25 12:39:16,758+02 *ERROR*
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
>> stopping SPM on vds 'infn-vm05.management', pool id
>> '18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks 'Task
>> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
>> 2019-07-25 12:39:16,758+02 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH, SpmStopVDSCommand,
>> log id: 1810fd8b
>> 2019-07-25 12:39:16,758+02 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH, ResetIrsVDSCommand,
>> log id: 2522686f
>> 2019-07-25 12:39:16,758+02 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
>> SpmStopOnIrsVDSCommand, log id: 37bf4639
>> 2019-07-25 12:39:16,760+02 *ERROR*
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
>> USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
>> infn-vm07.management as the SPM due to a failure to stop the current SPM.
>>
>> while in the hypervisor (SPM) vdsm.log:
>>
>> 2019-07-25 12:39:16,744+02 INFO
>> [org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
>> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
>> ForceSelectSPMCommand internal: false. Ent

[ovirt-users] Re: SPM and Task error ...

2019-07-25 Thread Benny Zlotnik
Do you have vdsm logs? Is this task still running?

On Thu, Jul 25, 2019 at 5:00 PM Enrico  wrote:

>  Hi all,
> my ovirt cluster has got 3 Hypervisors runnig Centos 7.5.1804 vdsm is
> 4.20.39.1-1.el7,
> ovirt engine is 4.2.4.5-1.el7, the storage systems are HP MSA P2000 and
> 2050 (fibre channel).
>
> I need to stop one of the hypervisors for maintenance but this system is
> the storage pool manager.
>
> For this reason I decided to manually activate SPM in one of the other
> nodes but this operation is not
> successful.
>
> In the ovirt engine (engine.log) the error is this:
>
> 2019-07-25 12:39:16,744+02 INFO
> [org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
> ForceSelectSPMCommand internal: false. Entities affected :  ID:
> 81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group MANIPULATE_HOST
> with role type ADMIN
> 2019-07-25 12:39:16,745+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
> SpmStopOnIrsVDSCommand(
> SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
> ignoreFailoverLimit='false'}), log id: 37bf4639
> 2019-07-25 12:39:16,747+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START, ResetIrsVDSCommand(
> ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
> ignoreFailoverLimit='false', vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
> ignoreStopFailed='false'}), log id: 2522686f
> 2019-07-25 12:39:16,749+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
> SpmStopVDSCommand(HostName = infn-vm05.management,
> SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
> storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
> 2019-07-25 12:39:16,758+02 *ERROR*
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
> stopping SPM on vds 'infn-vm05.management', pool id
> '18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks 'Task
> 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
> 2019-07-25 12:39:16,758+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH, SpmStopVDSCommand,
> log id: 1810fd8b
> 2019-07-25 12:39:16,758+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH, ResetIrsVDSCommand,
> log id: 2522686f
> 2019-07-25 12:39:16,758+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
> SpmStopOnIrsVDSCommand, log id: 37bf4639
> 2019-07-25 12:39:16,760+02 *ERROR*
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
> USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
> infn-vm07.management as the SPM due to a failure to stop the current SPM.
>
> while in the hypervisor (SPM) vdsm.log:
>
> 2019-07-25 12:39:16,744+02 INFO
> [org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
> ForceSelectSPMCommand internal: false. Entities affected :  ID:
> 81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group MANIPULATE_HOST
> with role type ADMIN
> 2019-07-25 12:39:16,745+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
> SpmStopOnIrsVDSCommand(
> SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
> ignoreFailoverLimit='false'}), log id: 37bf4639
> 2019-07-25 12:39:16,747+02 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START, ResetIrsVDSCommand(
> ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
> ignoreFailoverLimit='false', vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
> ignoreStopFailed='false'}), log id: 2522686f
> 2019-07-25 12:39:16,749+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
> SpmStopVDSCommand(HostName = infn-vm05.management,
> SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
> storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
> 2019-07-25 12:39:16,758+02 *ERROR*
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
> task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] 

[ovirt-users] Re: Storage domain 'Inactive' but still functional

2019-07-24 Thread Benny Zlotnik
We have seen something similar in the past and patches were posted to deal
with this issue, but it's still in progress[1]

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1553133
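
If you want to inspect the metadata yourself before attempting any restore,
something along these lines works (myVG is a placeholder for the storage
domain's VG name); the archived copies LVM keeps under /etc/lvm/archive are
what you would compare against:

# vgcfgbackup -f /tmp/myVG.current myVG
# vgcfgrestore --list myVG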

On Mon, Jul 22, 2019 at 8:07 PM Strahil  wrote:

> I have a theory... But after all without any proof it will remain theory.
>
> The storage volumes are just VGs over a shared storage.The SPM host is
> supposed to be the only one that is working with the LVM metadata, but I
> have observed that when someone is executing a simple LVM command  (for
> example -lvs, vgs or pvs ) while another one is going on on another host -
> your metadata can corrupt, due to lack of clvmd.
>
> As a protection, I could offer you to try the following solution:
> 1. Create new iSCSI lun
> 2. Share it to all nodes and create the storage domain. Set it to
> maintenance.
> 3. Start dlm & clvmd services on all hosts
> 4. Convert the VG of your shared storage domain to have a 'cluster'-ed
> flag:
> vgchange -c y mynewVG
> 5. Check the lvs of that VG.
> 6. Activate the storage domain.
>
> Of course  test it on a test cluster before inplementing it on Prod.
> This is one of the approaches used in Linux HA clusters in order to avoid
> LVM metadata corruption.
>
> Best Regards,
> Strahil Nikolov
> On Jul 22, 2019 15:46, Martijn Grendelman 
> wrote:
>
> Hi,
>
> Op 22-7-2019 om 14:30 schreef Strahil:
>
> If you can give directions (some kind of history) , the dev might try to
> reproduce this type of issue.
>
> If it is reproduceable - a fix can be provided.
>
> Based on my experience, if something as used as Linux LVM gets broken, the
> case is way hard to reproduce.
>
>
> Yes, I'd think so too, especially since this activity (online moving of
> disk images) is done all the time, mostly without problems. In this case,
> there was a lot of activity on all storage domains, because I'm moving all
> my storage (> 10TB in 185 disk images) to a new storage platform. During
> the online move of one the images, the metadata checksum became corrupted
> and the storage domain went offline.
>
> Of course, I could dig up the engine logs and vdsm logs of when it
> happened, but that would be some work and I'm not very confident that the
> actual cause would be in there.
>
> If any oVirt devs are interested in the logs, I'll provide them, but
> otherwise I think I'll just see it as an incident and move on.
>
> Best regards,
> Martijn.
>
>
>
>
> On Jul 22, 2019 10:17, Martijn Grendelman 
>  wrote:
>
> Hi,
>
> Thanks for the tips! I didn't know about 'pvmove', thanks.
>
> In  the mean time, I managed to get it fixed by restoring the VG metadata
> on the iSCSI server, so on the underlying Zvol directly, rather than via
> the iSCSI session on the oVirt host. That allowed me to perform the restore
> without bringing all VMs down, which was important to me, because if I had
> to shut down VMs, I was sure I wouldn't be able to restart them before the
> storage domain was back online.
>
> Of course this is a more a Linux problem than an oVirt problem, but oVirt
> did cause it ;-)
>
> Thanks,
> Martijn.
>
>
>
> Op 19-7-2019 om 19:06 schreef Strahil Nikolov:
>
> Hi Martin,
>
> First check what went wrong with the VG -as it could be something simple.
> vgcfgbackup -f VGname will create a file which you can use to compare
> current metadata with a previous version.
>
> If you have Linux boxes - you can add disks from another storage and then
> pvmove the data inside the VM. Of course , you will need to reinstall grub
> on the new OS disk , or you won't be able to boot afterwards.
> If possible, try with a test VM before proceeding with important ones.
>
> Backing up the VMs is very important , because working on LVM metada
>


[ovirt-users] Re: Cinderlib managed block storage, ceph jewel

2019-07-22 Thread Benny Zlotnik
> I hat do copy them to all the hosts to start a virtual machine with
attached cinderlib ceph block device.
That is strange; you shouldn't need to do this, as cinderlib passes them to
the hosts itself.
Do you have cinderlib.log to look at?
(/var/log/ovirt-engine/cinderlib/cinderlib.log)
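
If it is there, grepping it for errors is usually enough to tell whether the
connection information was passed to the host, e.g.:

# grep -iE 'error|traceback' /var/log/ovirt-engine/cinderlib/cinderlib.log | tail -n 50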

On Mon, Jul 22, 2019 at 5:52 PM Mathias Schwenke <
mathias.schwe...@uni-dortmund.de> wrote:

> > Starting a VM should definitely work, I see in the error message:
> > "RBD image feature set mismatch. You can disable features unsupported by
> > the kernel with "rbd feature disable"
> > Adding "rbd default features = 3" to ceph.conf might help with that.
> Thanks! That helped.
>
>
> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
> means:
> > Also for Ceph backend, a keyring file and ceph.conf file is needed in
> the Engine.
> I hat do copy them to all the hosts to start a virtual machine with
> attached cinderlib ceph block device.


[ovirt-users] Re: LiveStoreageMigration failed

2019-07-18 Thread Benny Zlotnik
v03'}],
> 'protocol': 'gluster'}, 'format': 'cow', u'poolID':
> u'0001-0001-0001-0001-0311', u'device': 'disk', 'protocol':
> 'gluster', 'propagateErrors': 'off', u'diskType': u'network', 'cache':
> 'none', u'vol
> umeID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', u'imageID':
> u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50', 'hosts': [{'port': '0',
> 'transport': 'tcp', 'name': '192.168.11.20'}], 'path':
>
> u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50
> /62656632-8984-4b7e-8be1-fd2547ca0f98', 'volumeChain': [{'domainID':
> u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path':
> u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98',
>
> 'volu
> meID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98.lease',
>
> 'imageID': u'd2964ff9-10f7-
> 4b92-8327-d68f3cfd5b50'}, {'domainID':
> u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path':
> u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a',
>
> 'volumeID': u'43dbb053-c5fe-45bf-94
> 64-acf77546b96a', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a.lease',
>
> 'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}]} (vm
> :4710)
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704,
> in diskReplicateStart
>  self._startDriveReplication(drive)
>File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843,
> in _startDriveReplication
>  self._dom.blockCopy(drive.name, destxml, flags=flags)
>File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 98, in f
>  ret = attr(*args, **kwargs)
>File
> "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
> line 130, in wrapper
>  ret = f(*args, **kwargs)
>File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
> 92, in wrapper
>  return func(inst, *args, **kwargs)
>File "/usr/lib64/python2.7/site-packages/libvirt.py", line 729, in
> blockCopy
>  if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
> dom=self)
> libvirtError: argument unsupported: non-file destination not supported yet
>
> 2019-07-18 09:29:09,796+0200 INFO  (jsonrpc/2) [api.virt] FINISH
> diskReplicateStart return={'status': {'message': 'Drive replication
> error', 'code': 55}} from=:::10.4.8.242,45784,
> flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f,
> vmId=3b79d0c0-47e9-47c3-8511-980a8cfe147c
>   (api:52)
>
>
> On 18.07.19 10:42, Benny Zlotnik wrote:
> > It should work, what is the engine and vdsm versions?
> > Can you add vdsm logs as well?
> >
> > On Thu, Jul 18, 2019 at 11:16 AM Christoph Köhler
> > mailto:koeh...@luis.uni-hannover.de>>
> wrote:
> >
> > Hello,
> >
> > I try to migrate a disk of a running vm from gluster 3.12.15 to
> gluster
> > 3.12.15 but it fails. libGfApi set to true by engine-config.
> >
> > ° taking a snapshot first, is working. Then at engine-log:
> >
> > 2019-07-18 09:29:13,932+02 ERROR
> >
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand]
> >
> > (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> > [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed in
> > 'VmReplicateDiskStartVDS' method
> >
> > 2019-07-18 09:29:13,936+02 ERROR
> >
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> > [c957e011-37e0-43aa-abe7-9bb633c38c5f] EVENT_ID:
> > VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovvirt07 command
> > VmReplicateDiskStartVDS failed: Drive replication error
> >
> > 2019-07-18 09:29:13,936+02 INFO
> >
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand]
> >
> > (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> > [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command
> >
>  'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand'
> >
> > return value 'StatusOnlyReturn [status=Status [code=55, message=Drive
> > replication error]]'
> >

[ovirt-users] Re: LiveStoreageMigration failed

2019-07-18 Thread Benny Zlotnik
It should work. What are the engine and vdsm versions?
Can you add vdsm logs as well?
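
A quick way to grab the versions, in case it helps:

$ rpm -q ovirt-engine   # on the engine machine
$ rpm -q vdsm   # on the host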

On Thu, Jul 18, 2019 at 11:16 AM Christoph Köhler <
koeh...@luis.uni-hannover.de> wrote:

> Hello,
>
> I try to migrate a disk of a running vm from gluster 3.12.15 to gluster
> 3.12.15 but it fails. libGfApi set to true by engine-config.
>
> ° taking a snapshot first, is working. Then at engine-log:
>
> 2019-07-18 09:29:13,932+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed in
> 'VmReplicateDiskStartVDS' method
>
> 2019-07-18 09:29:13,936+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovvirt07 command
> VmReplicateDiskStartVDS failed: Drive replication error
>
> 2019-07-18 09:29:13,936+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand'
> return value 'StatusOnlyReturn [status=Status [code=55, message=Drive
> replication error]]'
>
> 2019-07-18 09:29:13,936+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] HostName = ovvirt07
>
> 2019-07-18 09:29:13,937+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command
> 'VmReplicateDiskStartVDSCommand(HostName = ovvirt07,
> VmReplicateDiskParameters:{hostId='3a7bf85c-e92d-4559-908e-5eed2f5608d4',
> vmId='3b79d0c0-47e9-47c3-8511-980a8cfe147c',
> storagePoolId='0001-0001-0001-0001-0311',
> srcStorageDomainId='e54d835a-d8a5-44ae-8e17-fcba1c54e46f',
> targetStorageDomainId='4dabb6d6-4be5-458c-811d-6d5e87699640',
> imageGroupId='d2964ff9-10f7-4b92-8327-d68f3cfd5b50',
> imageId='62656632-8984-4b7e-8be1-fd2547ca0f98'})' execution failed:
> VDSGenericException: VDSErrorException: Failed to
> VmReplicateDiskStartVDS, error = Drive replication error, code = 55
>
> 2019-07-18 09:29:13,937+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] FINISH,
> VmReplicateDiskStartVDSCommand, return: , log id: 5b2afb0b
>
> 2019-07-18 09:29:13,937+02 ERROR
> [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed VmReplicateDiskStart (Disk
> 'd2964ff9-10f7-4b92-8327-d68f3cfd5b50' , VM
> '3b79d0c0-47e9-47c3-8511-980a8cfe147c')
>
> 2019-07-18 09:29:13,938+02 ERROR
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'LiveMigrateDisk' id:
> '8174c74c-8ab0-49fa-abfc-44d8b7c691e0' with children
> [03672b60-443b-47ba-834c-ac306d7129d0,
> 562522fc-6691-47fe-93bf-ef2c45e85676] failed when attempting to perform
> the next operation, marking as 'ACTIVE'
>
> 2019-07-18 09:29:13,938+02 ERROR
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-84)
> [c957e011-37e0-43aa-abe7-9bb633c38c5f] EngineException: Drive
> replication error (Failed with error replicaErr and code 55):
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> Drive replication error (Failed with error replicaErr and code 55)
>  at
> org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.replicateDiskStart(LiveMigrateDiskCommand.java:526)
>
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.performNextOperation(LiveMigrateDiskCommand.java:233)
>
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
>
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:77)
>
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
>
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
>
> [bll.jar:]
>  at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [rt.jar:1.8.0_212]
>  at
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> [rt.jar:1.8.0_212]
>  at
> 

[ovirt-users] Re: Cinderlib managed block storage, ceph jewel

2019-07-17 Thread Benny Zlotnik
Starting a VM should definitely work, I see in the error message:
"RBD image feature set mismatch. You can disable features unsupported by
the kernel with "rbd feature disable"
Adding "rbd default features = 3" to ceph.conf might help with that.

The other issue looks like a bug, and it would be great if you could submit
one [1].

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

On Wed, Jul 17, 2019 at 3:30 PM  wrote:

> Hi.
> I tried to use manged block storage to connect our oVirt cluster (Version
> 4.3.4.3-1.el7) to our ceph storage ( version 10.2.11). I used the
> instructions from
> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> At the moment, in the ovirt administration portal I can create and delete
> ceph volumes (ovirt disks) and attach them to virtual machines. If I try to
> launch a vm with connected ceph block storage volume, starting fails:
>
> 2019-07-16 19:39:09,251+02 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
> (default task-53) [7cada945] Unexpected return value: Status [code=926,
> message=Managed Volume Helper failed.: ('Error executing helper: Command
> [\'/usr/libexec/vdsm/managedvolume-helper\', \'attach\'] failed with rc=1
> out=\'\' err=\'oslo.privsep.daemon: Running privsep helper: [\\\'sudo\\\',
> \\\'privsep-helper\\\', \\\'--privsep_context\\\',
> \\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\',
> \\\'/tmp/tmpB6ZBAs/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> privsep daemon running as pid 112531\\nTraceback (most recent call
> last):\\n  File "/usr/libexec/vdsm/managedvolume-help
>  er", line 154, in \\nsys.exit(main(sys.argv[1:]))\\n  File
> "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper", line
> 137, in attach\\nattachment =
> conn.connect_volume(conn_info[\\\'data\\\'])\\n  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96, in
> connect_volume\\nrun_as_root=True)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
> _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
> 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 205,
> in _wrap\\nreturn self.channel.remote_call(name, args, kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in
> remote_call\\nraise exc_type(*result[2])\\
> noslo_concurrency.processutils.Pr
>  ocessExecutionError: Unexpected error while running command.\\nCommand:
> rbd map volume-a57dbd5c-2f66-460f-b37f-5f7dfa95d254 --pool ovirt-volumes
> --conf /tmp/brickrbd_TLMTkR --id ovirtcinderlib --mon_host
> 192.168.61.1:6789 --mon_host 192.168.61.2:6789 --mon_host
> 192.168.61.3:6789\\nExit code: 6\\nStdout: u\\\'RBD image feature set
> mismatch. You can disable features unsupported by the kernel with "rbd
> feature disable".nIn some cases useful info is found in syslog - try
> "dmesg | tail" or so.n\\\'\\nStderr: u\\\'rbd: sysfs write
> failednrbd: map failed: (6) No such device or addressn\\\'\\n\'',)]
> 2019-07-16 19:39:09,251+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
> (default task-53) [7cada945] Failed in 'AttachManagedBlockStorageVolumeVDS'
> method
>
> After disconnecting the disk, I can delete it (the volume disappears from
> ceph), but the disks stays in my oVirt administration portal as cinderlib
> means the disk ist still connected:
>
> 2019-07-16 19:42:53,551+02 INFO
> [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
> (EE-ManagedThreadFactory-engine-Thread-487362)
> [887b4d11-302f-4f8d-a3f9-7443a80a47ba] Running command: RemoveDiskCommand
> internal: false. Entities affected :  ID:
> a57dbd5c-2f66-460f-b37f-5f7dfa95d254 Type: DiskAction group DELETE_DISK
> with role type USER
> 2019-07-16 19:42:53,559+02 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.RemoveManagedBlockStorageDiskCommand]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [] Running command:
> RemoveManagedBlockStorageDiskCommand internal: true.
> 2019-07-16 19:42:56,240+02 ERROR
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [] cinderlib
> execution failed
> DBReferenceError: (psycopg2.IntegrityError) update or delete on table
> "volumes" violates foreign key constraint
> "volume_attachment_volume_id_fkey" on table 

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
No problem :)

>Is it possible to migrate existing vms to managed block storage?
We do not have OVF support or anything like that for MBS domains, but you can
attach MBS disks to existing VMs.
Or do you mean moving/copying existing disks to an MBS domain? In that case
the answer is unfortunately no.
>Also is it possible to host the hosted engine on this storage?
Unfortunately, no.

On Tue, Jul 9, 2019 at 4:57 PM Dan Poltawski 
wrote:

> On Tue, 2019-07-09 at 11:12 +0300, Benny Zlotnik wrote:
> > VM live migration is supported and should work
> > Can you add engine and cinderlib logs?
>
>
> Sorry - looks like once again this was a misconfig by me on the ceph
> side..
>
> Is it possible to migrate existing vms to managed block storage? Also
> is it possible to host the hosted engine on this storage?
>
>
> Thanks Again for your help,
>
> Dan
>
> >
> > On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> > > > I've now managed to succesfully create/mount/delete volumes!
> > >
> > > However, I'm seeing live migrations stay stuck. Is this supported?
> > >
> > > (gdb) py-list
> > >  345client.conf_set('rados_osd_op_timeout',
> > > timeout)
> > >  346client.conf_set('rados_mon_op_timeout',
> > > timeout)
> > >  347client.conf_set('client_mount_timeout',
> > > timeout)
> > >  348
> > >  349client.connect()
> > > >350ioctx = client.open_ioctx(pool)
> > >  351return client, ioctx
> > >  352except self.rados.Error:
> > >  353msg = _("Error connecting to ceph
> > > cluster.")
> > >  354LOG.exception(msg)
> > >  355client.shutdown()
> > >
> > >
> > > (gdb) py-bt
> > > #15 Frame 0x3ea0e50, for file /usr/lib/python2.7/site-
> > > packages/cinder/volume/drivers/rbd.py, line 350, in _do_conn
> > > (pool='storage-ssd', remote=None, timeout=-1, name='ceph',
> > > conf='/etc/ceph/ceph.conf', user='ovirt', client= > > remote
> > > 0x7fb1f4f83a60>)
> > > ioctx = client.open_ioctx(pool)
> > > #20 Frame 0x3ea4620, for file /usr/lib/python2.7/site-
> > > packages/retrying.py, line 217, in call
> > > (self= > > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > > _wait_incrementing_start=0, stop= > > 0x7fb1f4f23578>,
> > > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > > _wait_random_max=1000, _retry_on_result= > > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > > _wrap_exception=False, _wait_random_min=0,
> > > _wait_exponential_multiplier=1, wait= > > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>, fn= > > 0x7fb1f4f23668>, args=(None, None, None), kwargs={},
> > > start_time=1562658179214, attempt_number=1)
> > > attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
> > > #25 Frame 0x3e49d50, for file /usr/lib/python2.7/site-
> > > packages/cinder/utils.py, line 818, in _wrapper (args=(None, None,
> > > None), kwargs={}, r= > > remote
> > > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > > _wait_incrementing_start=0, stop= > > 0x7fb1f4f23578>,
> > > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > > _wait_random_max=1000, _retry_on_result= > > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > > _wrap_exception=False, _wait_random_min=0,
> > > _wait_exponential_multiplier=1, wait= > > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>)
> > > return r.call(f, *args, **kwargs)
> > > #29 Frame 0x7fb1f4f9a810, for file /usr/lib/python2.7/site-
> > > packages/cinder/volume/drivers/rbd.py, line 358, in
> > > _connect_to_rados
> > > (self= > > 0x7fb20583e830>, _is_replication_enabled=False, _execute= > > at
> > > remote 0x7fb2041242a8>, _active_config={'name': 'ceph', 'conf':
> > > '/etc/ceph/ceph.conf', 'user': 'ovirt'}, _active_backend_id=None,
> > > _initialized=False, db= > > 0x7fb203f8d520>, qos_specs_get= > > 0x7fb1f677d460>, _lock= > > _waiters=) at remote
> > > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > > 'inc_retry_interval': True, 'retry_interval': 1,
> > > 'max_retry_inter

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
VM live migration is supported and should work.
Can you add engine and cinderlib logs?
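
The default locations on the engine machine, in case it helps; a tail of both
around the time of the migration attempt is usually enough:

$ tail -n 500 /var/log/ovirt-engine/engine.log
$ tail -n 500 /var/log/ovirt-engine/cinderlib/cinderlib.log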

On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski 
wrote:

> On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> > I've now managed to succesfully create/mount/delete volumes!
>
> However, I'm seeing live migrations stay stuck. Is this supported?
>
> (gdb) py-list
>  345client.conf_set('rados_osd_op_timeout',
> timeout)
>  346client.conf_set('rados_mon_op_timeout',
> timeout)
>  347client.conf_set('client_mount_timeout',
> timeout)
>  348
>  349client.connect()
> >350ioctx = client.open_ioctx(pool)
>  351return client, ioctx
>  352except self.rados.Error:
>  353msg = _("Error connecting to ceph cluster.")
>  354LOG.exception(msg)
>  355client.shutdown()
>
>
> (gdb) py-bt
> #15 Frame 0x3ea0e50, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 350, in _do_conn
> (pool='storage-ssd', remote=None, timeout=-1, name='ceph',
> conf='/etc/ceph/ceph.conf', user='ovirt', client= 0x7fb1f4f83a60>)
> ioctx = client.open_ioctx(pool)
> #20 Frame 0x3ea4620, for file /usr/lib/python2.7/site-
> packages/retrying.py, line 217, in call
> (self= 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> _wait_incrementing_start=0, stop=,
> _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> _wait_random_max=1000, _retry_on_result= 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> _wrap_exception=False, _wait_random_min=0,
> _wait_exponential_multiplier=1, wait= 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>, fn= 0x7fb1f4f23668>, args=(None, None, None), kwargs={},
> start_time=1562658179214, attempt_number=1)
> attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
> #25 Frame 0x3e49d50, for file /usr/lib/python2.7/site-
> packages/cinder/utils.py, line 818, in _wrapper (args=(None, None,
> None), kwargs={}, r= 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> _wait_incrementing_start=0, stop=,
> _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> _wait_random_max=1000, _retry_on_result= 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> _wrap_exception=False, _wait_random_min=0,
> _wait_exponential_multiplier=1, wait= 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>)
> return r.call(f, *args, **kwargs)
> #29 Frame 0x7fb1f4f9a810, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 358, in _connect_to_rados
> (self= 0x7fb20583e830>, _is_replication_enabled=False, _execute= remote 0x7fb2041242a8>, _active_config={'name': 'ceph', 'conf':
> '/etc/ceph/ceph.conf', 'user': 'ovirt'}, _active_backend_id=None,
> _initialized=False, db= 0x7fb203f8d520>, qos_specs_get= 0x7fb1f677d460>, _lock= _waiters=) at remote
> 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> 'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
> 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> _backend_name='sqlalchemy', use_db_reconnect=False,
> get_by_id=,
> volume_type_get=) at remote
> 0x7fb2003aab10>, target_mapping={'tgtadm': 'cinder.vol...(truncated)
> return _do_conn(pool, remote, timeout)
> #33 Frame 0x7fb1f4f5b220, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 177, in __init__
> (self= remote 0x7fb20583e830>, _is_replication_enabled=False,
> _execute=, _active_config={'name':
> 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
> _active_backend_id=None, _initialized=False, db= at remote 0x7fb203f8d520>, qos_specs_get= 0x7fb1f677d460>, _lock= _waiters=) at remote
> 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> 'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
> 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> _backend_name='sqlalchemy', use_db_reconnect=False,
> get_by_id=,
> volume_type_get=) at remote
> 0x7fb2003aab10>, target_mapping={'tgtadm': ...(truncated)
> self.cluster, self.ioctx = driver._connect_to_rados(pool)
> #44 Frame 0x7fb1f4f9a620, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 298, in
> check_for_setup_error (self= remote 0x7fb20583e830>, _is_replication_enabled=False,
> _execute=, _active_config={'name':
> 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
> _active_backend_id=None, _initialized=False, db= at remote 0x7fb203f8d520>, qos_specs_get= 0x7fb1f677d460>, _lock= _waiters=) at remote
> 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> 'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
> 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> _backend_name='sqlalchemy', use_db_reconnect=False,
> get_by_id=,
> volume_type_get=) at remote
> 0x7fb2003aab10>, target_mapping={'tgtadm': 'cinder...(truncated)
> with 

[ovirt-users] Re: ISO Upload "Paused by System"

2019-07-09 Thread Benny Zlotnik
What does it say in the engine logs?
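
If you can retry the upload while watching the engine log, the reason for the
pause is usually recorded there; something like this on the engine machine is
a reasonable starting point:

$ tail -f /var/log/ovirt-engine/engine.log | grep -iE 'transfer|pause'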

On Tue, Jul 9, 2019 at 11:03 AM Ron Baduach -X (rbaduach - SQLINK LTD at
Cisco)  wrote:

> Hi Guys,
>
> I tried to upload an ISO, and from "chrome" it's just "paused by system"
> from the beginning
>
> From "Firefox", it started to upload, but after 0.5 hour it becomes
> "paused by system" too.
>
>
>
> I suspect it's the certificate issue, but I installed the certificate.
> [I'm working with Windows 10]
>
>
>
> Can you please tell me it this is the issue ?and if yes, how can I solve
> the Cert issue ?
>
>
>
> I'm just stuck here, so thanks in advance !
>


[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Can you try to create multiple ceph volumes manually via rbd from the
engine machine, so we can simulate what cinderlib does without using it?
This can be done with (replace <pool> with your pool name):
$ rbd -c ceph.conf create <pool>/vol1 --size 100M
$ rbd -c ceph.conf create <pool>/vol2 --size 100M
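
If both succeed, the volumes should show up with (same placeholder pool name
as above):

$ rbd -c ceph.conf ls <pool>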

On Mon, Jul 8, 2019 at 4:58 PM Dan Poltawski 
wrote:

> On Mon, 2019-07-08 at 16:49 +0300, Benny Zlotnik wrote:
> > Not too useful unfortunately :\
> > Can you try py-list instead of py-bt? Perhaps it will provide better
> > results
>
> (gdb) py-list
>   57if get_errno(ex) != errno.EEXIST:
>   58raise
>   59return listener
>   60
>   61def do_poll(self, seconds):
>  >62return self.poll.poll(seconds)
> >
>
>
> Thanks for you help,
>
>
> Dan
>
> > On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> > > > Hi,
> > > >
> > > > You have a typo, it's py-bt and I just tried it myself, I only
> > > had to
> > > > install:
> > > > $ yum install -y python-devel
> > > > (in addition to the packages specified in the link)
> > >
> > > Thanks - this is what I get:
> > >
> > > #3 Frame 0x7f2046b59ad0, for file /usr/lib/python2.7/site-
> > > packages/eventlet/hubs/epolls.py, line 62, in do_poll
> > > (self= > > 0x7f20661059b0>,
> > > debug_exceptions=True, debug_blocking_resolution=1, modify= > > in
> > > method modify of select.epoll object at remote 0x7f2048455168>,
> > > running=True, debug_blocking=False, listeners={'read': {20:
> > >  > > greenlet.greenlet
> > > object at remote 0x7f2046878410>, spent=False,
> > > greenlet=,
> > > evtype='read',
> > > mark_as_closed=,
> > > tb= > > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > > at
> > > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > > greenlet=, closed=[],
> > > stopping=False, timers=[(,
> > > , tpl=( > > switch
> > > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=F...(truncated)
> > > return self.poll.poll(seconds)
> > > #6 Frame 0x32fbf30, for file /usr/lib/python2.7/site-
> > > packages/eventlet/hubs/poll.py, line 85, in wait
> > > (self= > > 0x7f20661059b0>,
> > > debug_exceptions=True, debug_blocking_resolution=1, modify= > > in
> > > method modify of select.epoll object at remote 0x7f2048455168>,
> > > running=True, debug_blocking=False, listeners={'read': {20:
> > >  > > greenlet.greenlet
> > > object at remote 0x7f2046878410>, spent=False,
> > > greenlet=,
> > > evtype='read',
> > > mark_as_closed=,
> > > tb= > > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > > at
> > > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > > greenlet=, closed=[],
> > > stopping=False, timers=[(,
> > > , tpl=( > > switch
> > > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=False) at r...(truncated)
> > > presult = self.do_poll(seconds)
> > > #10 Frame 0x7f2046afca00, for file /usr/lib/python2.7/site-
> > > packages/eventlet/hubs/hub.py, line 346, in run
> > > (self= > > 0x7f20661059b0>,
> > > debug_exceptions=True, debug_blocking_resolution=1, modify= > > in
> > > method modify of select.epoll object at remote 0x7f2048455168>,
> > > running=True, debug_blocking=False, listeners={'read': {20:
> > >  > > greenlet.greenlet
> > > object at remote 0x7f2046878410>, spent=False,
> > > greenlet=,
> > > evtype='read',
> > > mark_as_closed=,
> > > tb= > > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > > at
> > > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > > greenlet=, closed=[],
> > > stopping=False, timers=[(,
> > > , tpl=( > > switch
> > > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=False) ...(truncated)
> > > self.wait(sleep_time)
> > >
> > >
> > >
> > > >
> > > > On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski <
> > > > dan.poltaw...@tnp.net.uk> wrote:
> > > > > Hi,
> > > > >
> > 

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Not too useful unfortunately :\
Can you try py-list instead of py-bt? Perhaps it will provide better results

On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski 
wrote:

> On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> > Hi,
> >
> > You have a typo, it's py-bt and I just tried it myself, I only had to
> > install:
> > $ yum install -y python-devel
> > (in addition to the packages specified in the link)
>
> Thanks - this is what I get:
>
> #3 Frame 0x7f2046b59ad0, for file /usr/lib/python2.7/site-
> packages/eventlet/hubs/epolls.py, line 62, in do_poll
> (self=,
> debug_exceptions=True, debug_blocking_resolution=1, modify= method modify of select.epoll object at remote 0x7f2048455168>,
> running=True, debug_blocking=False, listeners={'read': {20:
>  object at remote 0x7f2046878410>, spent=False,
> greenlet=, evtype='read',
> mark_as_closed=, tb= method throw of greenlet.greenlet object at remote 0x7f2046878410>) at
> remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> greenlet=, closed=[],
> stopping=False, timers=[(,
> , tpl=( of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> called=F...(truncated)
> return self.poll.poll(seconds)
> #6 Frame 0x32fbf30, for file /usr/lib/python2.7/site-
> packages/eventlet/hubs/poll.py, line 85, in wait
> (self=,
> debug_exceptions=True, debug_blocking_resolution=1, modify= method modify of select.epoll object at remote 0x7f2048455168>,
> running=True, debug_blocking=False, listeners={'read': {20:
>  object at remote 0x7f2046878410>, spent=False,
> greenlet=, evtype='read',
> mark_as_closed=, tb= method throw of greenlet.greenlet object at remote 0x7f2046878410>) at
> remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> greenlet=, closed=[],
> stopping=False, timers=[(,
> , tpl=( of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> called=False) at r...(truncated)
> presult = self.do_poll(seconds)
> #10 Frame 0x7f2046afca00, for file /usr/lib/python2.7/site-
> packages/eventlet/hubs/hub.py, line 346, in run
> (self=,
> debug_exceptions=True, debug_blocking_resolution=1, modify= method modify of select.epoll object at remote 0x7f2048455168>,
> running=True, debug_blocking=False, listeners={'read': {20:
>  object at remote 0x7f2046878410>, spent=False,
> greenlet=, evtype='read',
> mark_as_closed=, tb= method throw of greenlet.greenlet object at remote 0x7f2046878410>) at
> remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> greenlet=, closed=[],
> stopping=False, timers=[(,
> , tpl=( of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> called=False) ...(truncated)
> self.wait(sleep_time)
>
>
>
> >
> > On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > Hi,
> > >
> > > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> > > > > Any chance you can setup gdb[1] so we can find out where it's
> > > > stuck
> > > > > exactly?
> > >
> > > Yes, abolutely - but I will need some assistance in getting GDB
> > > configured in the engine as I am not very familar with it - or how
> > > to enable the correct repos to get the debug info.
> > >
> > > $ gdb python 54654
> > >
> > > [...]
> > >
> > > Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols
> > > from /lib64/libfreeblpriv3.so...(no debugging symbols
> > > found)...done.
> > > (no debugging symbols found)...done.
> > > Loaded symbols for /lib64/libfreeblpriv3.so
> > > 0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
> > > Missing separate debuginfos, use: debuginfo-install python-2.7.5-
> > > 80.el7_6.x86_64
> > > (gdb) pt-bt
> > > Undefined command: "pt-bt".  Try "help".
> > >
> > >
> > > > > Also, which version of ovirt are you using?
> > >
> > > Using 4.3.4
> > >
> > > > > Can you also check the ceph logs for anything suspicious?
> > >
> > > I haven't seen anything so far, but is an entirely resonable
> > > possibility this is ceph misoconfiguraiton as we are learning about
> > > both tools.
> > >
> > >
> > > thanks,
> > >
> > > Dan
> > >
> > > > >
> > > > >
> > > > > [1] - https://wiki.python.org/moin/DebuggingWithGdb
> > > > > $ gdb python 
> > > > > then `py-bt`
> > > > >
> > > > > On Thu, Jul 4, 2019 at 7:00 PM

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Hi,

You have a typo; the command is py-bt. I just tried it myself, and I only had
to install:
$ yum install -y python-devel
(in addition to the packages specified in the link)

On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski 
wrote:

> Hi,
>
> On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
>
> > Any chance you can setup gdb[1] so we can find out where it's stuck
> > exactly?
>
>
> Yes, absolutely - but I will need some assistance in getting GDB configured
> in the engine as I am not very familiar with it - or how to enable the
> correct repos to get the debug info.
>
> $ gdb python 54654
>
> [...]
>
> Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols from
> /lib64/libfreeblpriv3.so...(no debugging symbols found)...done.
> (no debugging symbols found)...done.
> Loaded symbols for /lib64/libfreeblpriv3.so
> 0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install
> python-2.7.5-80.el7_6.x86_64
> (gdb) pt-bt
> Undefined command: "pt-bt".  Try "help".
>
>
> > Also, which version of ovirt are you using?
>
>
> Using 4.3.4
>
> > Can you also check the ceph logs for anything suspicious?
>
>
> I haven't seen anything so far, but it is an entirely reasonable possibility
> that this is a ceph misconfiguration, as we are learning about both tools.
>
>
> thanks,
>
> Dan
>
> >
> >
> > [1] - https://wiki.python.org/moin/DebuggingWithGdb
> > $ gdb python 
> > then `py-bt`
> >
> > On Thu, Jul 4, 2019 at 7:00 PM  wrote:
>
> > > > Can you provide logs? mainly engine.log and cinderlib.log
> > > > (/var/log/ovirt-engine/cinderlib/cinderlib.log)
> > >
> > >
> > > If I create two volumes, the first one succeeds, the
> > > second one hangs. If I look in the process list after creating the
> > > second volume which doesn't succeed, I see the python ./cinderlib-
> > > client.py create_volume [...] command still running.
> > >
> > > On the ceph side, I can see only the one rbd volume.
> > >
> > > Logs below:
> > >
> > >
> > >
> > > --- cinderlib.log --
> > >
> > > 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend
> > > stats [b07698bb-1688-472f-841b-70a9d52a250d]
> > > 2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume
> > > '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB [0b0f0d6f-
> > > cb20-440a-bacb-7f5ead2b4b4d]
> > > 2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume
> > > '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB [a793bfc9-
> > > fc37-4711-a144-d74c100cc75b]
> > >
> > > --- engine.log ---
> > >
> > > 2019-07-04 16:46:54,062+01 INFO
> > > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default
> > > task-22) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > > AddDiskCommand internal: false. Entities affected :  ID: 31536d80-
> > > ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> > > with role type USER
> > > 2019-07-04 16:46:54,150+01 INFO
> > > [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > > kStorageDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-
> > > Thread-1) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > > AddManagedBlockStorageDiskCommand internal: true.
> > > 2019-07-04 16:46:56,863+01 INFO
> > > [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> > > (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [0b0f0d6f-
> > > cb20-440a-bacb-7f5ead2b4b4d] cinderlib output:
> > > 2019-07-04 16:46:56,912+01 INFO
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > > or] (default task-22) [] EVENT_ID:
> > > USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was
> > > successfully added.
> > > 2019-07-04 16:47:00,126+01 INFO
> > > [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback
> > > ] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [0b0f0d6f-
> > > cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id: '15fe157d-7adb-
> > > 4031-9e81-f51aa0b6528f' child commands '[d056397a-7ed9-4c01-b880-
> > > dd518421a2c6]' executions were completed, status 'SUCCEEDED'
> > > 2019-07-04 16:47:01,136+01 INFO
> > > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-
> > > ManagedThreadFactory-engineScheduled-Thread-99) [0b0f0d6f-cb20-

[ovirt-users] Re: Managed Block Storage

2019-07-07 Thread Benny Zlotnik
Hi,

Any chance you can setup gdb[1] so we can find out where it's stuck exactly?
Also, which version of ovirt are you using?
Can you also check the ceph logs for anything suspicious?


[1] - https://wiki.python.org/moin/DebuggingWithGdb
$ gdb python 
then `py-bt`

On Thu, Jul 4, 2019 at 7:00 PM  wrote:

> > Can you provide logs? mainly engine.log and cinderlib.log
> > (/var/log/ovirt-engine/cinderlib/cinderlib.log)
>
>
> If I create two volumes, the first one succeeds, the second
> one hangs. If I look in the process list after creating the second volume
> which doesn't succeed, I see the python ./cinderlib-client.py
> create_volume [...] command still running.
>
> On the ceph side, I can see only the one rbd volume.
>
> Logs below:
>
>
>
> --- cinderlib.log --
>
> 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend stats
> [b07698bb-1688-472f-841b-70a9d52a250d]
> 2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume
> '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d]
> 2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume
> '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB
> [a793bfc9-fc37-4711-a144-d74c100cc75b]
>
> --- engine.log ---
>
> 2019-07-04 16:46:54,062+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command: AddDiskCommand
> internal: false. Entities affected :  ID:
> 31536d80-ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> with role type USER
> 2019-07-04 16:46:54,150+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-1)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> AddManagedBlockStorageDiskCommand internal: true.
> 2019-07-04 16:46:56,863+01 INFO
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-1)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] cinderlib output:
> 2019-07-04 16:46:56,912+01 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The
> disk 'test0' was successfully added.
> 2019-07-04 16:47:00,126+01 INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-95)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id:
> '15fe157d-7adb-4031-9e81-f51aa0b6528f' child commands
> '[d056397a-7ed9-4c01-b880-dd518421a2c6]' executions were completed, status
> 'SUCCEEDED'
> 2019-07-04 16:47:01,136+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
> 2019-07-04 16:47:01,141+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand'
> successfully.
> 2019-07-04 16:47:01,145+01 WARN
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99) [] VM is null - no
> unlocking
> 2019-07-04 16:47:01,186+01 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99) [] EVENT_ID:
> USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was successfully
> added.
> 2019-07-04 16:47:19,446+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22)
> [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command: AddDiskCommand
> internal: false. Entities affected :  ID:
> 31536d80-ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> with role type USER
> 2019-07-04 16:47:19,464+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
> [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command:
> AddManagedBlockStorageDiskCommand internal: true.
> 2019-07-04 16:48:19,501+01 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'commandCoordinator' is using 1 threads out of 10, 1 threads waiting for
> tasks.
> 2019-07-04 16:48:19,501+01 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
> 2019-07-04 16:48:19,501+01 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> 

[ovirt-users] Re: Managed Block Storage

2019-07-04 Thread Benny Zlotnik
On Thu, Jul 4, 2019 at 1:03 PM  wrote:

> I'm testing out the managed storage to connect to ceph and I have a few
> questions:

* Would I be correct in assuming that the hosted engine VM needs
> connectivity to the storage and not just the underlying hosts themselves?
> It seems like the cinderlib client runs from the engine?

Yes, this is correct

> * Does the ceph config and keyring need to be replicated onto each
> hypervisor/host?
>
No, see [1]; the keyring and ceph config only need to be present on the engine
machine

> * I have managed to do one block operation so far (I've created a volume
> which is visible on the ceph side), but multiple other operations have
> failed and are 'running' in the engine task list. Is there any way I can
> increase debugging to see what's happening?
>
Can you provide logs? Mainly engine.log and cinderlib.log
(/var/log/ovirt-engine/cinderlib/cinderlib.log)
[1] -
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
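
As a side note, if you want to rule out engine-side issues, you can exercise
the same ceph backend with a few lines of standalone cinderlib run on the
engine machine. This is only a rough sketch: the driver option values below
(pool, user, config and keyring paths) are assumptions for illustration and
need to match your own setup, and by default this uses cinderlib's in-memory
metadata persistence, so volumes created this way are not tracked by the
engine:

  # standalone cinderlib smoke test against a ceph/RBD backend (sketch only)
  import cinderlib as cl

  backend = cl.Backend(
      volume_driver='cinder.volume.drivers.rbd.RBDDriver',
      volume_backend_name='ceph',
      rbd_pool='volumes',                                     # assumed pool name
      rbd_user='ovirt',                                       # assumed cephx user
      rbd_ceph_conf='/etc/ceph/ceph.conf',                    # must exist on the engine
      rbd_keyring_conf='/etc/ceph/ceph.client.ovirt.keyring', # assumed keyring path
  )

  vol = backend.create_volume(size=2)   # size in GB, like the disks in your logs
  print(vol.id, vol.status)             # expect 'available'
  vol.delete()

If a create hangs here as well, the problem is between the rbd driver and the
ceph cluster rather than in the engine.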


>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N6WAPFHQLVCQZDM7ON74ZQUFNVSOAFA5/


[ovirt-users] Re: iso files

2019-06-24 Thread Benny Zlotnik
yes, you can use ovirt-imageio[1]

[1] -
https://ovirt.org/develop/release-management/features/storage/image-upload.html
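
The upload can be driven from the UI (Storage > Disks > Upload) or through the
REST API. Below is only a rough sketch of the API flow with the Python SDK; the
full, supported version is the upload_disk.py example shipped with
ovirt-engine-sdk-python, and the URL, credentials, disk size and storage domain
name here are placeholders, not values from your setup (the ISO content type
also assumes a 4.2+ engine and SDK):

  # sketch: upload an ISO as a disk on a data domain (ovirtsdk4)
  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',   # placeholder
      username='admin@internal',
      password='password',                                  # placeholder
      insecure=True,
  )

  # 1. create a raw disk big enough for the ISO on a data domain
  disks_service = connection.system_service().disks_service()
  disk = disks_service.add(
      types.Disk(
          name='my-install-iso',
          content_type=types.DiskContentType.ISO,
          format=types.DiskFormat.RAW,
          provisioned_size=2 * 1024**3,                     # placeholder size
          storage_domains=[types.StorageDomain(name='data')],
      )
  )

  # 2. once the disk is unlocked, start an image transfer for it
  transfers_service = connection.system_service().image_transfers_service()
  transfer = transfers_service.add(
      types.ImageTransfer(
          disk=types.Disk(id=disk.id),
          direction=types.ImageTransferDirection.UPLOAD,
      )
  )

  # 3. PUT the ISO bytes to transfer.transfer_url (or transfer.proxy_url)
  #    with any HTTP client (omitted here), then finalize the transfer
  transfers_service.image_transfer_service(transfer.id).finalize()
  connection.close()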

On Mon, Jun 24, 2019 at 4:34 PM  wrote:

> Hi,
>
> Is it possible to install a VM without an ISO domain, for version 4.3.4.3?
>
> Thanks
>
>
> --
> --
> Jose Ferradeira
> http://www.logicworks.pt
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/P2XUVZKEHSZBDWGSNGCZYDME4HJS34WA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V5CU2RGNAKA6C53LA7KKPWPWP532IY6J/


[ovirt-users] Re: Attaching/Detaching Export Domain from CLI

2019-06-23 Thread Benny Zlotnik
you can use the remove action[1], notice you need to send a DELETE request

http://ovirt.github.io/ovirt-engine-api-model/4.3/#services/attached_storage_domain/methods/remove
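
For reference, the same flow with the Python SDK looks roughly like the sketch
below (the engine URL, password, and the data center and storage domain IDs are
placeholders, and a real script should wait for the domain to actually reach
maintenance before removing it). The remove() call is what issues the DELETE
request:

  # sketch: deactivate and then detach an attached storage domain (ovirtsdk4)
  import ovirtsdk4 as sdk

  connection = sdk.Connection(
      url='https://ovirt.link/ovirt-engine/api',             # placeholder
      username='admin@internal',
      password='abc',                                        # placeholder
      insecure=True,
  )

  dc_service = connection.system_service().data_centers_service() \
      .data_center_service('<datacenter-id>')                # placeholder
  attached_sd_service = dc_service.storage_domains_service() \
      .storage_domain_service('<export-domain-id>')          # placeholder

  attached_sd_service.deactivate()   # put the export domain into maintenance
  # ...poll attached_sd_service.get().status until it reports maintenance...
  attached_sd_service.remove()       # detach; this sends the DELETE request
  connection.close()

The plain-REST equivalent of remove() is simply a DELETE on the same
.../datacenters/<dc_id>/storagedomains/<sd_id> URL you used for the attach and
deactivate calls.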

On Sun, Jun 23, 2019 at 4:05 PM  wrote:

> Hello,
>
> Thanks Benny! I was able to attach and detach using the links you gave me.
>
> Namely, I used curl to attach:
>
> curl \
> --insecure \
> --user 'admin@internal:abc' \
> --request POST \
> --header 'Version: 4' \
> --header 'Content-Type: application/xml' \
> --header 'Accept: application/xml' \
> --data '
> <storage_domain>
>   <name>export</name>
> </storage_domain>
> ' \
>
> https://ovirt.link/ovirt-engine/api/datacenters/7e0895d0-a76d-11e8-8f5b-00163e4d9e5f/storagedomains
>
> And another curl to put it into maintenance, using the deactivate method:
> curl \
> --insecure \
> --user 'admin@internal:abc' \
> --request POST \
> --header 'Version: 3' \
> --header 'Content-Type: application/xml' \
> --header 'Accept: application/xml' \
> --data '
> <action>
> <async>True</async>
> </action>
> ' \
>
> https://ovirt.link/ovirt-engine/api/datacenters/0002-0002-0002-0002-00bb/storagedomains/0537c42b-6076-4f17-89fd-c0244ebde051/deactivate
>
> But after this, how do you DETACH the export domain via CLI/CURL? I have
> tried using the StorageDomainServerConnection but with no success.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OMOOY7PDVEWTWNU7QIMDRTAX6KA5PKES/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KI7GBOZL4KNN2QH4UMFUYF7XMR2DIGXD/


[ovirt-users] Re: Attaching/Detaching Export Domain from CLI

2019-06-23 Thread Benny Zlotnik
You can do this using the SDK/REST API[1][2]


[1]
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/attach_nfs_iso_storage_domain.py
[2]
http://ovirt.github.io/ovirt-engine-api-model/4.3/#_attach_storage_domains_to_data_center
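
Condensed, the example in [1] boils down to something like the following
sketch (the connection details, data center ID and storage domain name are
placeholders; the real example also handles errors and uses a CA file instead
of insecure=True):

  # sketch: attach an existing storage domain to a data center (ovirtsdk4)
  import time
  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',     # placeholder
      username='admin@internal',
      password='password',                                   # placeholder
      insecure=True,
  )

  dc_service = connection.system_service().data_centers_service() \
      .data_center_service('<datacenter-id>')                # placeholder
  attached_sds_service = dc_service.storage_domains_service()

  attached_sd = attached_sds_service.add(
      types.StorageDomain(name='export'),                    # domain to attach
  )
  attached_sd_service = attached_sds_service.storage_domain_service(attached_sd.id)

  # attaching normally also activates the domain; wait until it reports ACTIVE
  while attached_sd_service.get().status != types.StorageDomainStatus.ACTIVE:
      time.sleep(5)
  connection.close()

Detaching is the reverse: deactivate() and then remove() on the same attached
storage domain service, as discussed in the other branch of this thread.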

On Sun, Jun 23, 2019 at 11:24 AM Alexander Stockel | eMAG, Technology <
alexander.stoc...@emag.ro> wrote:

> Hello,
>
>
>
> We have 2 ovirt clusters, one with 3.5.5-1 and the other with 4.3.3.
>
>
>
> We started to move some VM’s using an export domain using thr WEB GUI
> which is fine but we want to do this from CLI(ovirt-shell or api).
>
>
>
> How can one do these operations from CLI:
>
> -  Attach export domain -
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/v2v_guide/sect-attaching_an_export_storage_domain
>
> -  Detach and maintenance an export domain -
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.1/html/Administration_Guide/To_edit_a_cluster.html
>
>
>
> Are these operations actually supported anywhere via CLI? If yes, how…?
> I could not find them anywhere on the forums or in any official
> documentation.
>
>
>
> Thank you!
>
>
>
> Cu stima / Kind Regards / Mit freundlichen Grüßen,
>
>
>
> Alexander Stöckel
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WPTZDO3GMRX2N4LJR5PUWGVH6QUYFZKH/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D6EA2WRADNBRA7PYBNMYYK54HCUTYN5E/


[ovirt-users] Re: Can't import some VMs after storage domain detach and reattach to new datacenter.

2019-06-23 Thread Benny Zlotnik
Can you attach engine and vdsm logs?

On Sun, Jun 23, 2019 at 11:29 AM m black  wrote:

> Hi.
>
> I have a problem with importing some VMs after importing storage domain in
> new datacenter.
>
> I have 5 servers with oVirt version 4.1.7, a hosted-engine setup and a
> datacenter with iSCSI, FC and NFS storage. I also have 3 servers with
> oVirt 4.3.4, hosted-engine and NFS storage.
>
> I've set the iSCSI and FC storage domains to maintenance and detached them
> successfully in the 4.1.7 datacenter.
> Then I imported these storage domains via Import Domain in the 4.3.4
> datacenter successfully.
>
> After the storage domains were imported into the new 4.3.4 datacenter I
> tried to import VMs from the VM Import tab on those storage domains.
>
> On the FC storage it was good, all VMs imported and started, all VMs in
> place.
>
> But with the iSCSI storage I've got problems:
> On the iSCSI storage some VMs imported and started, but some of them are
> missing; some of the missing VMs' disks show up under Disk Import. I tried
> to import those disks from the Disk Import tab and got the error 'Failed to
> register disk'.
> I tried to scan disks with 'Scan Disks' on the storage domain, and also tried
> 'Update OVF' - no result.
>
> What caused this? What can I do to recover the missing VMs? What logs should
> I examine?
> Can it be storage domain disk corruption?
>
> Please, help.
>
> Thank you.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MF5IUXURKIQZNNG4YW6ELENFD4GZIDQZ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MC6GQKEQ3KYQ6FONRYNPTO5BQQ2YSES3/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Benny Zlotnik
Also, what is the storage domain type? Block or File?

On Thu, Jun 13, 2019 at 2:46 PM Benny Zlotnik  wrote:
>
> Can you attach vdsm and engine logs?
> Does this happen for new VMs as well?
>
> On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter  wrote:
> >
> > after upgrading from 4.2 to 4.3, after a vm live migrates it's disk
> > images are become owned by root:root. Live migration succeeds and the vm
> > stays up, but after shutting down the VM from this point, starting it up
> > again will cause it to fail. At this point i have to go in and change
> > the permissions back to vdsm:kvm on the images, and the VM will boot
> > again.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GDZ6QW645UVEPMKAEVZNQK5BVGQJEWPJ/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Benny Zlotnik
Can you attach vdsm and engine logs?
Does this happen for new VMs as well?

On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter  wrote:
>
> after upgrading from 4.2 to 4.3, after a vm live migrates it's disk
> images are become owned by root:root. Live migration succeeds and the vm
> stays up, but after shutting down the VM from this point, starting it up
> again will cause it to fail. At this point i have to go in and change
> the permissions back to vdsm:kvm on the images, and the VM will boot
> again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T66LXURNJDMAMEUQU2EEHRWRNYXBHX72/


[ovirt-users] Re: Nvme over fabric array support through OVirt MANAGED_BLOCK_STORAGE Domain

2019-05-30 Thread Benny Zlotnik
If there is a backend driver available it should work. We did not test
this though, so it would be great to get bug reports if you run into any
trouble.
Upon VM migration the disk should be automatically connected to the
target host (and disconnected from the origin).

On Thu, May 30, 2019 at 10:35 AM  wrote:
>
> For an NVMe over fabric storage array that supports a Cinder backend plugin,
> is it possible to use a MANAGED_BLOCK_STORAGE domain to configure a virtual
> disk? In this case, is it required to have a single virtual disk accessible
> from all the VDSM hosts (within the cluster) to support VM migration?
>
> Thanks,
> Amit
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QTBLWYR3I7KOPTKQILA5EUYXGHOXKDG6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EZAEIPOY62XDGMZ5NFSP737LCQW74GVU/


[ovirt-users] Re: Old mailing list SPAM

2019-05-15 Thread Benny Zlotnik
yes, I reported it and it's being worked on[1]

[1] https://ovirt-jira.atlassian.net/browse/OVIRT-2728

On Wed, May 15, 2019 at 4:05 PM Markus Stockhausen
 wrote:
>
> Hi,
>
> does anyone currently get old mails of 2016 from the mailing list?
> We are spammed with something like this from teknikservice.nu:
>
> ...
> Received: from mail.ovirt.org (localhost [IPv6:::1]) by mail.ovirt.org
>  (Postfix) with ESMTP id A33EA46AD3; Tue, 14 May 2019 14:48:48 -0400 (EDT)
>
> Received: by mail.ovirt.org (Postfix, from userid 995) id D283A407D0; Tue, 14
>  May 2019 14:42:29 -0400 (EDT)
>
> Received: from bauhaus.teknikservice.nu (smtp.teknikservice.nu [81.216.61.60])
> (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No
>  client certificate requested) by mail.ovirt.org (Postfix) with ESMTPS id
>  BF954467FE for ; Tue, 14 May 2019 14:36:54 -0400 (EDT)
>
> Received: by bauhaus.teknikservice.nu (Postfix, from userid 0) id 259822F504;
>  Tue, 14 May 2019 20:32:33 +0200 (CEST) <- 3 YEAR TIME WARP ?
>
> Received: from washer.actnet.nu (washer.actnet.nu [212.214.67.187]) (using
>  TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client
>  certificate requested) by bauhaus.teknikservice.nu (Postfix) with ESMTPS id
>  430FEDA541 for ; Thu,  6 Oct 2016 18:02:51 +0200 (CEST)
>
> Received: from lists.ovirt.org (lists.ovirt.org [173.255.252.138]) (using
>  TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client
>  certificate requested) by washer.actnet.nu (Postfix) with ESMTPS id
>  D75A82293FC for ; Thu,  6 Oct 2016 18:04:11 +0200
>  (CEST)
> ...
>
> Markus
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XI3LV4GPACT7ILZ3BNJLHHQBEWI3HWLI/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YIILZ6TRE3ZXCIPPD3N2F363AAH3HLS/


[ovirt-users] Re: Template Disk Corruption

2019-04-29 Thread Benny Zlotnik
What storage are you using?
I could not reproduce this.
Can you check if the qcow file created is intact? You can probably use
`qemu-img check` to do this




On Wed, Apr 24, 2019 at 11:26 PM Alex McWhirter  wrote:
>
> It happens for every template, when you make a desktop VM out of it and then
> delete that VM. If you make a server VM there are no issues.
>
>
> On 2019-04-24 09:30, Benny Zlotnik wrote:
> > Does it happen all the time? For every template you create?
> > Or is it for a specific template?
> >
> > On Wed, Apr 24, 2019 at 12:59 PM Alex McWhirter 
> > wrote:
> >>
> >> oVirt is 4.2.7.5
> >> VDSM is 4.20.43
> >>
> >> Not sure which logs are applicable; I don't see any obvious errors in
> >> vdsm.log or engine.log. After you delete the desktop VM and create
> >> another based on the template, the new VM still starts, but it reports
> >> disk read errors and fails to boot.
> >>
> >> On 2019-04-24 05:01, Benny Zlotnik wrote:
> >> > can you provide more info (logs, versions)?
> >> >
> >> > On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter 
> >> > wrote:
> >> >>
> >> >> 1. Create server template from server VM (so it's a full copy of the
> >> >> disk)
> >> >>
> >> >> 2. From template create a VM, override server to desktop, so that it
> >> >> becomes a qcow2 overlay to the template raw disk.
> >> >>
> >> >> 3. Boot VM
> >> >>
> >> >> 4. Shutdown VM
> >> >>
> >> >> 5. Delete VM
> >> >>
> >> >>
> >> >>
> >> >> Template disk is now corrupt, any new machines made from it will not
> >> >> boot.
> >> >>
> >> >>
> >> >> I can't see why this happens as the desktop optimized VM should have
> >> >> just been an overlay qcow file...
> >> >> ___
> >> >> Users mailing list -- users@ovirt.org
> >> >> To unsubscribe send an email to users-le...@ovirt.org
> >> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> >> oVirt Code of Conduct:
> >> >> https://www.ovirt.org/community/about/community-guidelines/
> >> >> List Archives:
> >> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/
> >> > ___
> >> > Users mailing list -- users@ovirt.org
> >> > To unsubscribe send an email to users-le...@ovirt.org
> >> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> > oVirt Code of Conduct:
> >> > https://www.ovirt.org/community/about/community-guidelines/
> >> > List Archives:
> >> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4AWEXB3CAZAAHZGI7X2FH25UXRZWHA7W/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
Does it happen all the time? For every template you create?
Or is it for a specific template?

On Wed, Apr 24, 2019 at 12:59 PM Alex McWhirter  wrote:
>
> oVirt is 4.2.7.5
> VDSM is 4.20.43
>
> Not sure which logs are applicable; I don't see any obvious errors in
> vdsm.log or engine.log. After you delete the desktop VM and create
> another based on the template, the new VM still starts, but it reports
> disk read errors and fails to boot.
>
> On 2019-04-24 05:01, Benny Zlotnik wrote:
> > can you provide more info (logs, versions)?
> >
> > On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter 
> > wrote:
> >>
> >> 1. Create server template from server VM (so it's a full copy of the
> >> disk)
> >>
> >> 2. From template create a VM, override server to desktop, so that it
> >> becomes a qcow2 overlay to the template raw disk.
> >>
> >> 3. Boot VM
> >>
> >> 4. Shutdown VM
> >>
> >> 5. Delete VM
> >>
> >>
> >>
> >> Template disk is now corrupt, any new machines made from it will not
> >> boot.
> >>
> >>
> >> I can't see why this happens as the desktop optimized VM should have
> >> just been an overlay qcow file...
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2LJMZXLO7UU7OXI6KHZSOYUIVTC6KA6R/


[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
can you provide more info (logs, versions)?

On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter  wrote:
>
> 1. Create server template from server VM (so it's a full copy of the
> disk)
>
> 2. From template create a VM, override server to desktop, so that it
> becomes a qcow2 overlay to the template raw disk.
>
> 3. Boot VM
>
> 4. Shutdown VM
>
> 5. Delete VM
>
>
>
> Template disk is now corrupt, any new machines made from it will not
> boot.
>
>
> I can't see why this happens as the desktop optimized VM should have
> just been an overlay qcow file...
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4DGO7N3LL4DPE6PCMZSIJLXPAA6UTBU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NRRYHYBKRUGIVPOKDUKDD4F523WEF6J/


[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-22 Thread Benny Zlotnik
 10:17:19,754+ INFO  (tasks/7) [storage.Volume] Changing volume 
> u'/rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/e1c182ae-f25d-464c-b557-93088e894452/a1157ad0-44a8-4073-a20c-468978973f4f'
>  permission to 0660 (fileVolume:480)
> 2019-04-20 10:17:19,800+ INFO  (tasks/7) [storage.VolumeManifest] Volume: 
> preparing volume 
> 0e01f014-530b-4067-aa1d-4e9378626a9d/a1157ad0-44a8-4073-a20c-468978973f4f 
> (volume:567)
>
> I tried to filter the usual noise out of VDSM.log so hopefully this is the 
> relevant bit you need - let me know if the full thing would help.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 20 Apr 2019, at 10:54, Benny Zlotnik  wrote:
>
> Sorry, I kind of lost track of what the problem is
>
> The  "KeyError: 'appsList'" issue is a known bug[1]
>
> If a manual (not via vdsm) run of qemu-img is actually stuck, then
> let's involve the qemu-discuss list, with the version of the relevant
> packages (qemu, qemu-img, kernel, your distro) and the output of gdb
> commands
>
> [1] - https://bugzilla.redhat.com/show_bug.cgi?id=1690301
>
>
>
> On Sat, Apr 20, 2019 at 1:36 AM Callum Smith  wrote:
>
>
> Dear Benny and others,
>
> So it seems I wasn't being patient with GDB and it does show me some output.
> This qemu-img convert error is even preventing the ovirt-node update from
> 4.3.2 to 4.3.3.1. I get a feeling this is an unrelated error, but I thought
> I'd be complete:
>
> Excuse any typos, I'm having to type this manually from a remote session, but
> the error is:
>
> [733272.427922] hid-generic 0003:0624:0249.0001: usb_submit_urb(ctrl) failed:
> -19
>
> If this bug is preventing even a local yum update, I can't see how it's
> anything other than somehow involved with the hardware of the hypervisor; our
> network and storage configuration must be irrelevant to this at this
> stage?
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 11 Apr 2019, at 12:00, Callum Smith  wrote:
>
> Without the sudo, and running in a dir that root has access to, gdb has
> zero output:
>
> 
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 11 Apr 2019, at 11:54, Callum Smith  wrote:
>
> Some more information:
>
> running qemu-img convert manually having captured the failed attempt from the 
> previous:
>
> sudo -u vdsm /usr/bin/qemu-img convert -p -t none -T none -f raw 
> /rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/6597eede-9fa0-4451-84fc-9f9c070cb5f3/765fa48b-2e77-4637-b4ca-e1affcd71e48
>  -O raw 
> /rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/9cc99110-70a2-477f-b3ef-1031a912d12b/c2776107-4579-43a6-9d60-93a5ea9c64c5
>  -W
>
> Added the -W flag just to see what would happen:
>
> gdb -p 79913 -batch -ex "t a a bt"
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x7f528d7661f0 in __poll_nocancel () from /lib64/libc.so.6
>
> Thread 1 (Thread 0x7f528e6bb840 (LWP 79913)):
> #0  0x7f528d7661f0 in __poll_nocancel () from /lib64/libc.so.6
> #1  0x7f528dc510fb in sudo_ev_scan_impl () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #2  0x7f528dc49b44 in sudo_ev_loop_v1 () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #3  0x55e94aa0e271 in exec_nopty ()
> #4  0x55e94aa0afda in sudo_execute ()
> #5  0x55e94aa18a12 in run_command ()
> #6  0x55e94aa0969e in main ()
>
>
> And without -W:
>
> gdb -p 85235 -batch -ex "t a a bt"
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x7fc0cc69b1f0 in __poll_nocancel () from /lib64/libc.so.6
>
> Thread 1 (Thread 0x7fc0cd5f0840 (LWP 85235)):
> #0  0x7fc0cc69b1f0 in __poll_nocancel () from /lib64/libc.so.6
> #1  0x7fc0ccb860fb in sudo_ev_scan_impl () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #2  0x7fc0ccb7eb44 in sudo_ev_loop_v1 () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #3  0x5610f4397271 in exec_nopty ()
> #4  0x5610f4393fda in sudo_execute ()
> #5  0x5610f43a1a12 in run_com

[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-20 Thread Benny Zlotnik
t; "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in 
> _dynamicMethod
>   result = 
> fn(*methodArgs)
> File 
> "", line 2, in getAllVmStats
> File 
> "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
>   ret = 
> func(*args, **kwargs)
> File 
> "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1388, in getAllVmStats
>   statsList = 
> self._cif.getAllVmStats()
> File 
> "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 567, in 
> getAllVmStats
>   return 
> [v.getStats() for v in self.vmContainer.values()]
> File 
> "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1766, in getStats
>   oga_stats = 
> self._getGuestStats()
> File 
> "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1967, in 
> _getGuestStats
>   stats = 
> self.guestAgent.getGuestInfo()
> File 
> "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line 505, in 
> getGuestInfo
>   del 
> qga['appsList']
>   KeyError: 
> 'appsList'
>
>
> It's definitely the qemu-img convert that's failing to do anything; this
> is the command from the clone:
> /usr/bin/qemu-img convert -p -t none -T none -f raw 
> /rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/6597eede-9fa0-4451-84fc-9f9c070cb5f3/765fa48b-2e77-4637-b4ca-e1affcd71e48
>  -O raw 
> /rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/f0700631-e60b-4c2a-a6f5-a6c818ae7651/d4fb05ec-7c78-4d89-9a66-614c093c6e16
>
> gdb has blank output for this though. This means 4.3.2 is fairly unusable
> for us, so two questions: can I downgrade to 4.2, and is there a fix coming
> in 4.3.3 for this?
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 10 Apr 2019, at 11:22, Callum Smith  wrote:
>
> Creating a disk on the target share works fine. This seems to be an issue
> specifically with moving a disk to/from a share.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 10 Apr 2019, at 09:53, Callum Smith  wrote:
>
> gdb -p $(pidof qemu-img convert) -batch -ex "t a a bt"
> 289444: No such file or directory.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 10 Apr 2019, at 09:36, Benny Zlotnik  wrote:
>
> Can you run:
> $ gdb -p $(pidof qemu-img convert) -batch -ex "t a a bt"
>
>
> On Wed, Apr 10, 2019 at 11:26 AM Callum Smith  wrote:
>
>
> Dear All,
>
> Further to this, I can't migrate a disk to a different storage domain using
> the GUI. Both disks are configured identically and on the same physical NFS
> provider.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 9 Apr 2019, at 12:12, Callum Smith  wrote:
>
> Dear All,
>
> It would seem this is a bug in 4.3.x, as upgrading the old oVirt HE to 4.3
> (from 4.2.latest) now means that exporting VMs to the export domain no longer
> works.
>
> Again qemu-img convert is using some cpu, but no network. Progress is 0.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
&

[ovirt-users] Re: oVirtNode 4.3.3 - Missing os-brick

2019-04-16 Thread Benny Zlotnik
This is just an info message; if you don't use managed block
storage[1] you can ignore it.

[1] - 
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

On Tue, Apr 16, 2019 at 7:09 PM Stefano Danzi  wrote:
>
> Hello,
>
> I've just upgraded one node host to v. 4.3.3 and I can see this entry in
> the logs every 10 seconds:
>
> ==> /var/log/vdsm/vdsm.log <==
> 2019-04-16 18:03:06,417+0200 INFO  (jsonrpc/5) [root] managedvolume not
> supported: Managed Volume Not Supported. Missing package os-brick.:
> ('Cannot import os_brick',) (caps:150)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A32HK5AZKVITKRIXAC5AV44MMXRDMTEE/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UCNE5FR5NXLD75IGZVPQ35FIZAMGFD4/


[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Benny Zlotnik
2019-04-12 10:39:25,643+0200 ERROR (jsonrpc/0) [virt.vm]
(vmId='71f27df0-f54f-4a2e-a51c-e61aa26b370d') Unable to start
replication for vda to {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'volumeInfo': {'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'type': 'file'}, 'diskType': 'file', 'format': 'cow', 'cache': 'none',
'volumeID': '5c2738a4-4279-4cc3-a0de-6af1095f8879', 'imageID':
'9a66bf0f-1333-4931-ad58-f6f1aa1143be', 'poolID':
'b1a475aa-c084-46e5-b65a-bf4a47143c88', 'device': 'disk', 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'propagateErrors': 'off', 'volumeChain': [{'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2',
'volumeID': u'cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2', 'leasePath':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2.lease',
'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}, {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'volumeID': u'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'leasePath':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879.lease',
'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}]} (vm:4710)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704,
in diskReplicateStart
self._startDriveReplication(drive)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843,
in _startDriveReplication
self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 130, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py",
line 92, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 728, in blockCopy
ret = libvirtmod.virDomainBlockCopy(self._o, disk, destxml, params, flags)
TypeError: block params must be a dictionary


It looks like a bug in libvirt[1]

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1687114

On Fri, Apr 12, 2019 at 12:06 PM Ladislav Humenik
 wrote:
>
> Hello, we have recently updated a few oVirt setups from 4.2.5 to 4.2.8
> (actually 9 ovirt engine nodes), where live storage migration
> stopped working and leaves an auto-generated snapshot behind.
>
> If we power the guest VM down, the migration works as expected. Is there
> a known bug for this? Shall we open a new one?
>
> Setup:
> ovirt - Dell PowerEdge R630
>  - CentOS Linux release 7.6.1810 (Core)
>  - ovirt-engine-4.2.8.2-1.el7.noarch
>  - kernel-3.10.0-957.10.1.el7.x86_64
> hypervisors- Dell PowerEdge R640
>  - CentOS Linux release 7.6.1810 (Core)
>  - kernel-3.10.0-957.10.1.el7.x86_64
>  - vdsm-4.20.46-1.el7.x86_64
>  - libvirt-5.0.0-1.el7.x86_64
>  - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
> storage domain  - netapp NFS share
>
>
> logs are attached
>
> --
> Ladislav Humenik
>
> System administrator
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSKUEPUOPJDSRWYYMZEKAVTZ62YP6UK2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YVDEMZED7TSZNRIV3CURBI3YUKUXV5ZT/


[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-10 Thread Benny Zlotnik
Can you run:
$ gdb -p $(pidof qemu-img convert) -batch -ex "t a a bt"


On Wed, Apr 10, 2019 at 11:26 AM Callum Smith  wrote:
>
> Dear All,
>
> Further to this, I can't migrate a disk to a different storage domain using
> the GUI. Both disks are configured identically and on the same physical NFS
> provider.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 9 Apr 2019, at 12:12, Callum Smith  wrote:
>
> Dear All,
>
> It would seem this is a bug in 4.3.x, as upgrading the old oVirt HE to 4.3
> (from 4.2.latest) now means that exporting VMs to the export domain no longer
> works.
>
> Again qemu-img convert is using some cpu, but no network. Progress is 0.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 8 Apr 2019, at 15:42, Callum Smith  wrote:
>
> Dear All,
>
> We've exported some VMs from our old oVirt infrastructure and want to import 
> them into the new one, but qemu-img appears to be failing. We have mounted an 
> export domain populated from the old oVirt in the new hosted engine and are 
> using the GUI to import the VM. Manually running the command sits at 16% CPU, 
> 0% network usage and no progress. It appears to lock the NFS mount and ls and 
> lsof both hang.
>
> sudo -u vdsm /usr/bin/qemu-img convert -p -t none -T none -f raw <source> -O
> raw <destination>
>
> Conversely a simple cp will work (ruling out file permission errors):
> sudo -u vdsm cp <source> <destination>
>
> What might we be doing wrong?
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZUBKWQIGGSJETPVRWN42R4J7COPFV6GS/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6EE6X43GTAJ6L4QBH2XQJ4LVPIXCZC3T/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GD2XCKEIPNCXXQWMR4OP5IKGCORJQEGB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UYRP6ZFG2IZXBDVIPCJZ7WGARUL5KT4F/


[ovirt-users] Re: cinderlib: VM migration fails

2019-04-08 Thread Benny Zlotnik
Please open a bug for this, with vdsm and supervdsm logs

On Mon, Apr 8, 2019 at 2:13 PM Matthias Leopold
 wrote:
>
> Hi,
>
> after I successfully started my first VM with a cinderlib attached disk
> in oVirt 4.3.2 I now want to test basic operations. I immediately
> learned that migrating this VM (OS disk: iSCSI, 2nd disk: Managed Block)
> fails with a java.lang.NullPointerException (see below) in engine.log.
> This even happens when the cinderlib disk is deactivated.
> Shall I report things like this here, shall I open a bug report or shall
> I just wait because the feature is under development?
>
> thx
> Matthias
>
>
> 2019-04-08 12:57:40,250+02 INFO
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> (default task-66) [4ef05101] cinderlib output: {"driver_volume_type":
> "rbd", "data": {"secret_type": "ceph", "name":
> "ovirt-test/volume-2f053070-f5b7-4f04-856c-87a56d70cd75",
> "auth_enabled": true, "keyring": "[client.ovirt-test_user_rbd]\n\tkey =
> xxx\n", "cluster_name": "ceph", "secret_uuid": null, "hosts":
> ["xxx.xxx.216.45", "xxx.xxx.216.54", "xxx.xxx.216.55"], "volume_id":
> "2f053070-f5b7-4f04-856c-87a56d70cd75", "discard": true,
> "auth_username": "ovirt-test_user_rbd", "ports": ["6789", "6789", "6789"]}}
> 2019-04-08 12:57:40,256+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
> (default task-66) [4ef05101] START,
> AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01,
> AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456',
> vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'}), log
> id: 67d3a79e
> 2019-04-08 12:57:40,262+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
> (default task-66) [4ef05101] Failed in
> 'AttachManagedBlockStorageVolumeVDS' method, for vds: 'ov-test-04-01';
> host: 'ov-test-04-01.foo.bar': null
> 2019-04-08 12:57:40,262+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
> (default task-66) [4ef05101] Command
> 'AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01,
> AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456',
> vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'})'
> execution failed: null
> 2019-04-08 12:57:40,262+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
> (default task-66) [4ef05101] FINISH,
> AttachManagedBlockStorageVolumeVDSCommand, return: , log id: 67d3a79e
> 2019-04-08 12:57:40,310+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-66) [4ef05101] EVENT_ID: VM_MIGRATION_FAILED(65),
> Migration failed  (VM: ovirt-test01.srv, Source: ov-test-04-03).
> 2019-04-08 12:57:40,314+02 INFO
> [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
> [4ef05101] Lock freed to object
> 'EngineLock:{exclusiveLocks='[4a8c9902-f9ab-490f-b1dd-82d9aee63b5f=VM]',
> sharedLocks=''}'
> 2019-04-08 12:57:40,314+02 ERROR
> [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
> [4ef05101] Command 'org.ovirt.engine.core.bll.MigrateVmCommand' failed:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> java.lang.NullPointerException (Failed with error ENGINE and code 5001)
> 2019-04-08 12:57:40,314+02 ERROR
> [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
> [4ef05101] Exception: javax.ejb.EJBException:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> java.lang.NullPointerException (Failed with error ENGINE and code 5001)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HSU2OYTNMR3KU4MN2NV5IAP72G37TYH3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XPIKUU34ZYNF4AMHTIZ3KOOQFQA2E7OH/


[ovirt-users] Re: UI bug viewing/editing host

2019-04-04 Thread Benny Zlotnik
Looks like it was fixed[1]

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1690268

On Thu, Apr 4, 2019 at 1:47 PM Callum Smith  wrote:
>
> 2019-04-04 10:43:35,383Z ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
> task-15) [] Permutation name: 0D2DB7A91B469CC36C64386E5632FAC5
> 2019-04-04 10:43:35,383Z ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
> task-15) [] Uncaught exception: 
> com.google.gwt.core.client.JavaScriptException: (TypeError) : oab(...) is null
> at 
> org.ovirt.engine.ui.webadmin.section.main.view.popup.host.HostPopupView.$lambda$0(HostPopupView.java:693)
> at 
> org.ovirt.engine.ui.webadmin.section.main.view.popup.host.HostPopupView$lambda$0$Type.eventRaised(HostPopupView.java:693)
> at org.ovirt.engine.ui.uicompat.Event.$raise(Event.java:99)
> at 
> org.ovirt.engine.ui.uicommonweb.models.ListModel.$setSelectedItem(ListModel.java:82)
> at 
> org.ovirt.engine.ui.uicommonweb.models.ListModel.setSelectedItem(ListModel.java:78)
> at 
> org.ovirt.engine.ui.uicommonweb.models.ListModel.itemsChanged(ListModel.java:236)
> at 
> org.ovirt.engine.ui.uicommonweb.models.ListModel.$itemsChanged(ListModel.java:224)
> at 
> org.ovirt.engine.ui.uicommonweb.models.ListModel.$setItems(ListModel.java:102)
> at 
> org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel.$updateClusterList(HostModel.java:1037)
> at 
> org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel.$lambda$13(HostModel.java:1017)
> at 
> org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel$lambda$13$Type.onSuccess(HostModel.java:1017)
> at 
> org.ovirt.engine.ui.frontend.Frontend$1.$onSuccess(Frontend.java:227) 
> [frontend.jar:]
> at 
> org.ovirt.engine.ui.frontend.Frontend$1.onSuccess(Frontend.java:227) 
> [frontend.jar:]
> at 
> org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.$onSuccess(OperationProcessor.java:133)
>  [frontend.jar:]
> at 
> org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.onSuccess(OperationProcessor.java:133)
>  [frontend.jar:]
> at 
> org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:270)
>  [frontend.jar:]
> at 
> org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:270)
>  [frontend.jar:]
> at 
> com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198)
>  [gwt-servlet.jar:]
> at 
> com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233) 
> [gwt-servlet.jar:]
> at 
> com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)
>  [gwt-servlet.jar:]
> at 
> Unknown.onreadystatechange<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#hosts-network_interfaces;name=virthyp04.virt.in.bmrc.ox.ac.uk)
> at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236) 
> [gwt-servlet.jar:]
> at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275) 
> [gwt-servlet.jar:]
> at 
> Unknown.Su/<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#hosts-network_interfaces;name=virthyp04.virt.in.bmrc.ox.ac.uk)
> at Unknown.anonymous(Unknown)
>
> 2019-04-04 10:43:40,636Z ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
> task-15) [] Permutation name: 0D2DB7A91B469CC36C64386E5632FAC5
> 2019-04-04 10:43:40,636Z ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
> task-15) [] Uncaught exception: 
> com.google.gwt.event.shared.UmbrellaException: Exception caught: (TypeError) 
> : oab(...) is null
> at java.lang.Throwable.Throwable(Throwable.java:70) [rt.jar:1.8.0_201]
> at 
> java.lang.RuntimeException.RuntimeException(RuntimeException.java:32) 
> [rt.jar:1.8.0_201]
> at 
> com.google.web.bindery.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:64)
>  [gwt-servlet.jar:]
> at 
> com.google.gwt.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:25)
>  [gwt-servlet.jar:]
> at 
> com.google.gwt.event.shared.HandlerManager.$fireEvent(HandlerManager.java:117)
>  [gwt-servlet.jar:]
> at com.google.gwt.user.client.ui.Widget.$fireEvent(Widget.java:127) 
> [gwt-servlet.jar:]
> at com.google.gwt.user.client.ui.Widget.fireEvent(Widget.java:127) 
> [gwt-servlet.jar:]
> at 
> com.google.gwt.event.dom.client.DomEvent.fireNativeEvent(DomEvent.java:110) 
> [gwt-servlet.jar:]
> at 
> com.google.gwt.user.client.ui.Widget.$onBrowserEvent(Widget.java:163) 
> [gwt-servlet.jar:]
> at 
> com.google.gwt.user.client.ui.Widget.onBrowserEvent(Widget.java:163) 
> [gwt-servlet.jar:]
>  

[ovirt-users] Re: vdsClient in oVirt 4.3

2019-04-03 Thread Benny Zlotnik
please note setLegality has been deprecated since 4.1
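
To make the non-deprecated path a bit more concrete, here is a rough sketch
of how to collect the values that the update.json quoted further down
expects. Nothing in it comes from a verified setup: the UUIDs are made-up
placeholders and the comments are my own reading of the API, so double-check
before relying on it.

  # Placeholder identifiers -- replace with your own pool/domain/image/volume
  SP=11111111-1111-1111-1111-111111111111
  SD=22222222-2222-2222-2222-222222222222
  IMG=33333333-3333-3333-3333-333333333333
  VOL=44444444-4444-4444-4444-444444444444

  # As far as I can tell, vol_info.generation has to match the volume's
  # current generation, which Volume getInfo reports:
  vdsm-client Volume getInfo storagepoolID=$SP storagedomainID=$SD \
      imageID=$IMG volumeID=$VOL

  # job_id is just a fresh UUID used to track the SDM job, for example:
  uuidgen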

On Wed, Apr 3, 2019 at 1:45 PM  wrote:
>
> Thanks Liran, it worked perfectly.
>
> Regards.
>
> > On 2019-04-03 11:33, Liran Rotenberg wrote:
> > > I think the closest equivalent to the command you used before is:
> > > $ vdsm-client Volume setLegality storagedomainID=sdUUID storagepoolID=spUUID imageID=imgUUID legality=LEGAL volumeID=volUUID
> > > The values you set should be inside quotes, for example 'LEGAL'.
> >
> > On Wed, Apr 3, 2019 at 1:08 PM  wrote:
> >>
> >> Hi Benny,
> >>
> >> Thanks for the help.
> >>
> >> Could you please tell me what job_uuid and vol_gen should be replaced
> >> by? Should I just put any UUID for the job?
> >>
> >> Thanks.
> >>
> >> On 2019-04-03 09:52, Benny Zlotnik wrote:
> >> > it should be something like this:
> >> >   $ cat update.json
> >> >   {
> >> >   "job_id":"",
> >> >   "vol_info": {
> >> >   "sd_id": "",
> >> >   "img_id": "",
> >> >   "vol_id": "",
> >> >   "generation": ""
> >> >   },
> >> >   "legality": "LEGAL"
> >> >   }
> >> >
> >> >   $ vdsm-client SDM update_volume -f update.json
> >> >
> >> > On Wed, Apr 3, 2019 at 11:48 AM  wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> In oVirt 4.1 we used this command to set a volume as LEGAL:
> >> >>
> >> >>  vdsClient -s  setVolumeLegality sdUUID spUUID imgUUID leafUUID LEGAL
> >> >>
> >> >> What would be the equivalent to this command using vdsm-client in
> >> >> oVirt
> >> >> 4.3?
> >> >>
> >> >> Thanks.
> >> >> ___
> >> >> Users mailing list -- users@ovirt.org
> >> >> To unsubscribe send an email to users-le...@ovirt.org
> >> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> >> oVirt Code of Conduct:
> >> >> https://www.ovirt.org/community/about/community-guidelines/
> >> >> List Archives:
> >> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/T7QYVJMDWNRUOKLOZGEA7QPDBKLX4TO2/
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LUZX6W6KWRZWHDRDJPH6PIEGRVNAGVED/
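
For completeness, the quoted setLegality one-liner above filled in with
placeholder values (the parameter names and the 'LEGAL' quoting are taken
from Liran's reply, the UUIDs are made up) -- keep in mind this is the
deprecated path; the SDM update_volume call quoted above is the one to
prefer:

  vdsm-client Volume setLegality \
      storagepoolID='11111111-1111-1111-1111-111111111111' \
      storagedomainID='22222222-2222-2222-2222-222222222222' \
      imageID='33333333-3333-3333-3333-333333333333' \
      volumeID='44444444-4444-4444-4444-444444444444' \
      legality='LEGAL'
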
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/44GHPYCK6QYCA5Q52EYNDNABQ4N45QB2/

