[ovirt-users] Re: Missing snapshot in the engine

2021-11-08 Thread Benny Zlotnik
Usually the snapshot remains in the engine and is missing in vdsm; I wonder
what happened. Do you have the logs from the delete attempt on
5cb3fe58-3e01-4d32-bc7c-5907a4f858a8?
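You can usually locate the relevant entries by grepping for that volume ID;
a rough sketch, assuming the default log locations (use zgrep for rotated
.gz files):

  # on the host that ran the snapshot delete / live merge
  $ grep -l 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8 /var/log/vdsm/vdsm.log*

  # on the engine machine
  $ grep 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8 /var/log/ovirt-engine/engine.log*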

On Mon, Nov 8, 2021 at 12:21 PM Francesco Lorenzini wrote:

> Hi Benny,
>
> here the output:
>
> can you attach the output of:
>   $ vdsm-tool dump-volume-chains e25db7d0-060a-4046-94b5-235f38097cd8
>
>
> [root@OVIRT-HOST-44 ~]#  vdsm-tool dump-volume-chains
> e25db7d0-060a-4046-94b5-235f38097cd8
>
> Images volume chains (base volume first)
>
>image:0285b926-dff5-4769-bcf5-bbeb886ad817
>
>  - ada65f83-2a16-4ad7-87ad-bc99cb8193fc
>status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
> type: SPARSE, capacity: 134217728, truesize: 36864
>
>
>image:4d79c1da-34f0-44e3-8b92-c4bcb8524d83
>
>  - 5aad30c7-96f0-433d-95c8-2317e5f80045
>status: OK, voltype: INTERNAL, format: COW, legality:
> LEGAL, type: SPARSE, capacity: 214748364800, truesize: 165235134464
>
>  - 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8
>status: OK, voltype: LEAF, format: COW, legality: ILLEGAL,
> type: SPARSE, capacity: 214748364800, truesize: 8759619584
>
>
>image:72b67a6a-0ea3-4101-90cc-a18bcf774717
>
>  - 4506da8b-d73a-46ba-a91e-07e786ae934b
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE, capacity: 32212254720, truesize: 8427077632
>
>
>image:bfc94094-9367-4590-81f0-cc590c8f84ea
>
>  - 53bed4ac-5e59-4376-a611-675f2c888b99
>status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
> type: SPARSE, capacity: 134217728, truesize: 36864
>
>
> as well as:
>   $ psql -U engine -d engine -c "\x on" -c "select * from images where
> image_group_id = '4d79c1da-34f0-44e3-8b92-c4bcb8524d83'"
>
>
> engine=# select * from images where image_group_id  =
> '4d79c1da-34f0-44e3-8b92-c4bcb8524d83';
> -[ RECORD 1 ]-+-
> image_guid| 5aad30c7-96f0-433d-95c8-2317e5f80045
> creation_date | 2021-08-31 11:29:31+02
> size  | 214748364800
> it_guid   | ----
> parentid  | ----
> imagestatus   | 1
> lastmodified  | 2021-10-23 05:15:24.043+02
> vm_snapshot_id| c8285f9f-03fa-4877-90a2-0baabf42f123
> volume_type   | 2
> volume_format | 4
> image_group_id| 4d79c1da-34f0-44e3-8b92-c4bcb8524d83
> _create_date  | 2021-08-31 11:29:31.980191+02
> _update_date  | 2021-11-08 10:19:39.477886+01
> active| t
> volume_classification | 1
> qcow_compat   | 2
>
>
>
> Francesco
>
> Il 08/11/2021 11:05, Benny Zlotnik ha scritto:
>
> can you attach the output of:
>   $ vdsm-tool dump-volume-chains e25db7d0-060a-4046-94b5-235f38097cd8
>
> as well as:
>   $ psql -U engine -d engine -c "\x on" -c "select * from images where
> image_group_id = '4d79c1da-34f0-44e3-8b92-c4bcb8524d83'"
>
>
>
> On Mon, Nov 8, 2021 at 11:58 AM francesco--- via Users wrote:
>
> Hi,
>
> I have an issue with a VM (Windows Server 2016), running on CentOS 8, oVirt 
> host 4.4.8, oVirt engine 4.4.5. I used to perform regular snapshots (deleting 
> the previous one) on this VM, but starting from 25/10 the task fails with the 
> errors that I'll attach at the bottom. The volume ID mentioned in the 
> error... :
>
> [...] vdsm.storage.exception.prepareIllegalVolumeError: Cannot prepare 
> illegal volume: ('5cb3fe58-3e01-4d32-bc7c-5907a4f858a8',) [...]
>
> ... refers to a snapshot's volume, because it is different from and smaller 
> than the current volume shown in the engine UI, which has ID 
> 5aad30c7-96f0-433d-95c8-2317e5f80045:
>
> [root@ovirt-host44 4d79c1da-34f0-44e3-8b92-c4bcb8524d83]# ls -lh
> total 163G
> -rw-rw 1 vdsm kvm 154G Nov  8 10:32 5aad30c7-96f0-433d-95c8-2317e5f80045
> -rw-rw 1 vdsm kvm 1.0M Aug 31 11:49 
> 5aad30c7-96f0-433d-95c8-2317e5f80045.lease
> -rw-r--r-- 1 vdsm kvm  360 Nov  8 10:19 
> 5aad30c7-96f0-433d-95c8-2317e5f80045.meta
> -rw-rw 1 vdsm kvm 8.2G Oct 25 05:16 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8
> -rw-rw 1 vdsm kvm 1.0M Oct 23 05:15 
> 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8.lease
> -rw-r--r-- 1 vdsm kvm  254 Oct 25 05:16 
> 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8.meta
>
>
> It seems that the last working snapshot performed on 25/10 was not 
> completely deleted and is now used as the base for a new snapshot on the 
> host side, but is not listed in the engine.
>
> Any idea? Should I manually merge the snapshot on the host side? If yes, any indications on that?

[ovirt-users] Re: Missing snapshot in the engine

2021-11-08 Thread Benny Zlotnik
can you attach the output of:
  $ vdsm-tool dump-volume-chains e25db7d0-060a-4046-94b5-235f38097cd8

as well as:
  $ psql -U engine -d engine -c "\x on" -c "select * from images where
image_group_id = '4d79c1da-34f0-44e3-8b92-c4bcb8524d83'"



On Mon, Nov 8, 2021 at 11:58 AM francesco--- via Users  wrote:
>
> Hi,
>
> I have an issue with a VM (Windows Server 2016), running on CentOS 8, oVirt 
> host 4.4.8, oVirt engine 4.4.5. I used to perform regular snapshots (deleting 
> the previous one) on this VM, but starting from 25/10 the task fails with the 
> errors that I'll attach at the bottom. The volume ID mentioned in the 
> error... :
>
> [...] vdsm.storage.exception.prepareIllegalVolumeError: Cannot prepare 
> illegal volume: ('5cb3fe58-3e01-4d32-bc7c-5907a4f858a8',) [...]
>
> ... refers to a snapshot's volume, because it is different from and smaller 
> than the current volume shown in the engine UI, which has ID 
> 5aad30c7-96f0-433d-95c8-2317e5f80045:
>
> [root@ovirt-host44 4d79c1da-34f0-44e3-8b92-c4bcb8524d83]# ls -lh
> total 163G
> -rw-rw 1 vdsm kvm 154G Nov  8 10:32 5aad30c7-96f0-433d-95c8-2317e5f80045
> -rw-rw 1 vdsm kvm 1.0M Aug 31 11:49 
> 5aad30c7-96f0-433d-95c8-2317e5f80045.lease
> -rw-r--r-- 1 vdsm kvm  360 Nov  8 10:19 
> 5aad30c7-96f0-433d-95c8-2317e5f80045.meta
> -rw-rw 1 vdsm kvm 8.2G Oct 25 05:16 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8
> -rw-rw 1 vdsm kvm 1.0M Oct 23 05:15 
> 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8.lease
> -rw-r--r-- 1 vdsm kvm  254 Oct 25 05:16 
> 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8.meta
>
>
> It seems that the last working snapshot performed on 25/10 was not 
> completely deleted and is now used as the base for a new snapshot on the 
> host side, but is not listed in the engine.
>
> Any idea? Should I manually merge the snapshot on the host side? If yes, any 
> indications on that?
>
> Thank you for your time,
> Francesco
>
>
>
> --- Engine log during snapshot removal:
>
>
>
> 2021-11-08 10:19:25,751+01 INFO  
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (default 
> task-63) [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Lock Acquired to object 
> 'EngineLock:{exclusiveLocks='[f1d56493-b5e0-480f-87a3-5e7f373712fa=VM]', 
> sharedLocks=''}'
> 2021-11-08 10:19:26,306+01 INFO  
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] 
> (EE-ManagedThreadFactory-engine-Thread-49) 
> [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Running command: 
> CreateSnapshotForVmCommand internal: false. Entities affected :  ID: 
> f1d56493-b5e0-480f-87a3-5e7f373712fa Type: VMAction group 
> MANIPULATE_VM_SNAPSHOTS with role type USER
> 2021-11-08 10:19:26,383+01 INFO  
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] 
> (EE-ManagedThreadFactory-engine-Thread-49) 
> [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Running command: 
> CreateSnapshotDiskCommand internal: true. Entities affected :  ID: 
> f1d56493-b5e0-480f-87a3-5e7f373712fa Type: VMAction group 
> MANIPULATE_VM_SNAPSHOTS with role type USER
> 2021-11-08 10:19:26,503+01 INFO  
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] 
> (EE-ManagedThreadFactory-engine-Thread-49) 
> [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Running command: CreateSnapshotCommand 
> internal: true. Entities affected :  ID: ---- 
> Type: Storage
> 2021-11-08 10:19:26,616+01 INFO  
> [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-49) 
> [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] START, CreateVolumeVDSCommand( 
> CreateVolumeVDSCommandParameters:{storagePoolId='609ff8db-09c5-435b-b2e5-023d57003138',
>  ignoreFailoverLimit='false', 
> storageDomainId='e25db7d0-060a-4046-94b5-235f38097cd8', 
> imageGroupId='4d79c1da-34f0-44e3-8b92-c4bcb8524d83', 
> imageSizeInBytes='214748364800', volumeFormat='COW', 
> newImageId='74e7188d-3727-4ed6-a2e5-dfa73b9e7da3', imageType='Sparse', 
> newImageDescription='', imageInitialSizeInBytes='0', 
> imageId='5aad30c7-96f0-433d-95c8-2317e5f80045', 
> sourceImageGroupId='4d79c1da-34f0-44e3-8b92-c4bcb8524d83', 
> shouldAddBitmaps='false'}), log id: 514e7f02
> 2021-11-08 10:19:26,768+01 INFO  
> [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-49) 
> [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] FINISH, CreateVolumeVDSCommand, 
> return: 74e7188d-3727-4ed6-a2e5-dfa73b9e7da3, log id: 514e7f02
> 2021-11-08 10:19:26,805+01 INFO  
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] 
> (EE-ManagedThreadFactory-engine-Thread-49) 
> [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] CommandAsyncTask::Adding 
> CommandMultiAsyncTasks object for command 
> 'eb1f1fdd-a46e-45e1-a6f0-3a97fe1f6e28'
> 2021-11-08 10:19:26,805+01 INFO  
> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] 
> (EE-ManagedThreadFactory-engine-Thread-49) 
> [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] CommandMultiAsyncTasks::attachTask: 
> Attaching task '4bb54004-f96c-4f14-abca-bea477d866ea' to command 

[ovirt-users] Re: export to export domain concurrency

2021-11-04 Thread Benny Zlotnik
Yes, it should work, did you run into issues?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLAFB5JHDMVJLBXOKD4CMCS7RUFHDJHO/


[ovirt-users] Re: is disk reduce command while VM is active after snapshot deletion save?

2021-11-03 Thread Benny Zlotnik
On Wed, Nov 3, 2021 at 2:51 PM  wrote:
>
> When creating a snapshot, the volume gets two LV extents (2 GB). But after 
> deleting the snapshot, the extents are kept, and with every snapshot creation 
> the volume size increases.
> This is really annoying, especially when using snapshots for nightly backups. 
> We are using oVirt 4.3.10 and we can't upgrade to 4.4.
>
> I have read that to shrink the volume the disk reduce command is what helps 
> here, but the REST API documentation says that this is only applicable while 
> the VM is not running. Crazy as I am, I called the reduce command while the 
> VM was running, and it seemed to work. The volume shrank and the VM 
> didn't crash.
It's actually blocked only for the active volume of a running VM; I
suppose you did not run it on the active volume?
But it seems like the documentation needs to be fixed.

> But is it safe to do so? And why isn't the reduce command called after 
> deletion of a snapshot?
It actually is called, except when the active layer participates in
the live merge (or the SD isn't a block SD).
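For reference, the reduce action is exposed per disk in the REST API; a rough,
untested sketch (engine URL, credentials and the disk ID are placeholders, and
the exact action path should be double-checked against the REST API reference
for your version):

  $ curl -s -k -u admin@internal:PASSWORD \
      -H 'Content-Type: application/xml' -d '<action/>' \
      https://engine.example.com/ovirt-engine/api/disks/DISK-UUID/reduce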
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HWR6O46TQXLC4PO7JYVL3XGZ2RJD6MLW/


[ovirt-users] Re: Failed to delete snapshot

2021-10-28 Thread Benny Zlotnik
We need full logs (what happened before the snippet you pasted), as
well as vdsm logs from the SPM host, and vdsm logs from the host
running this VM
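To narrow it down, you can pull everything belonging to that run by the
correlation id from your snippet; a sketch, assuming the default log locations:

  # on the engine machine
  $ grep 666e0f97-8b02-4b1e-80d4-2a640dd28d90 /var/log/ovirt-engine/engine.log*

  # on the SPM host and on the host running the VM (the engine correlation id
  # is usually propagated to vdsm as the flow id)
  $ grep 666e0f97 /var/log/vdsm/vdsm.log*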

On Wed, Oct 27, 2021 at 5:04 PM  wrote:
>
> Hello all.
>
> We use vProtect to make snapshot backups of our VMs.
> This VM, let's call it OVIRTVM, has its disks created as thin.
> A full snapshot is created on Sunday and the daily ones are incremental.
>
> After the backup, vProtect tries to delete the snapshot and almost always 
> fails with the error:
> "Failed to delete snapshot 'vProtect 2021-10-25 22:30:17.641654' for VM 
> 'OVIRTVM'."
>
> From engine.log I can't get much more information:
>
> 2021-10-26 22:39:02,510+01 INFO  
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-511666) 
> [] User admin@internal successfully logged in with scopes: ovirt-app-api 
> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search 
> ovirt-ext=token-info:validate ovirt-ext=token:password-access
> 2021-10-26 22:39:02,558+01 ERROR 
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-15) 
> [666e0f97-8b02-4b1e-80d4-2a640dd28d90] Ending command 
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
> 2021-10-26 22:39:02,595+01 INFO  
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default 
> task-511666) [1a9981] Running command: CreateUserSessionCommand internal: 
> false.
> 2021-10-26 22:39:02,630+01 INFO  
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-511612) 
> [] User admin@internal successfully logged in with scopes: ovirt-app-api 
> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search 
> ovirt-ext=token-info:validate ovirt-ext=token:password-access
> 2021-10-26 22:39:02,674+01 INFO  
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-511669) 
> [] User admin@internal successfully logged in with scopes: ovirt-app-api 
> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search 
> ovirt-ext=token-info:validate ovirt-ext=token:password-access
> 2021-10-26 22:39:02,729+01 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-15) 
> [666e0f97-8b02-4b1e-80d4-2a640dd28d90] EVENT_ID: 
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot 
> 'vProtect 2021-10-25 22:30:17.641654' for VM 'OVIRTVM'.
> 2021-10-26 22:39:02,739+01 INFO  
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default 
> task-511675) [55891f0c] Running command: CreateUserSessionCommand internal: 
> false.
> 2021-10-26 22:39:02,755+01 INFO  
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default 
> task-511660) [7895798d] Running command: CreateUserSessionCommand internal: 
> false.
>
> What else should I be looking for?
>
> Thanks in advance!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/63FSCAGHIQJTCWONJ3RPCOIWKAFYM7NE/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D4JFU3JXXOLNHM34JCMMQA5D5LUMZ4AJ/


[ovirt-users] Re: about the Live Storage Migration

2021-09-26 Thread Benny Zlotnik
When you move a disk that's attached to a running VM, live storage
migration will be performed.
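In the webadmin UI that is simply Storage > Disks > select the disk > Move
(or the Disks tab of the VM); the same operation through the REST API looks
roughly like this (untested sketch; engine URL, credentials and UUIDs are
placeholders):

  $ curl -s -k -u admin@internal:PASSWORD \
      -H 'Content-Type: application/xml' \
      -d '<action><storage_domain id="TARGET-SD-UUID"/></action>' \
      https://engine.example.com/ovirt-engine/api/disks/DISK-UUID/move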

On Sun, Sep 26, 2021 at 2:08 PM Tommy Sway  wrote:
>
> From the document:
>
>
>
> Overview of Live Storage Migration
>
> Virtual disks can be migrated from one storage domain to another while the 
> virtual machine to which they are attached is running. This is referred to as 
> live storage migration. When a disk attached to a running virtual machine is 
> migrated, a snapshot of that disk’s image chain is created in the source 
> storage domain, and the entire image chain is replicated in the destination 
> storage domain. As such, ensure that you have sufficient storage space in 
> both the source storage domain and the destination storage domain to host 
> both the disk image chain and the snapshot. A new snapshot is created on each 
> live storage migration attempt, even when the migration fails.
>
> Consider the following when using live storage migration:
>
> You can live migrate multiple disks at one time.
>
> Multiple disks for the same virtual machine can reside across more than one 
> storage domain, but the image chain for each disk must reside on a single 
> storage domain.
>
> You can live migrate disks between any two storage domains in the same data 
> center.
>
> You cannot live migrate direct LUN hard disk images or disks marked as 
> shareable.
>
>
>
> But where do users perform online storage migrations?
>
> There seems to be no interface.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GRFF5WF7TEWL3P66LA24C5NJWDAR5JUP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4YHBCFSFBUDMUHJI7XEHJFPVZILXTHL/


[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Benny Zlotnik
Can you submit a bug for this?

On Wed, Sep 22, 2021 at 3:31 PM Shantur Rathore
 wrote:
>
> Hi all,
>
> Anyone tried using Templates with Managed Block Storage?
> I created a VM on MBS and then took a snapshot.
> This worked but as soon as I created a Template from snapshot, the
> template got created but there is no disk attached to the template.
>
> Anyone seeing something similar?
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6SPHZ3XOSXRYE72SWRANTXZCA27RKDY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CUZJYWSQ4ZVHZ67BCETJC7MSOOGGNCBT/


[ovirt-users] Re: about the vm disk type

2021-09-22 Thread Benny Zlotnik
File-based domains use RAW for both settings; thin-provisioned disks on a block
domain will use qcow2, otherwise RAW will be used.
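You can confirm what was actually created by checking the volume itself; a
sketch for a file-based domain (all path components are placeholders for your
environment):

  $ qemu-img info /rhev/data-center/mnt/STORAGE/SD-UUID/images/IMAGE-UUID/VOLUME-UUID
  # "file format: raw" or "file format: qcow2" is the authoritative answer,
  # and should match the FORMAT= line in the volume's .meta file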

On Wed, Sep 22, 2021 at 1:22 PM Tommy Sway  wrote:

> For example :
>
>
>
> And I check the file on the storage:
>
>
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# cat
> 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
>
> CAP=10737418240
>
> CTIME=1632305740
>
> DESCRIPTION={"DiskAlias":"test09222_Disk1","DiskDescription":""}
>
> DISKTYPE=DATA
>
> DOMAIN=f77091d9-aabc-42db-87b1-b8299765482e
>
> *FORMAT=RAW*
>
> GEN=0
>
> IMAGE=51dcbfae-1100-4e43-9e0a-bb8c578623d7
>
> LEGALITY=LEGAL
>
> PUUID=----
>
> TYPE=SPARSE
>
> VOLTYPE=LEAF
>
> EOF
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# ll
>
> total 1025
>
> -rw-rw. 1 vdsm kvm 10737418240 Sep 22 18:15
> 9e4dc022-c450-4f85-89f5-233fa41c07d0
>
> -rw-rw. 1 vdsm kvm 1048576 Sep 22 18:15
> 9e4dc022-c450-4f85-89f5-233fa41c07d0.lease
>
> -rw-r--r--. 1 vdsm kvm 303 Sep 22 18:15
> 9e4dc022-c450-4f85-89f5-233fa41c07d0.meta
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]# du -h
> ./9e4dc022-c450-4f85-89f5-233fa41c07d0
>
> 0   ./9e4dc022-c450-4f85-89f5-233fa41c07d0
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#
>
> [root@olvms1 51dcbfae-1100-4e43-9e0a-bb8c578623d7]#
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of
> Tommy Sway
> Sent: Wednesday, September 22, 2021 6:07 PM
> To: 'Vojtech Juranek' ; users@ovirt.org
> Subject: [ovirt-users] Re: about the vm disk type
>
>
>
> You mean if it's pre-allocated, it must be RAW, not QCOW2?
>
> The documentation only states that RAW must be pre-allocated, but it does
> not say that QCOW2 cannot use pre-allocation.
>
>
>
>
>
>
>
>
>
>
>
> -Original Message-
>
> From: Vojtech Juranek 
>
> Sent: Wednesday, September 22, 2021 6:04 PM
>
> To: users@ovirt.org
>
> Cc: Tommy Sway 
>
> Subject: Re: [ovirt-users] about the vm disk type
>
>
>
> On Wednesday, 22 September 2021 09:55:26 CEST Tommy Sway wrote:
>
> > When I create the VM's image disk, I am not asked to select the
>
> > following type of disk.
>
>
>
> Actually you are, it's "Allocation Policy" drop down menu.
>
> Thin provisioned == qcow format
>
> Preallocated == raw
>
>
>
> >
>
> >
>
> > What is the default value ?
>
>
>
> Thin provisioned, i.e. qcow.
>
>
>
> >
>
> >
>
> > Thanks.
>
> >
>
> >
>
> >
>
> >
>
> >
>
> > QCOW2 Formatted Virtual Machine Storage
>
> >
>
> > QCOW2 is a storage format for virtual disks. QCOW stands for QEMU
>
> > copy-on-write. The QCOW2 format decouples the physical storage layer
>
> > from the virtual layer by adding a mapping between logical and
>
> > physical
>
> blocks.
>
> > Each logical block is mapped to its physical offset, which enables
>
> > storage over-commitment and virtual machine snapshots, where each QCOW
>
> > volume only represents changes made to an underlying virtual disk.
>
> >
>
> > The initial mapping points all logical blocks to the offsets in the
>
> > backing file or volume. When a virtual machine writes data to a QCOW2
>
> > volume after a snapshot, the relevant block is read from the backing
>
> > volume, modified with the new information and written into a new
>
> > snapshot QCOW2 volume. Then the map is updated to point to the new place.
>
> >
>
> > Raw
>
> >
>
> > The raw storage format has a performance advantage over QCOW2 in that
>
> > no formatting is applied to virtual disks stored in the raw format.
>
> > Virtual machine data operations on virtual disks stored in raw format
>
> > require no additional work from hosts. When a virtual machine writes
>
> > data to a given offset in its virtual disk, the I/O is written to the
>
> > same offset on the backing file or logical volume.
>
> >
>
> > Raw format requires that the entire space of the defined image be
>
> > preallocated unless using externally managed thin provisioned LUNs
>
> > from a storage array.
>
>
>
>
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:
> https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGJX4VUOYVBG6AWPKWVMILXINNOFFO2V/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 

[ovirt-users] Re: Managed Block Storage issues

2021-09-22 Thread Benny Zlotnik
I see the rule is created in the logs:

MainProcess|jsonrpc/5::DEBUG::2021-09-22
10:39:37,504::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper)
call add_managed_udev_rule with
('ed1a0e9f-4d30-4896-b965-534861cc0c02',
'/dev/mapper/360014054b727813d1bc4d4cefdade7db') {}
MainProcess|jsonrpc/5::DEBUG::2021-09-22
10:39:37,505::udev::124::SuperVdsm.ServerCallback::(add_managed_udev_rule)
Creating rule 
/etc/udev/rules.d/99-vdsm-managed_ed1a0e9f-4d30-4896-b965-534861cc0c02.rules:
'SYMLINK=="mapper/360014054b727813d1bc4d4cefdade7db",
RUN+="/usr/bin/chown vdsm:qemu $env{DEVNAME}"\n'

While we no longer test backends other than ceph, this used to work
back when we started and it worked for NetApp. Perhaps this rule is
incorrect, can you check this manually?
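Something along these lines should tell you whether the rule actually fires;
a sketch, the device name is taken from the log above, adjust it to the
failing disk:

  # does the managed symlink exist, and who owns the dm device?
  $ ls -l /dev/mapper/360014054b727813d1bc4d4cefdade7db /dev/dm-*

  # replay udev processing for that device and look for the chown rule
  $ udevadm test "$(udevadm info -q path -n /dev/mapper/360014054b727813d1bc4d4cefdade7db)" 2>&1 | grep -i vdsm

  # or re-trigger change events for block devices and re-check ownership
  $ udevadm trigger --action=change --subsystem-match=block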

regarding 2, can you please submit a bug?

On Wed, Sep 22, 2021 at 1:03 PM Shantur Rathore
 wrote:
>
> Hi all,
>
> I am trying to set up Managed block storage and have the following issues.
>
> My setup:
> Latest oVirt Node NG : 4.4.8
> Latest oVirt Engine : 4.4.8
>
> 1. Unable to copy to iSCSI based block storage
>
> I created an MBS with Synology UC3200 as a backend (supported by
> Cinderlib). It was created fine, but when I try to copy disks to it,
> it fails.
> Upon looking at the logs from the SPM, I found "qemu-img" failed with an
> error that it cannot open "/dev/mapper/xx": Permission Error.
> Having a look through the code and digging deeper, I saw that
> a. Sometimes the /dev/mapper/ symlink isn't created (log attached)
> b. The ownership of /dev/mapper/xx and /dev/dm-xx for the new
> device always stays at root:root
>
> I added a udev rule
> ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
> OWNER="vdsm", MODE="0660"
>
> and the disk copied correctly when /dev/mapper/x got created.
>
> 2. Copy progress finishes in the UI much earlier than the actual qemu-img process.
> The UI shows the copy process completed successfully but it's
> actually still copying the image.
> This happens both for ceph and iSCSI based MBS.
>
> Is there any known workaround to get iSCSI MBS working?
>
> Kind regards,
> Shantur
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6TMTW23SUAKR4UOXVSZKXHJY3PVMIDD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFELPIEEW2J4DVEBUNJPMQGMAR5JBKL4/


[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-14 Thread Benny Zlotnik
Yes, but please be aware that 4.3 has been EOL for a while; there have been
significant changes between 4.3 and 4.4 in all relevant components.

On Tue, Sep 14, 2021 at 2:39 PM Tommy Sway  wrote:
>
> As I understand it now, on 4.3 I need to enable it like this:
> 1. Activate the repo source and install the cinder-related software
> 2. When running engine-setup, add the -s ManagedBlockDomainSupported=true option
> 3. Add the Managed Block Domain after installation
> Do I understand this correctly?
>
> Thank you very much!
>
>
>
>
> -----Original Message-
> From: Benny Zlotnik 
> Sent: Tuesday, September 14, 2021 7:19 PM
> To: Tommy Sway 
> Cc: users 
> Subject: Re: [ovirt-users] Re: what kind of managed block can oVirt manage ?
>
> On Tue, Sep 14, 2021 at 2:11 PM Tommy Sway  wrote:
> >
> > Do you mean that I don't need to manually add the cinder-related repo in the new 
> > version, and the engine-setup process will automatically add the cinder-related 
> > repo and install the package?
> it will add the repo but will not install the package as we had to revert 
> this for now
> > All I need to do is select CinderLib option during Engine setup?
> in 4.3 you also need to enable:
> $ engine-config -s ManagedBlockDomainSupported=true
> > Then I can add Managed Block Domains in the admin interface?
> after enabling, yes
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VYRJKXLFZ7EKNKVRZGCDIMTLK3SDXZKC/


[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-14 Thread Benny Zlotnik
On Tue, Sep 14, 2021 at 2:11 PM Tommy Sway  wrote:
>
> Do you mean that I don't need to manually add the cinder-related repo in the new 
> version, and the engine-setup process will automatically add the cinder-related repo 
> and install the package?
it will add the repo but will not install the package as we had to
revert this for now
> All I need to do is select CinderLib option during Engine setup?
in 4.3 you also need to enable:
$ engine-config -s ManagedBlockDomainSupported=true
> Then I can add Managed Block Domains in the admin interface?
after enabling, yes
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5AO77GRXOBKFWYGMB7LVJMLONYRUKB57/


[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-14 Thread Benny Zlotnik
No, the difference is that we set up the required repos (we also
installed the dependencies, but we had to revert this for now). The
setup of a Managed Block Storage Domain remains the same.

On Tue, Sep 14, 2021 at 1:49 PM Tommy Sway  wrote:
>
> Do you mean that in the new version, I only need to configure cinderlib when 
> running ovirt-engine, and then I can connect to the Ceph server directly in the 
> admin interface?
>
>
>
>
>
>
>
>
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of Benny 
> Zlotnik
> Sent: Tuesday, September 14, 2021 4:56 PM
> To: Tommy Sway 
> Cc: users 
> Subject: [ovirt-users] Re: what kind of managed block can oVirt manage ?
>
> I see, if you use 4.3 you will have to add the repos manually on the 
> ovirt-engine node and vdsm hosts. If you did not enable cinderlib in 
> engine-setup previously you have to do that, yes.
>
> On Tue, Sep 14, 2021 at 11:43 AM Tommy Sway  wrote:
> >
> > But my system version is 4.3, so how can I activate it?
> > Should I install as documented before running engine-setup?
> > It seems that cinderlib is also given a database to create while running 
> > engine-setup, so I guess the setup config is maybe also important.
> >
> >
> >
> >
> >
> > -Original Message-
> > From: users-boun...@ovirt.org  On Behalf Of
> > Benny Zlotnik
> > Sent: Tuesday, September 14, 2021 4:09 PM
> > To: Tommy Sway 
> > Cc: users 
> > Subject: [ovirt-users] Re: what kind of managed block can oVirt manage ?
> >
> > If it's already enabled there's no need to run it again. I looked at
> > the doc again now, and it's slightly outdated, since 4.4.8 we add the
> > required openstack (victoria) and ceph repos automatically
> >
> > On Tue, Sep 14, 2021 at 8:18 AM Tommy Sway  wrote:
> > >
> > > Thank you very much!
> > >
> > > I read the documentation and found out that you are one of the authors of 
> > > this feature! I guess I asked the right person.
> > >
> > > After installing cinderlib as you mentioned in the second link, do I 
> > > still need to run engine-setup and integrate cinderlib to use Managed 
> > > Block ?
> > >
> > >
> > >
> > >
> > >
> > > [root@olvmm ~]#  engine-setup --reconfigure-optional-components
> > >
> > > [ INFO  ] Stage: Initializing
> > >
> > > [ INFO  ] Stage: Environment setup
> > >
> > >   Configuration files:
> > > ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
> > > '/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
> > > '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
> > >
> > >   Log file:
> > > /var/log/ovirt-engine/setup/ovirt-engine-setup-20210914130015-1qjhwx
> > > .l
> > > og
> > >
> > >   Version: otopi-1.8.4 (otopi-1.8.4-1.el7)
> > >
> > > [ INFO  ] Stage: Environment packages setup
> > >
> > > [ INFO  ] Stage: Programs detection
> > >
> > > [ INFO  ] Stage: Environment setup (late)
> > >
> > > [ INFO  ] Stage: Environment customization
> > >
> > >
> > >
> > >  --== PRODUCT OPTIONS ==--
> > >
> > >
> > >
> > >   Set up Cinderlib integration
> > >
> > >   (Currently in tech preview)
> > >
> > >   (Yes, No) [No]: Yes
> > >
> > > [ INFO  ] ovirt-provider-ovn already installed, skipping.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > From: Benny Zlotnik 
> > > Sent: Monday, September 13, 2021 9:20 PM
> > > To: Tommy Sway 
> > > Cc: users 
> > > Subject: Re: [ovirt-users] what kind of managed block can oVirt manage ?
> > >
> > >
> > >
> > > cinderlib (Managed Block Storage) does not use openstack at all, we
> > > have an example of how to add ceph in the feature page[1]
> > >
> > > and docs have instructions on how to set it up[2]
> > >
> > >
> > >
> > >
> > >
> > > [1]
> > > https://www.ovirt.org/develop/release-management/features/storage/ci
> > > nd
> > > erlib-integration.html
> > >
> > > [2]
> > > https://www.ovirt.org/documentation/installing_ovirt_as_a_standalone
> > > _m anager_with_local_databases/#Set_up_Cinderlib
> > >
> 

[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-14 Thread Benny Zlotnik
I see, if you use 4.3 you will have to add the repos manually on the
ovirt-engine node and vdsm hosts.
If you did not enable cinderlib in engine-setup previously you have to
do that, yes.

On Tue, Sep 14, 2021 at 11:43 AM Tommy Sway  wrote:
>
> But my system version is 4.3, so how can I activate it?
> Should I install as documented before running engine-setup?
> It seems that cinderlib is also given a database to create while running 
> engine-setup, so I guess the setup config is maybe also important.
>
>
>
>
>
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of Benny 
> Zlotnik
> Sent: Tuesday, September 14, 2021 4:09 PM
> To: Tommy Sway 
> Cc: users 
> Subject: [ovirt-users] Re: what kind of managed block can oVirt manage ?
>
> If it's already enabled there's no need to run it again. I looked at the doc 
> again now, and it's slightly outdated, since 4.4.8 we add the required 
> openstack (victoria) and ceph repos automatically
>
> On Tue, Sep 14, 2021 at 8:18 AM Tommy Sway  wrote:
> >
> > Thank you very much!
> >
> > I read the documentation and found out that you are one of the authors of 
> > this feature! I guess I asked the right person.
> >
> > After installing cinderlib as you mentioned in the second link, do I 
> > still need to run engine-setup and integrate cinderlib to use Managed Block 
> > ?
> >
> >
> >
> >
> >
> > [root@olvmm ~]#  engine-setup --reconfigure-optional-components
> >
> > [ INFO  ] Stage: Initializing
> >
> > [ INFO  ] Stage: Environment setup
> >
> >   Configuration files:
> > ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
> > '/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
> > '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
> >
> >   Log file:
> > /var/log/ovirt-engine/setup/ovirt-engine-setup-20210914130015-1qjhwx.l
> > og
> >
> >   Version: otopi-1.8.4 (otopi-1.8.4-1.el7)
> >
> > [ INFO  ] Stage: Environment packages setup
> >
> > [ INFO  ] Stage: Programs detection
> >
> > [ INFO  ] Stage: Environment setup (late)
> >
> > [ INFO  ] Stage: Environment customization
> >
> >
> >
> >  --== PRODUCT OPTIONS ==--
> >
> >
> >
> >   Set up Cinderlib integration
> >
> >   (Currently in tech preview)
> >
> >   (Yes, No) [No]: Yes
> >
> > [ INFO  ] ovirt-provider-ovn already installed, skipping.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > From: Benny Zlotnik 
> > Sent: Monday, September 13, 2021 9:20 PM
> > To: Tommy Sway 
> > Cc: users 
> > Subject: Re: [ovirt-users] what kind of managed block can oVirt manage ?
> >
> >
> >
> > cinderlib (Managed Block Storage) does not use openstack at all, we
> > have an example of how to add ceph in the feature page[1]
> >
> > and docs have instructions on how to set it up[2]
> >
> >
> >
> >
> >
> > [1]
> > https://www.ovirt.org/develop/release-management/features/storage/cind
> > erlib-integration.html
> >
> > [2]
> > https://www.ovirt.org/documentation/installing_ovirt_as_a_standalone_m
> > anager_with_local_databases/#Set_up_Cinderlib
> >
> >
> >
> > On Mon, Sep 13, 2021 at 2:43 PM Tommy Sway  wrote:
> >
> > You mean that to configure ceph in the Cinder pages it must be connected to a 
> > real openstack?
> >
> > Can ceph be connected to a Managed Block page by simply linking to 
> > cinderlib files without accessing the actual openstack?
> >
> >
> >
> > I am very interested in this section, can you send some related guide 
> > documents?
> >
> > Thank you very much!
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > From: Benny Zlotnik 
> > Sent: Monday, September 13, 2021 7:27 PM
> > To: Tommy Sway 
> > Cc: users 
> > Subject: Re: [ovirt-users] what kind of managed block can oVirt manage ?
> >
> >
> >
> > cinder uses an actual openstack environment setup with cinder,
> > cinderlib does not require it
> >
> >
> >
> > On Mon, Sep 13, 2021 at 2:17 PM Tommy Sway  wrote:
> >
> >
> >
> > What's the difference between the cinder page and the Managed Block page?
> >
> > If I have to connect them through cinderlib, why not put them all under the
> > cinder page?
> >
> >
> >
> >
> >
> >
> >
>

[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-14 Thread Benny Zlotnik
If it's already enabled there's no need to run it again. I looked at
the doc again now, and it's slightly outdated, since 4.4.8 we add the
required openstack (victoria) and ceph repos automatically

On Tue, Sep 14, 2021 at 8:18 AM Tommy Sway  wrote:
>
> Thank you very much!
>
> I read the documentation and found out that you are one of the authors of 
> this feature! I guess I asked the right person.
>
> After installing cinderlib as you mentioned in the second link, do I still 
> need to run engine-setup and integrate cinderlib to use Managed Block ?
>
>
>
>
>
> [root@olvmm ~]#  engine-setup --reconfigure-optional-components
>
> [ INFO  ] Stage: Initializing
>
> [ INFO  ] Stage: Environment setup
>
>   Configuration files: 
> ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', 
> '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', 
> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>
>   Log file: 
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20210914130015-1qjhwx.log
>
>   Version: otopi-1.8.4 (otopi-1.8.4-1.el7)
>
> [ INFO  ] Stage: Environment packages setup
>
> [ INFO  ] Stage: Programs detection
>
> [ INFO  ] Stage: Environment setup (late)
>
> [ INFO  ] Stage: Environment customization
>
>
>
>  --== PRODUCT OPTIONS ==--
>
>
>
>   Set up Cinderlib integration
>
>   (Currently in tech preview)
>
>   (Yes, No) [No]: Yes
>
> [ INFO  ] ovirt-provider-ovn already installed, skipping.
>
>
>
>
>
>
>
>
>
> From: Benny Zlotnik 
> Sent: Monday, September 13, 2021 9:20 PM
> To: Tommy Sway 
> Cc: users 
> Subject: Re: [ovirt-users] what kind of managed block can oVirt manage ?
>
>
>
> cinderlib (Managed Block Storage) does not use openstack at all, we have an 
> example of how to add ceph in the feature page[1]
>
> and docs have instructions on how to set it up[2]
>
>
>
>
>
> [1] 
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> [2] 
> https://www.ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/#Set_up_Cinderlib
>
>
>
> On Mon, Sep 13, 2021 at 2:43 PM Tommy Sway  wrote:
>
> You mean that to configure ceph in the Cinder pages it must be connected to a real 
> openstack?
>
> Can ceph be connected to a Managed Block page by simply linking to cinderlib 
> files without accessing the actual openstack?
>
>
>
> I am very interested in this section, can you send some related guide 
> documents?
>
> Thank you very much!
>
>
>
>
>
>
>
>
>
> From: Benny Zlotnik 
> Sent: Monday, September 13, 2021 7:27 PM
> To: Tommy Sway 
> Cc: users 
> Subject: Re: [ovirt-users] what kind of managed block can oVirt manage ?
>
>
>
> cinder uses an actual openstack environment setup with cinder, cinderlib does 
> not require it
>
>
>
> On Mon, Sep 13, 2021 at 2:17 PM Tommy Sway  wrote:
>
>
>
> What's the difference between the cinder page and the Managed Block page?
>
> If I have to connect them through cinderlib, why not put them all under the cinder 
> page?
>
>
>
>
>
>
>
> From: Benny Zlotnik 
> Sent: Monday, September 13, 2021 6:14 PM
> To: Tommy Sway 
> Cc: users 
> Subject: Re: [ovirt-users] what kind of managed block can oVirt manage ?
>
>
>
> yes, we support ceph via cinderlib, so in theory any vendor with a storage 
> driver for cinder can work, but we only test ceph
>
>
>
> On Mon, Sep 13, 2021 at 1:06 PM Tommy Sway  wrote:
>
> On the create disk page, there is an option to create a disk from managed 
> block storage; I want to know what kind it is? A Ceph block device?
>
>
>
> Thanks!
>
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSQSTL5SR5TE6DKAJZLECVI52OW6ZLXZ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W4SI72QYXNBL5WMDKTVWFGQ53IOCMFEC/


[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-13 Thread Benny Zlotnik
cinderlib (Managed Block Storage) does not use openstack at all, we have an
example of how to add ceph in the feature page[1]
and docs have instructions on how to set it up[2]


[1]
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
[2]
https://www.ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/#Set_up_Cinderlib

On Mon, Sep 13, 2021 at 2:43 PM Tommy Sway  wrote:

> You mean that to configure ceph in the Cinder pages it must be connected to a real
> openstack?
>
> Can ceph be connected to a Managed Block page by simply linking to
> cinderlib files without accessing the actual openstack?
>
>
>
> I am very interested in this section, can you send some related guide
> documents?
>
> Thank you very much!
>
>
>
>
>
>
>
>
>
> *From:* Benny Zlotnik 
> *Sent:* Monday, September 13, 2021 7:27 PM
> *To:* Tommy Sway 
> *Cc:* users 
> *Subject:* Re: [ovirt-users] what kind of managed block can oVirt manage ?
>
>
>
> cinder uses an actual openstack environment setup with cinder, cinderlib
> does not require it
>
>
>
> On Mon, Sep 13, 2021 at 2:17 PM Tommy Sway  wrote:
>
>
>
> What's the difference between the cinder page and the Managed Block page?
>
> If I have to connect them through cinderlib, why not put them all under the
> cinder page?
>
>
>
>
>
>
>
> *From:* Benny Zlotnik 
> *Sent:* Monday, September 13, 2021 6:14 PM
> *To:* Tommy Sway 
> *Cc:* users 
> *Subject:* Re: [ovirt-users] what kind of managed block can oVirt manage ?
>
>
>
> yes, we support ceph via cinderlib, so in theory any vendor with a storage
> driver for cinder can work, but we only test ceph
>
>
>
> On Mon, Sep 13, 2021 at 1:06 PM Tommy Sway  wrote:
>
> On the create disk page, there is an option to create a disk from
> managed block storage; I want to know what kind it is? A Ceph block device?
>
>
>
> Thanks!
>
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSQSTL5SR5TE6DKAJZLECVI52OW6ZLXZ/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T7ROLYYPKF3DIVSUWYZUHNC3V2DGWCIB/


[ovirt-users] Re: Create template from snapshot of vm using MBS disk

2021-09-13 Thread Benny Zlotnik
use this link https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
Set the component to BLL.Storage and the oVirt Team to Storage

On Mon, Sep 13, 2021 at 4:17 AM  wrote:
>
> How can I file the bug? Do you have a guide?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XCLAD43GLAQES3Q6LRPBRVMWLYVLDHTS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HDHLCGDPPJOWHXN75QWZPTMDO7QYH6X/


[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-13 Thread Benny Zlotnik
cinder uses an actual openstack environment setup with cinder, cinderlib
does not require it

On Mon, Sep 13, 2021 at 2:17 PM Tommy Sway  wrote:

>
>
> What's the difference between the cinder page and the Managed Block page?
>
> If I have to connect them through cinderlib, why not put them all under the
> cinder page?
>
>
>
>
>
>
>
> *From:* Benny Zlotnik 
> *Sent:* Monday, September 13, 2021 6:14 PM
> *To:* Tommy Sway 
> *Cc:* users 
> *Subject:* Re: [ovirt-users] what kind of managed block can oVirt manage ?
>
>
>
> yes, we support ceph via cinderlib, so in theory any vendor with a storage
> driver for cinder can work, but we only test ceph
>
>
>
> On Mon, Sep 13, 2021 at 1:06 PM Tommy Sway  wrote:
>
> On the create disk page, there is an option to create a disk from
> managed block storage; I want to know what kind it is? A Ceph block device?
>
>
>
> Thanks!
>
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSQSTL5SR5TE6DKAJZLECVI52OW6ZLXZ/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QC6TOVO4S235X5ZR6G7ECUHZHD5BW6BE/


[ovirt-users] Re: what kind of managed block can oVirt manage ?

2021-09-13 Thread Benny Zlotnik
yes, we support ceph via cinderlib, so in theory any vendor with a storage
driver for cinder can work, but we only test ceph

On Mon, Sep 13, 2021 at 1:06 PM Tommy Sway  wrote:

> On the create disk page, there is an option to create a disk from
> managed block storage; I want to know what kind it is? A Ceph block device?
>
>
>
> Thanks!
>
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSQSTL5SR5TE6DKAJZLECVI52OW6ZLXZ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IK5OUDEIVSOHGY3NLEN3HBP7KVVPZST3/


[ovirt-users] Re: Create template from snapshot of vm using MBS disk

2021-09-10 Thread Benny Zlotnik
I recall a bug was created for this by our QE, but I can't find it.
Can you please file a bug so it is tracked and prioritized?

On Fri, Sep 10, 2021 at 5:42 AM  wrote:
>
> Hi,
>
> Has this problem been resolved?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KMV3UA5YMHPWIOUIUE3RABZDVE2LSCA4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7H74BKMRLWAKAEJ2Q5C7HJIILAB45ZPG/


[ovirt-users] Re: Cinderlib RBD ceph template issues

2021-09-01 Thread Benny Zlotnik
Hi,

Can you please submit a bug[1] with all logs attached?


[1] https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

On Wed, Sep 1, 2021 at 6:04 PM Sketch  wrote:
>
> This is on oVirt 4.4.8, engine on CS8, hosts on C8, cluster and DC are
> both set to 4.6.
>
> With a newly configured cinderlib/ceph RBD setup, I can create new VM
> images and copy existing VM images, but I can't copy existing template
> images to RBD.  When I try, I get this error in cinderlib.log (see
> below), which sounds like the disk already exists there, but it definitely
> does not.  This leaves me unable to create new VMs on RBD, only migrate
> existing VM disks.
>
> 2021-09-01 04:31:05,881 - cinder.volume.driver - INFO - Driver hasn't 
> implemented _init_vendor_properties()
> 2021-09-01 04:31:05,882 - cinderlib-client - INFO - Creating volume 
> '0e8b9aca-1eb1-4837-ac9e-cb3d8f4c1676', with size '500' GB [5c5d0a6b]
> 2021-09-01 04:31:05,943 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'create_volume': Entity '<class 'cinder.db.sqlalchemy.models.Volume'>' has no property 'glance_metadata' 
> [5c5d0a6b]
> 2021-09-01 04:31:05,944 - cinder - CRITICAL - Unhandled error
> Traceback (most recent call last):
>File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 455, in 
> create
>  self._raise_with_resource()
>File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 222, in 
> _raise_with_resource
>  six.reraise(*exc_info)
>File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
>  raise value
>File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 448, in 
> create
>  model_update = self.backend.driver.create_volume(self._ovo)
>File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 
> 986, in create_volume
>  features=client.features)
>File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in 
> doit
>  result = proxy_call(self._autowrap, f, *args, **kwargs)
>File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in 
> proxy_call
>  rv = execute(f, *args, **kwargs)
>File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in 
> execute
>  six.reraise(c, e, tb)
>File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
>  raise value
>File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in 
> tworker
>  rv = meth(*args, **kwargs)
>File "rbd.pyx", line 629, in rbd.RBD.create
> rbd.ImageExists: [errno 17] RBD image already exists (error creating image)
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 
> 399, in _entity_descriptor
>  return getattr(entity, key)
> AttributeError: type object 'Volume' has no attribute 'glance_metadata'
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>File "./cinderlib-client.py", line 170, in main
>  args.command(args)
>File "./cinderlib-client.py", line 208, in create_volume
>  backend.create_volume(int(args.size), id=args.volume_id)
>File "/usr/lib/python3.6/site-packages/cinderlib/cinderlib.py", line 175, 
> in create_volume
>  vol.create()
>File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 457, in 
> create
>  self.save()
>File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 628, in 
> save
>  self.persistence.set_volume(self)
>File "/usr/lib/python3.6/site-packages/cinderlib/persistence/dbms.py", 
> line 254, in set_volume
>  self.db.volume_update(objects.CONTEXT, volume.id, changed)
>File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 
> 236, in wrapper
>  return f(*args, **kwargs)
>File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 
> 184, in wrapper
>  return f(*args, **kwargs)
>File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 
> 2570, in volume_update
>  result = query.filter_by(id=volume_id).update(values)
>File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 
> 3818, in update
>  update_op.exec_()
>File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", 
> line 1670, in exec_
>  self._do_pre_synchronize()
>File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", 
> line 1743, in _do_pre_synchronize
>  self._additional_evaluators(evaluator_compiler)
>File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", 
> line 1912, in _additional_evaluators
>  values = self._resolved_values_keys_as_propnames
>File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", 
> line 1831, in _resolved_values_keys_as_propnames
>  for k, v in self._resolved_values:
>

[ovirt-users] Re: Impossible to move disk after a previous disk move failed

2021-08-24 Thread Benny Zlotnik
c23a5bef-48e0-46c7-9d5b-93c97f0240c0 is the target storage domain?
if the disk is still on the source storage domain in ovirt-engine, you
can remove the LV manually with lvremove, after making sure the source
is correct with
$ vdsm-client Volume getInfo
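Roughly like this (a sketch only, untested; the pool and source-domain UUIDs
are placeholders you need to fill in, the VG/LV names below are taken from
your log):

  # first confirm the volume on the *source* domain is intact
  $ vdsm-client Volume getInfo storagepoolID=POOL-UUID \
        storagedomainID=SOURCE-SD-UUID \
        imageID=2172a4ac-6992-4cc2-be1b-6b9290bc9798 \
        volumeID=432ceb20-efb7-4a40-8431-1b5c825a6168

  # then, on the SPM host, remove the leftover LV on the target domain
  $ lvremove c23a5bef-48e0-46c7-9d5b-93c97f0240c0/432ceb20-efb7-4a40-8431-1b5c825a6168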

Do you know why the move failed? When a move fails it should clean up the
target (unless there was no access to the storage).

On Mon, Aug 23, 2021 at 10:47 PM James Wadsworth
 wrote:
>
> This is the log of when it fails
>
> 2021-08-23 21:24:10,667+0200 WARN  (tasks/0) [storage.LVM] Command with 
> specific filter failed or returned no data, retrying with a wider filter: LVM 
> command failed: 'cmd=[\'/sbin/lvm\', \'lvcreate\', \'--config\', \'devices {  
> preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1  
> write_cache_state=0  disable_after_error_count=3  
> filter=["a|^/dev/mapper/36001405299f83b19569473f9c580660c$|", "r|.*|"]  
> hints="none"  obtain_device_list_from_udev=0 } global {  locking_type=1  
> prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0  use_lvmpolld=1 } 
> backup {  retain_min=50  retain_days=0 }\', \'--autobackup\', \'n\', 
> \'--contiguous\', \'n\', \'--size\', \'40960m\', \'--wipesignatures\', \'n\', 
> \'--addtag\', \'OVIRT_VOL_INITIALIZING\', \'--name\', 
> \'432ceb20-efb7-4a40-8431-1b5c825a6168\', 
> \'c23a5bef-48e0-46c7-9d5b-93c97f0240c0\'] rc=5 out=[] err=[\'  Logical Volume 
> "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group 
> "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']' (l
>  vm:534)
> 2021-08-23 21:24:10,859+0200 WARN  (tasks/0) [storage.LVM] All 2 tries have 
> failed: LVM command failed: 'cmd=[\'/sbin/lvm\', \'lvcreate\', \'--config\', 
> \'devices {  preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1  
> write_cache_state=0  disable_after_error_count=3  
> filter=["a|^/dev/mapper/36001405299f83b19569473f9c580660c$|^/dev/mapper/36001405cdf35411dd040d4121d9326d1$|^/dev/mapper/36001405df393063de6f0d4451d8a61d3$|",
>  "r|.*|"]  hints="none"  obtain_device_list_from_udev=0 } global {  
> locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0  
> use_lvmpolld=1 } backup {  retain_min=50  retain_days=0 }\', 
> \'--autobackup\', \'n\', \'--contiguous\', \'n\', \'--size\', \'40960m\', 
> \'--wipesignatures\', \'n\', \'--addtag\', \'OVIRT_VOL_INITIALIZING\', 
> \'--name\', \'432ceb20-efb7-4a40-8431-1b5c825a6168\', 
> \'c23a5bef-48e0-46c7-9d5b-93c97f0240c0\'] rc=5 out=[] err=[\'  Logical Volume 
> "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group 
> "c23a5bef-4
>  8e0-46c7-9d5b-93c97f0240c0"\']' (lvm:561)
> 2021-08-23 21:24:10,859+0200 ERROR (tasks/0) [storage.Volume] Failed to 
> create volume 
> /rhev/data-center/mnt/blockSD/c23a5bef-48e0-46c7-9d5b-93c97f0240c0/images/2172a4ac-6992-4cc2-be1b-6b9290bc9798/432ceb20-efb7-4a40-8431-1b5c825a6168:
>  Cannot create Logical Volume: 'vgname=c23a5bef-48e0-46c7-9d5b-93c97f0240c0 
> lvname=432ceb20-efb7-4a40-8431-1b5c825a6168 err=[\'  Logical Volume 
> "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group 
> "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']' (volume:1257)
> 2021-08-23 21:24:10,860+0200 ERROR (tasks/0) [storage.Volume] Unexpected 
> error (volume:1293)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 1254, 
> in create
> add_bitmaps=add_bitmaps)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/blockVolume.py", line 
> 508, in _create
> initialTags=(sc.TAG_VOL_UNINIT,))
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1633, in 
> createLV
> raise se.CannotCreateLogicalVolume(vgName, lvName, err)
> vdsm.storage.exception.CannotCreateLogicalVolume: Cannot create Logical 
> Volume: 'vgname=c23a5bef-48e0-46c7-9d5b-93c97f0240c0 
> lvname=432ceb20-efb7-4a40-8431-1b5c825a6168 err=[\'  Logical Volume 
> "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group 
> "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']'
> 2021-08-23 21:24:10,860+0200 ERROR (tasks/0) [storage.TaskManager.Task] 
> (Task='55a4e8dc-9408-4969-b0ba-b9a556bccba1') Unexpected error (task:877)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 884, in 
> _run
> return fn(*args, **kargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 350, in 
> run
> return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/securable.py", line 79, 
> in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 1945, in 
> createVolume
> initial_size=initialSize, add_bitmaps=addBitmaps)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 1216, in 
> createVolume
> initial_size=initial_size, add_bitmaps=add_bitmaps)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 1254, 
> in create
> 

[ovirt-users] Re: Cannot delete pvc attached to pod using ovirt-csi in kubernetes

2021-08-23 Thread Benny Zlotnik
And the full flow, with CSI? I'm trying to determine whether the CSI
driver does something wrong, or something went wrong during that
specific run

On Mon, Aug 23, 2021 at 2:34 PM  wrote:
>
> Yes, that's right.
> I can attach and detach an MBS disk to an oVirt VM normally through the dashboard.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C6Z2TPVX7Z2OBGUAGA4UYCMSWK3RBZK4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A2AI23GC6TJEQ5WJYPB3W4OYLB4V7GF5/


[ovirt-users] Re: Cannot delete pvc attached to pod using ovirt-csi in kubernetes

2021-08-23 Thread Benny Zlotnik
This is a successful run, right? The original flow works with this one?

On Mon, Aug 23, 2021 at 1:05 PM  wrote:
>
> I attached an MBS disk to the running VM through the dashboard.
>
> Here is the engine log:
>
>
> 2021-08-23 19:00:54,912+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default 
> task-209) [28eaa439-0bce-456d-8931-f1edc74ca71b] Lock Acquired to object 
> 'EngineLock:{exclusiveLocks='[f17702e4-ba97-4f95-a6d4-b89de003bd26=DISK]', 
> sharedLocks='[59a7461c-72fe-4e01-86a7-c70243f31596=VM]'}'
> 2021-08-23 19:00:54,917+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] 
> (EE-ManagedThreadFactory-engine-Thread-154035) 
> [28eaa439-0bce-456d-8931-f1edc74ca71b] Running command: 
> HotPlugDiskToVmCommand internal: false. Entities affected :  ID: 
> 59a7461c-72fe-4e01-86a7-c70243f31596 Type: VMAction group 
> CONFIGURE_VM_STORAGE with role type USER
> 2021-08-23 19:00:54,922+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.managedblock.ConnectManagedBlockStorageDeviceCommand]
>  (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] Running command: 
> ConnectManagedBlockStorageDeviceCommand internal: true.
> 2021-08-23 19:00:59,441+09 INFO  
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] cinderlib output: 
> {"driver_volume_type": "rbd", "data": {"name": 
> "mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26", "hosts": 
> ["172.22.5.6"], "ports": ["6789"], "cluster_name": "ceph", "auth_enabled": 
> true, "auth_username": "admin", "secret_type": "ceph", "secret_uuid": null, 
> "volume_id": "f17702e4-ba97-4f95-a6d4-b89de003bd26", "discard": true, 
> "keyring": "[client.admin]\n\tkey = 
> AQCjBFhgjRWFOBAAMxEaJ3yffC50GDFWnR43DQ==\n", "access_mode": "rw"}}
> 2021-08-23 19:00:59,442+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
>  (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] START, 
> AttachManagedBlockStorageVolumeVDSCommand(HostName = host, 
> AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094',
>  vds='Host[host,29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094]'}), log id: 5657b4a1
> 2021-08-23 19:01:02,715+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
>  (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] FINISH, 
> AttachManagedBlockStorageVolumeVDSCommand, return: 
> {attachment={path=/dev/rbd1, conf=/tmp/brickrbd_it_6m0e4, type=block}, 
> path=/dev/rbd/mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26, 
> vol_id=f17702e4-ba97-4f95-a6d4-b89de003bd26}, log id: 5657b4a1
> 2021-08-23 19:01:02,817+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.managedblock.SaveManagedBlockStorageDiskDeviceCommand]
>  (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] Running command: 
> SaveManagedBlockStorageDiskDeviceCommand internal: true.
> 2021-08-23 19:01:09,072+09 INFO  
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] cinderlib output:
> 2021-08-23 19:01:09,077+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] START, 
> HotPlugDiskVDSCommand(HostName = host, 
> HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', 
> vmId='59a7461c-72fe-4e01-86a7-c70243f31596', 
> diskId='f17702e4-ba97-4f95-a6d4-b89de003bd26'}), log id: 5acbdc16
> 2021-08-23 19:01:09,111+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] Disk hot-plug: 
> [disk hot-plug device XML garbled in the archive; the recoverable details
> are the source dev /dev/rbd/mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26,
> the disk serial f17702e4-ba97-4f95-a6d4-b89de003bd26, and an ovirt-vm
> metadata section (http://ovirt.org/vm/1.0) carrying the same device path]
>
> 2021-08-23 19:01:09,221+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] FINISH, 
> HotPlugDiskVDSCommand, return: , log id: 5acbdc16
> 2021-08-23 19:01:09,358+09 INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] EVENT_ID: 
> USER_HOTPLUG_DISK(2,000), VM centos disk mbs was plugged by 
> admin@internal-authz.
> 2021-08-23 19:01:09,358+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] 
> (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] Lock freed to 
> object 
> 'EngineLock:{exclusiveLocks='[f17702e4-ba97-4f95-a6d4-b89de003bd26=DISK]', 
> sharedLocks='[59a7461c-72fe-4e01-86a7-c70243f31596=VM]'}'
> 2021-08-23 19:01:10,916+09 INFO  
> 

[ovirt-users] Re: Cannot delete pvc attached to pod using ovirt-csi in kubernetes

2021-08-23 Thread Benny Zlotnik
Yes, it should indeed defer to
DetachManagedBlockStorageVolumeVDSCommand, which is what does the
unmapping. Do you have an earlier log that shows the XML (for example,
from when it was attached)?

On Mon, Aug 23, 2021 at 10:59 AM  wrote:
>
> There were no error logs in vdsm and supervdsm.
>
> And I found that the 
> [org.ovirt.engine.core.bll.storage.disk.managedblock.DisconnectManagedBlockStorageDeviceCommand]
>  and 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DetachManagedBlockStorageVolumeVDSCommand]
>  functions are being called when the disk is detached from ovirt vm.
>
> However, in the log I gave first, there is no part where the corresponding
> functions are called. Isn't that a bug?
>
> Here is the engine log where detaching the disk:
>
> 2021-08-23 10:29:43,972+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (default 
> task-176) [2538ba78-6916-431c-b3bc-b98b26515842] Lock Acquired to object 
> 'EngineLock:{exclusiveLocks='[f17702e4-ba97-4f95-a6d4-b89de003bd26=DISK]', 
> sharedLocks='[59a7461c-72fe-4e01-86a7-c70243f31596=VM]'}'
> 2021-08-23 10:29:44,054+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] 
> (EE-ManagedThreadFactory-engine-Thread-149928) 
> [2538ba78-6916-431c-b3bc-b98b26515842] Running command: 
> HotUnPlugDiskFromVmCommand internal: false. Entities affected :  ID: 
> 59a7461c-72fe-4e01-86a7-c70243f31596 Type: VMAction group 
> CONFIGURE_VM_STORAGE with role type USER
> 2021-08-23 10:29:44,076+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-149928) 
> [2538ba78-6916-431c-b3bc-b98b26515842] START, 
> HotUnPlugDiskVDSCommand(HostName = host, 
> HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', 
> vmId='59a7461c-72fe-4e01-86a7-c70243f31596', 
> diskId='f17702e4-ba97-4f95-a6d4-b89de003bd26'}), log id: 1c39f09a
> 2021-08-23 10:29:44,078+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-149928) 
> [2538ba78-6916-431c-b3bc-b98b26515842] Disk hot-unplug: [device XML garbled
> in the archive; no further detail is recoverable]
>
> 2021-08-23 10:29:44,218+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-149928) 
> [2538ba78-6916-431c-b3bc-b98b26515842] FINISH, HotUnPlugDiskVDSCommand, 
> return: , log id: 1c39f09a
> 2021-08-23 10:29:44,471+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.managedblock.DisconnectManagedBlockStorageDeviceCommand]
>  (EE-ManagedThreadFactory-engine-Thread-149928) [2a452b04] Running command: 
> DisconnectManagedBlockStorageDeviceCommand internal: true.
> 2021-08-23 10:29:44,514+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DetachManagedBlockStorageVolumeVDSCommand]
>  (EE-ManagedThreadFactory-engine-Thread-149928) [2a452b04] START, 
> DetachManagedBlockStorageVolumeVDSCommand(HostName = host, 
> AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094',
>  vds='Host[host,29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094]'}), log id: 2d6874a5
> 2021-08-23 10:29:46,683+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DetachManagedBlockStorageVolumeVDSCommand]
>  (EE-ManagedThreadFactory-engine-Thread-149928) [2a452b04] FINISH, 
> DetachManagedBlockStorageVolumeVDSCommand, return: StatusOnlyReturn 
> [status=Status [code=0, message=Done]], log id: 2d6874a5
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LL4WSFNCV6EW6DVDQ3DENEDUVA5MAL6L/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BWJW6DBXXZC6I2XMV4XPGS4MAX54U2DA/


[ovirt-users] Re: Cannot delete pvc attached to pod using ovirt-csi in kubernetes

2021-08-23 Thread Benny Zlotnik
It should do this, and it's not semantically different from what
happens with non-MBS disks. The log I pasted is what unmaps the
volume; I am not sure why it returned successfully if the volume
wasn't unmapped. If possible, please attach the vdsm and supervdsm logs
from the relevant host, perhaps there's some clue there.
But we essentially use cinderlib's `disconnect`, so perhaps it didn't error out


On Mon, Aug 23, 2021 at 10:05 AM  wrote:
>
> When I check the status of the rbd volume, a watcher still exists. The watcher
> is /dev/rbd0 in the oVirt VM.
> $ rbd status mypool/volume-3643db6c-38a6-4a21-abb3-ce8cc15e8c86
> Watchers:
> watcher=192.168.7.18:0/1903159992 client.44942 
> cookie=18446462598732840963
>
> And the attachment information was also left in the volume_attachment of 
> ovirt_cinderlib DB.
>
> After manually unmapping /dev/rbd0 in the oVirt VM and deleting the DB row, the pvc
> was deleted normally.
> Shouldn't those tasks be done when deleting the pod?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PSL4JPAMEQ5NHICWI34YI3HO62J2T3MB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/646F25UD5ARFWPUOYMQABSIMGD63Y7SJ/


[ovirt-users] Re: Cannot delete pvc attached to pod using ovirt-csi in kubernetes

2021-08-23 Thread Benny Zlotnik
Pod deletion should invoke unpublish on the PVC, which detaches it from
the node; this can be seen in the engine log:
2021-08-20 17:40:35,664+09 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand]
(default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] START,
HotUnPlugDiskVDSCommand(HostName = host,
HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094',
vmId='59a7461c-72fe-4e01-86a7-c70243f31596',
diskId='63a64445-1659-4d5f-8847-e7266e64b09e'}), log id: 506ff4a4
2021-08-20 17:40:35,678+09 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand]
(default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] Disk
hot-unplug: [device XML garbled in the archive; no further detail is recoverable]

2021-08-20 17:40:35,749+09 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand]
(default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] FINISH,
HotUnPlugDiskVDSCommand, return: , log id: 506ff4a4
2021-08-20 17:40:35,842+09 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] EVENT_ID:
USER_DETACH_DISK_FROM_VM(2,018), Disk
pvc-9845a0ff-e94c-497c-8c65-fc6a1e26db20 was successfully detached
from VM centos by admin@internal-authz.

I suspect something keeps the volume busy, can you run:
$ rbd status /volume-63a64445-1659-4d5f-8847-e7266e64b09e
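
For reference, the detach that unpublish triggers is an ordinary disk
attachment deactivate/detach on the engine side. A rough, untested sketch of
the same operation through the Python SDK (the connection details are
placeholders; the VM name and disk id are taken from the log above):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=centos')[0]
vm_service = vms_service.vm_service(vm.id)

attachments_service = vm_service.disk_attachments_service()
attachment_service = attachments_service.attachment_service(
    '63a64445-1659-4d5f-8847-e7266e64b09e')  # disk id from the log above

# Hot-unplug the disk first, then detach it without deleting the disk itself.
attachment_service.update(types.DiskAttachment(active=False))
attachment_service.remove(detach_only=True)

connection.close()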

On Mon, Aug 23, 2021 at 3:56 AM  wrote:
>
> Hi all,
>
> I deployed ovirt-csi in the k8s by applying yaml manually. I used the latest 
> version of the container image.
> (https://github.com/openshift/ovirt-csi-driver-operator/tree/master/assets)
>
> After successfully creating pvc and pod, I tried to delete it.
> And the pod is deleted, but the pvc is not deleted. This is because deleting 
> a pod does not unmap /dev/rbd0 attached to the ovirt vm.
>
> How can I delete the pvc successfully?
>
> oVirt engine version is 4.4.7.6-1.el8.
> Here is the engine log when deleting the pod:
>
> 2021-08-20 17:40:35,385+09 INFO  
> [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-149) 
> [] User admin@internal-authz with profile [internal] successfully logged in 
> with scopes: ovirt-app-api ovirt-ext=token-info:authz-search 
> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate 
> ovirt-ext=token:password-access
> 2021-08-20 17:40:35,403+09 INFO  
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-149) 
> [68ee3182] Running command: CreateUserSessionCommand internal: false.
> 2021-08-20 17:40:35,517+09 INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-149) [68ee3182] EVENT_ID: USER_VDC_LOGIN(30), User 
> admin@internal-authz connecting from '192.168.7.169' using session 
> 'XfDgNkmAGnPiZahK5itLhHQTCNHZ3JwXMMzOiZrYL3C32+1TTys3xcjrAmCIKPu02hgN1sdVpfZXWd0FznaPCQ=='
>  logged in.
> 2021-08-20 17:40:35,520+09 WARN  
> [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't 
> find relative path for class 
> "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will 
> return null
> 2021-08-20 17:40:35,520+09 WARN  
> [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't 
> find relative path for class 
> "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will 
> return null
> 2021-08-20 17:40:35,520+09 WARN  
> [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't 
> find relative path for class 
> "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will 
> return null
> 2021-08-20 17:40:35,520+09 WARN  
> [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't 
> find relative path for class 
> "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will 
> return null
> 2021-08-20 17:40:35,520+09 WARN  
> [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't 
> find relative path for class 
> "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will 
> return null
> 2021-08-20 17:40:35,520+09 WARN  
> [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't 
> find relative path for class 
> "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will 
> return null
> 2021-08-20 17:40:35,663+09 INFO  
> [org.ovirt.engine.core.bll.storage.disk.DetachDiskFromVmCommand] (default 
> task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] Running command: 
> DetachDiskFromVmCommand internal: false. Entities affected :  ID: 
> 59a7461c-72fe-4e01-86a7-c70243f31596 Type: VMAction group 
> CONFIGURE_VM_STORAGE with role type USER
> 2021-08-20 17:40:35,664+09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default 
> task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] START, 
> HotUnPlugDiskVDSCommand(HostName = host, 
> HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', 
> vmId='59a7461c-72fe-4e01-86a7-c70243f31596', 
> 

[ovirt-users] Re: Unable to export VM from data storage domain.

2021-08-22 Thread Benny Zlotnik
Sounds like a bug, can you attach the output of:
$ psql -U engine -d engine -c "\x on" -c "select * from disk_profiles"
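
For a quick check without psql, the disk profiles the engine knows per storage
domain can also be listed through the Python SDK; a rough, untested sketch
(connection details are placeholders, and the disk_profiles_service() name is
from memory):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
for sd in sds_service.list():
    sd_service = sds_service.storage_domain_service(sd.id)
    profiles = sd_service.disk_profiles_service().list()
    # A data domain with no disk profile at all would trigger the error above.
    print(sd.name, sd.id, [p.name for p in profiles])

connection.close()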

On Sun, Aug 22, 2021 at 6:47 PM Diggy Mc  wrote:
>
> I'm running oVirt 4.4.4 with two data storage domains.  One domain is where 
> the production VMs run.  The second domain is where I make backup copies of 
> the VMs in the event of problems with the main production VMs.  I make 
> backups using the export option from the GUI's dropdown list.
>
> I just now tried to "restore" a backup copy via the export function and get 
> an error.  In fact, I get errors trying to export (import to my original data 
> domain) any of the VMs that were exported to my backup data domain.  My 
> backup domain is a DATA domain and not an EXPORT domain.
>
> The exact error is:
>   Export VM Failed
>   [Cannot add VM. Cannot find a disk profile defined on storage domain 
> 246c69e9-6f16-489f-8022-8613f6c1c22a.]
>
> Is this a bug or am I doing something wrong?  Help is needed urgently.  In 
> advance, thank you.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PFKGR3RIO5YSYVC5BDAMBNFXHMUODWHK/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KMBR4H5IZW446ZZSJV7WYFJAIDFLCBNQ/


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Benny Zlotnik
> If your vm is temporary and you like to drop the data written while
> the vm is running, you
> could use a temporary disk based on the template. This is called a
> "transient disk" in vdsm.
>
> Arik, maybe you remember how transient disks are used in engine?
> Do we have an API to run a VM once, dropping the changes to the disk
> done while the VM was running?

I think that's how stateless VMs work
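
For completeness, a minimal, untested ovirt-engine-sdk sketch of starting a VM
with a stateless override (run-once style), so that disk changes are dropped
when it powers off; the VM name and connection details are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=template-based-vm')[0]
vm_service = vms_service.vm_service(vm.id)

# The stateless override makes the VM write to a temporary snapshot that is
# discarded when the VM is powered off.
vm_service.start(vm=types.Vm(stateless=True))

connection.close()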
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EAVA367YF6F3AHHPU7K23PFOR5ZTZBBI/


[ovirt-users] Re: live merge of snapshots failed

2021-08-06 Thread Benny Zlotnik
2021-08-03 15:50:59,040+0300 ERROR (libvirt/events) [virt.vm]
(vmId='1c1d20ed-3167-4be7-bff3-29845142fc57') Block job ACTIVE_COMMIT
for drive 
/rhev/data-center/mnt/blockSD/a5a492a7-f770-4472-baa3-ac7297a581a9/images/2e6e3cd3-f0cb-47a7-8bda-7738bd7c1fb5/b43b7c33-5b53-4332-a2e0-f950debb919b
has failed (vm:5847)

Do you have access to libvirtd logs?
Since you're using an outdated version it's possible you've hit an old
bug that's been fixed

On Wed, Aug 4, 2021 at 10:30 AM  wrote:
>
> here os the vdsm.log from the SPM
> there is a report for the second disk of the vm but the first (the one which 
> failes to merge does not seem to be anywhere)
> 2021-08-03 15:51:40,051+0300 INFO  (jsonrpc/7) [vdsm.api] START 
> getVolumeInfo(sdUUID=u'96000ec9-e181-44eb-893f-e0a36e3a6775', 
> spUUID=u'5da76866-7b7d-11eb-9913-00163e1f2643', 
> imgUUID=u'205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 
> volUUID=u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', options=None) 
> from=:::10.252.80.201,58850, 
> flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
> task_id=be6c50d9-a8e4-4ef5-85cf-87a00d79d77e (api:48)
> 2021-08-03 15:51:40,052+0300 INFO  (jsonrpc/7) [storage.VolumeManifest] Info 
> request: sdUUID=96000ec9-e181-44eb-893f-e0a36e3a6775 
> imgUUID=205a30a3-fc06-4ceb-8ef2-018f16d4ccbb volUUID = 
> 7611ebcf-5323-45ca-b16c-9302d0bdedc6  (volume:240)
> 2021-08-03 15:51:40,081+0300 INFO  (jsonrpc/7) [storage.VolumeManifest] 
> 96000ec9-e181-44eb-893f-e0a36e3a6775/205a30a3-fc06-4ceb-8ef2-018f16d4ccbb/7611ebcf-5323-45ca-b16c-9302d0bdedc6
>  info is {'status': 'OK', 'domain': '96000ec9-e181-44eb-893f-e0a36e3a6775', 
> 'voltype': 'LEAF', 'description': 
> '{"DiskAlias":"anova.admin.uoc.gr_Disk2","DiskDescription":""}', 'parent': 
> '----', 'format': 'RAW', 'generation': 0, 
> 'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 'DATA', 
> 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '42949672960', 'children': 
> [], 'pool': '', 'ctime': '1625846644', 'capacity': '42949672960', 'uuid': 
> u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'truesize': '42949672960', 'type': 
> 'PREALLOCATED', 'lease': {'path': 
> '/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 'owners': [], 'version': 
> None, 'offset': 105906176}} (volume:279)
> 2021-08-03 15:51:40,081+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH 
> getVolumeInfo return={'info': {'status': 'OK', 'domain': 
> '96000ec9-e181-44eb-893f-e0a36e3a6775', 'voltype': 'LEAF', 'description': 
> '{"DiskAlias":"anova.admin.uoc.gr_Disk2","DiskDescription":""}', 'parent': 
> '----', 'format': 'RAW', 'generation': 0, 
> 'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 'DATA', 
> 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '42949672960', 'children': 
> [], 'pool': '', 'ctime': '1625846644', 'capacity': '42949672960', 'uuid': 
> u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'truesize': '42949672960', 'type': 
> 'PREALLOCATED', 'lease': {'path': 
> '/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 'owners': [], 'version': 
> None, 'offset': 105906176}}} from=:::10.252.80.201,58850, 
> flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
> task_id=be6c50d9-a8e4-4ef5-85cf-87a00d79d77e (api:54)
> 2021-08-03 15:51:40,083+0300 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC 
> call Volume.getInfo succeeded in 0.04 seconds (__init__:312)
>
> last appearance of this drive on the spm vdsm.log is when the snapshot 
> download finishes:
> 2021-08-03 15:34:18,619+0300 INFO  (jsonrpc/6) [vdsm.api] FINISH 
> get_image_ticket return={'result': {u'timeout': 300, u'idle_time': 0, 
> u'uuid': u'5c1943a9-cac4-4398-9ec1-46ab82cacd04', u'ops': [u'read'], u'url': 
> u'file:///rhev/data-center/mnt/blockSD/a5a492a7-f770-4472-baa3-ac7297a581a9/images/2e6e3cd3-f0cb-47a7-8bda-7738bd7c1fb5/84c005da-cbec-4ace-8619-5a8e2ae5ea75',
>  u'expires': 6191177, u'transferred': 150256746496, u'transfer_id': 
> u'7dcb75c0-4373-4986-b25f-5629b1b68f5d', u'sparse': False, u'active': True, 
> u'size': 150323855360}} from=:::10.252.80.201,58850, 
> flow_id=3035db30-8a8c-48a5-b0c6-0781fda6ac2e, 
> task_id=674028a2-e37c-46e4-a463-eeae1b09aef0 (api:54)
> 2021-08-03 15:34:18,620+0300 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC 
> call Host.get_image_ticket succeeded in 0.00 seconds (__init__:312)
>
> If I can send any more information or test something please let me know.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KEJ24BI6PLXYFQHJ6O2AESK3M4SXMUID/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html

[ovirt-users] Re: live merge of snapshots failed

2021-08-03 Thread Benny Zlotnik
2021-08-03 15:51:34,917+03 ERROR
[org.ovirt.engine.core.bll.MergeStatusCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-2)
[3bf9345d-fab2-490f-ba44-6aa014bbb743] Failed to live merge. Top
volume b43b7c33-5b53-4332-a2e0-f950debb919b is still in qemu chain
[b43b7c33-5b53-4332-a2e0-f950debb919b,
84c005da-cbec-4ace-8619-5a8e2ae5ea75]

Can you attach vdsm logs (from SPM and the host running the VM) so we
can understand why it failed?
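
As a general note for scripted create/download/delete flows, it helps to poll
the snapshot status between steps instead of issuing the next call right away.
A rough, untested sketch with placeholder names (this is not a fix for the
merge failure itself, just the pattern):

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
snaps_service = vms_service.vm_service(vm.id).snapshots_service()

# Create the snapshot and wait until it is ready.
snap = snaps_service.add(
    types.Snapshot(description='backup', persist_memorystate=False))
snap_service = snaps_service.snapshot_service(snap.id)
while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
    time.sleep(10)

# ... download the snapshot disks here ...

# Remove the snapshot and wait until the (live) merge actually finishes.
snap_service.remove()
while any(s.id == snap.id for s in snaps_service.list()):
    time.sleep(10)

connection.close()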

On Tue, Aug 3, 2021 at 6:07 PM  wrote:
>
> Hello
> I have a situation with a vm in which I cannot delete the snapshot.
> The whole thing is quite strange because I can delete the snapshot when I
> create and delete it from the web interface, but when I do it with a python
> script through the API it fails.
> The script does create snapshot -> download snapshot -> delete snapshot and I
> used the examples from the ovirt python sdk on github to create it; in general it
> works pretty well.
>
> But on a specific machine (so far) it cannot delete the live snapshot
> Ovirt is 4.3.10 and the guest is a Windows 10 PC. The Windows 10 guest has 2
> disks attached, both on different FC domains: one on an SSD EMC and the other
> on an HDD EMC. Both disks are preallocated.
> I cannot figure out what the problem is so far
> the related engine log:
>
> 2021-08-03 15:51:00,385+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Comma
> nd 'RemoveSnapshotSingleDiskLive' (id: 
> '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 
> '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
> 2021-08-03 15:51:00,385+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (
> jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
> 2021-08-03 15:51:00,387+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-61) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 
> 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
> 2021-08-03 15:51:01,388+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
> (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
> 2021-08-03 15:51:07,491+03 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-38) 
> [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'MoveImageGroup' (id: 
> '1de1b800-873f-405f-805b-f44397740909') waiting on child command id: 
> 'd1136344-2888-4d63-8fe1-b506426bc8aa' type:'CopyImageGroupWithData' to 
> complete
> 2021-08-03 15:51:11,513+03 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-41) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: 
> '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: 
> '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to 
> complete
> 2021-08-03 15:51:12,522+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 
> '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
> 2021-08-03 15:51:12,523+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
> (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
> 2021-08-03 15:51:12,527+03 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-76) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 
> 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
> 2021-08-03 15:51:13,528+03 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete 
> (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
> 2021-08-03 15:51:21,635+03 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-58) 
> [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' 

[ovirt-users] Re: Deploy ovirt-csi in the kubernetes cluster

2021-07-23 Thread Benny Zlotnik
We don't test it on kubernetes, but I know some users use it
successfully with kubernetes by applying the manifests[1] manually

[1] https://github.com/openshift/ovirt-csi-driver-operator/tree/master/assets


On Fri, Jul 23, 2021 at 4:12 AM  wrote:
>
> Hi,
>
> I want to deploy ovirt-csi in the kubernetes cluster, but the guide only covers
> how to deploy to openshift.
> How can I deploy the ovirt-csi in the kubernetes cluster? Is there any way to 
> do that?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LXDC4GXKETPQIPHYVYMALHBJLB5XDT4E/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4K5S64KNJPTOKZFD7SDCG3WY7FPDCMKW/


[ovirt-users] Re: Create template from snapshot of vm using MBS disk

2021-07-14 Thread Benny Zlotnik
Sounds like a bug, can you attach engine.log and cinderlib.log?

On Wed, Jul 14, 2021 at 10:14 AM  wrote:

> Hi,
>
> If I create a template with a snapshot of the vm that uses the mbs disk,
> the template cannot be used to create a new vm.
> Is this normal or a bug?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JQ5LXIETEENVWJ7WFEJU34N7WZMAFCIW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KGTH6MLTBISDUJV7JVDDIS33KOVDB3RX/


[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Benny Zlotnik
Not currently, we do want to support this using rbd-nbd

On Wed, Jul 14, 2021 at 11:26 AM Konstantin Shalygin  wrote:

> It's possible to use librbd instead kernel mount like in OpenStack?
>
> Sent from my iPhone
>
> > On 14 Jul 2021, at 10:41, Sandro Bonazzola  wrote:
> >
> > They are mounted as block storage
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5WEGIVAF57ATAGKA4TXITFZFOPCNIGH6/


[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Benny Zlotnik
In 4.4.6, copying from regular storage domains to Managed Block Storage
domains was added.

On Wed, Jul 14, 2021 at 10:34 AM Sandro Bonazzola 
wrote:

>
>
> Il giorno mer 14 lug 2021 alle ore 08:53 Konstantin Shalygin <
> k0...@k0ste.ru> ha scritto:
>
>> Hi Sandro,
>>
>> - How this image is mounted on oVirt host?
>>
>
> They are mounted as block storage
>
> /rhev/
> `-- data-center
> |-- b55ef7a8-da51-11eb-b619-5254001ce0e4
> |   |-- 1996dc3b-d33f-49cb-b32a-8f7b1d50af5e ->
> /rhev/data-center/mnt/blockSD/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e
> |   `-- mastersd ->
> /rhev/data-center/mnt/blockSD/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e
> `-- mnt
> `-- blockSD
> `-- 1996dc3b-d33f-49cb-b32a-8f7b1d50af5e
> |-- dom_md
> |   |-- ids ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/ids
> |   |-- inbox ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/inbox
> |   |-- leases ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/leases
> |   |-- master ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/master
> |   |-- metadata ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/metadata
> |   |-- outbox ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/outbox
> |   `-- xleases ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/xleases
> |-- ha_agent
> |   |-- hosted-engine.lockspace ->
> /run/vdsm/storage/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/ac3a245f-e6fe-4159-b0ee-be08d4048bb7/8b4bddc1-1602-45d7-854c-eaeac9549617
> |   `-- hosted-engine.metadata ->
> /run/vdsm/storage/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/dc77bfc2-cecd-4ab5-81f7-e15b81e45994/1927372e-019b-448a-8645-697b8b8ed42a
> `-- images
> |-- 10af85ab-434d-4104-800d-099e05a3653e
> |   `-- 08ad02fc-6bfc-40ab-9c3d-24e0f1ac6689 ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/08ad02fc-6bfc-40ab-9c3d-24e0f1ac6689
> |-- ac3a245f-e6fe-4159-b0ee-be08d4048bb7
> |   `-- 8b4bddc1-1602-45d7-854c-eaeac9549617 ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/8b4bddc1-1602-45d7-854c-eaeac9549617
> |-- bb667f95-bbb0-41a4-ad15-66f1b9bdda59
> |   `-- 5abcb5f0-2c28-41b4-bfcc-bd41ef730d35 ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/5abcb5f0-2c28-41b4-bfcc-bd41ef730d35
> |-- cccd50f6-6e47-43ab-9075-1bbd31d5e3b7
> |   `-- 169eacc2-584c-47ee-a295-ad3aa9c811c5 ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/169eacc2-584c-47ee-a295-ad3aa9c811c5
> |-- dc77bfc2-cecd-4ab5-81f7-e15b81e45994
> |   `-- 1927372e-019b-448a-8645-697b8b8ed42a ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/1927372e-019b-448a-8645-697b8b8ed42a
> `-- fc6b0b84-17fa-42e9-80ae-97cf50e8b74d
> `-- 3eaeb1ba-2b36-4c29-b721-da19d3e5784e ->
> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/3eaeb1ba-2b36-4c29-b721-da19d3e5784e
>
>
>> - How to change image features?
>> - How to add upmap option to libvirt domain?
>> - How libvirt domain looks like?
>> - How snapshots works?
>>
>
> Snapshot works fine, going to VM tab and creating snapshot as usual.
>
>
>> - How clones works?
>>
>
> Disk copy can be done from the engine storage -> disk tab.
> VM cloning failed for me, opened *Bug 1982083*
> <https://bugzilla.redhat.com/show_bug.cgi?id=1982083> - Cloning VM with
> managed block storage raises an NPE
>
>
>
>> - How to migrate images from one domain to another?
>>
>
> I would let the storage team answer these questions in detail, +Benny
> Zlotnik  ?
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBKM3RWJMEWTBQ3456EDFWNLPGCDVKXO/


[ovirt-users] Re: Image import to Managed Block Storage

2021-07-12 Thread Benny Zlotnik
Direct import from glance is not possible yet; a workaround would be to
import to a regular storage domain and then copy the disk to the MBS domain.
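
A rough, untested sketch of the copy step through the Python SDK, assuming the
image has already been imported into a regular storage domain; the disk and
domain names and the connection details are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

system_service = connection.system_service()

# The disk that was imported from glance into a regular storage domain.
disks_service = system_service.disks_service()
disk = disks_service.list(search='name=imported-image')[0]
disk_service = disks_service.disk_service(disk.id)

# The target Managed Block Storage domain.
sds_service = system_service.storage_domains_service()
mbs_sd = sds_service.list(search='name=my-mbs-domain')[0]

# Copy the disk to the MBS domain (supported since 4.4.6).
disk_service.copy(storage_domain=types.StorageDomain(id=mbs_sd.id))

connection.close()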

On Mon, Jul 12, 2021 at 12:22 PM  wrote:

> Sorry for the lack of information.
> I want to import image from the default glance repository or locally.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4GJ7CSLMBKYRNDX2DHWBBKVMVFEIMBV/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E5WZENCLDT7U72I7K4QR3VD3PYROLULL/


[ovirt-users] Re: Image import to Managed Block Storage

2021-07-12 Thread Benny Zlotnik
Hi,

Can you elaborate on the requirement? Import from where? From an
existing storage backend?
In 4.4.6 we added support to copy disks from regular storage domains to MBS
domains


On Mon, Jul 12, 2021 at 12:01 PM  wrote:

> Hi,
>
> Any plans to add the function to import images to MBS?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2AF7EBMURTSFMFY5H73KK2QV6XVZVGI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5E4CEGPA7PCAOI4RHETIEJNTP2AA3OXQ/


[ovirt-users] Re: Attempting to detach a storage domain

2021-04-25 Thread Benny Zlotnik
I think engine.log would be a good place to start, it would likely
tell us on which host there's a problem (if there is one)


On Sun, Apr 25, 2021 at 10:22 AM matthew.st...@fujitsu.com
 wrote:
>
> Which logs, on which hosts should I be looking through?
>
> I have a hint of that from my research, but all 121 hosts are up and running.
>
> -Original Message-----
> From: Benny Zlotnik 
> Sent: Sunday, April 25, 2021 1:53 AM
> To: Stier, Matthew 
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Attempting to detach a storage domain
>
> What do the logs say?
> This usually means that not all hosts were able to disconnect from it
>
> On Sun, Apr 25, 2021 at 9:45 AM matthew.st...@fujitsu.com 
>  wrote:
> >
> > Ovirt: 4.3.10
> >
> > Storage: iSCSI
> >
> >
> >
> > Problem: Attempting to place a storage domain into ‘maintenance’ in
> > preparation for detachment and destruction has left it hung in a
> > ‘Preparation for maintenance’ state.
> >
> >
> >
> > I have three storage domains I need to put into maintenance, detach and
> > delete.  When I placed the first, and smallest (100GB), into
> > maintenance mode, it switched to ‘Preparing for Maintenance’, and has stuck 
> > there for hours.
> >
> >
> >
> > Early on, I was able to re-activate it, but I do want to remove it for use
> > somewhere else, and I want to make sure I can remove it before I try to do
> > the same with two 11TB storage domains.
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org Privacy
> > Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/HY3LWX7H
> > MN56SDHNQWB3MDXLLU7GNLGX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7CCKICZ7BLAHLP5COOTUDTN2NF7PRP5L/


[ovirt-users] Re: Attempting to detach a storage domain

2021-04-25 Thread Benny Zlotnik
What do the logs say?
This usually means that not all hosts were able to disconnect from it

On Sun, Apr 25, 2021 at 9:45 AM matthew.st...@fujitsu.com
 wrote:
>
> Ovirt: 4.3.10
>
> Storage: iSCSI
>
>
>
> Problem: Attempting to place a storage domain into ‘maintenance’ in preparation
> for detachment and destruction has left it hung in a ‘Preparation for
> maintenance’ state.
>
>
>
> I have three storage domains I need to put into maintenance, detach and
> delete.  When I placed the first, and smallest (100GB), into maintenance
> mode, it switched to ‘Preparing for Maintenance’, and has stuck there for 
> hours.
>
>
>
> Early on, I was able to re-activate it, but I do want to remove it for use
> somewhere else, and I want to make sure I can remove it before I try to do the
> same with two 11TB storage domains.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HY3LWX7HMN56SDHNQWB3MDXLLU7GNLGX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/77LRW25XDEM5GKHOT6FKW3CJ3P6VXB7S/


[ovirt-users] Re: Destroyed VM blocking hosts/filling logs

2021-04-08 Thread Benny Zlotnik
username: vdsm@ovirt, password: shibboleth

On Thu, Apr 8, 2021 at 5:25 PM David Kerry  wrote:
>
> Hi Shani,
>
> I actually came across that option and attempted it at one point,
> but vdsm has locked me out of using that command it seems.
>
> Eg:
>
> [root@ovirt-node217 ~]# virsh undefine vm-s2
> Please enter your authentication name: admin
> Please enter your password:
> error: failed to connect to the hypervisor
> error: authentication failed: authentication failed
>
> No known username/password seems to work.
>
> Is there some magic user to use for this, or some way
> to bypass the authentication?
>
> Thanks
>
> David
>
> On 2021-04-08 10:10 a.m., Shani Leviim wrote:
> > Hi David,
> > Yes - this one will remove completely the VM from the DB.
> >
> > You can use the virsh command to delete the VM guests:
> > https://www.cyberciti.biz/faq/howto-linux-delete-a-running-vm-guest-on-kvm/ 
> > 
> >
> > *Regards,
> > *
> > *Shani Leviim
> > *
> >
> >
> > On Thu, Apr 8, 2021 at 4:32 PM David Kerry  > > wrote:
> >
> > Hi Shani,
> >
> > These VMs in particular are running just fine on other hosts (and
> > I'd like to keep them that way, preferably).
> >
> > It looks like this command would delete the whole VM from the
> > entire system instead of just removing the stuck/shutdown instances
> > from the hosts it's not running on any more.
> >
> > Can you confirm this is what it would do?  If so, is there another
> > option to remove these stuck "ghost" VM instances from the hosts they 
> > are
> > no longer running on?
> >
> >
> > Thanks
> >
> > David
> >
> >
> > On 2021-04-08 3:20 a.m., Shani Leviim wrote:
> >  > Hi David,
> >  > You can delete the VM from the DB using this command:
> >  > SELECT DeleteVm('');
> >  >
> >  > *Regards,
> >  > *
> >  > *Shani Leviim
> >  > *
> >  >
> >  >
> >  > On Wed, Apr 7, 2021 at 4:23 PM David Kerry  >   > >> wrote:
> >  >
> >  > Hello,
> >  >
> >  > This seems to be what the engine is trying to do, and failing at 
> > for some reason.
> >  >
> >  > eg:
> >  >
> >  > [root@ovirt-node217 ~]# vdsm-client Host getVMList 
> > fullStatus=True
> >  > [
> >  >  "8b3964bc-cd3f-4f13-84c6-1811193c93eb",
> >  >  "132668b6-9992-451f-95ac-dbcbeb03f5f1"
> >  > ]
> >  >
> >  > For reference:
> >  >
> >  > [root@ovirt-node217 ~]# virsh -r list --all
> >  >   IdName   State
> >  > 
> >  >   - vm-s2  shut off
> >  >   - vm-s1  shut off
> >  >
> >  > And in the console, it shows a count of "2" beside this host, 
> > but on the host detail
> >  > page, under the virtual-machine tab, the list is empty (these 
> > VMs are actually
> >  > running on a different host).
> >  >
> >  > [root@ovirt-node217 ~]# vdsm-client VM destroy 
> > vmID="8b3964bc-cd3f-4f13-84c6-1811193c93eb"
> >  > vdsm-client: Command VM.destroy with args {'vmID': 
> > '8b3964bc-cd3f-4f13-84c6-1811193c93eb'} failed:
> >  > (code=100, message=General Exception: ("'1048576'",))
> >  >
> >  > I guess what I need is a way to remove/clean-up these VMs 
> > manually since ovirt
> >  > does not seem to be able to do it by itself.
> >  >
> >  > This condition also blocks the host from being put into 
> > maintenance mode.
> >  >
> >  > When I reboot the host manually and "confirm host was rebooted", 
> > the VMs
> >  > are still there and still stuck.
> >  >
> >  > Sincerely,
> >  >
> >  > David
> >  >
> >  >
> >  > On 2021-04-07 6:01 a.m., Shani Leviim wrote:
> >  >> Hi,
> >  >> You can try with the vdsm-client tool:
> >  >> https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client.html 
> >  
> >  > >
> >  >>
> >  >> Stopping a VM:
> >  >> 1) Get the vmId:
> >  >> # vdsm-client Host getVMList fullStatus=True
> >  >>
> >  >> 2) Destroy the VM
> >  >> # vdsm-client VM destroy vmID=
> >  >>
> >  >> *Regards,
> >  >> *
> >  >> *Shani Leviim
> >  >> *
> >  >>
> >  >>
> >  >> On Sat, Apr 3, 2021 at 7:50 AM  >   > >> wrote:
> >  >>
> >   

[ovirt-users] Re: Cinderlib problem after upgrade from 4.3.10 to 4.4.5

2021-03-29 Thread Benny Zlotnik
There is no upgrade path for cinderlib between 4.3 and 4.4; also, in
4.3 cinderlib was installed via pip as a 0.x version, while 4.4 uses
1.x.
In ovirt-engine we only create the cinderlib database; we don't
control the tables, which are managed by the ORM cinderlib uses.

We did get a report about the __DEFAULT__ issue, but I didn't get the
bug report I asked for, so please submit a bug with all the details so
we can properly investigate and provide some solution.


On Thu, Mar 25, 2021 at 9:13 AM Marc-Christian Schröer
 wrote:
>
> Hello all,
> hello Benny,
>
> thank you very much for the helpful answer and pointing me to the 
> documentation. After setting up the server from scratch, installing some 
> dependencies and fixing the issue of a missing __DEFAULT__ volume type in the 
> ovirt_cinderlib database I hit a new bump:
>
> psycopg2.ProgrammingError: column volumes.service_uuid does not exist
>
> The /var/log/ovirt-engine/cinderlib/cinderlib.log file contains a somewhat 
> lengthy entry of a failed SQL query basically telling me that the database 
> structure is not what is expected by ovirt. I upgraded from 4.3 to 4.4 using 
> the backup/restore upgrade method and would have thought oVirt’s engine-setup 
> had migrated the database to a new model.
>
> Do I have to execute some upgrade command manually?
>
> Kind regards and thanks for all the help,
> Marc
>
> --
> 
>
>  Dipl.-Inform. Marc-Christian Schröer  schro...@ingenit.com
>  Geschäftsführer / CEO
>  --
>  ingenit GmbH & Co. KG   Tel. +49 (0)231 58 698-120
>  Emil-Figge-Strasse 76-80Fax. +49 (0)231 58 698-121
>  D-44227 Dortmund   www.ingenit.com
>
>  Registergericht: Amtsgericht Dortmund, HRA 13 914
>  Gesellschafter : Thomas Klute, Marc-Christian Schröer
> ____
>
> Am 23.03.2021 um 08:38 schrieb Benny Zlotnik :
>
> If the log is empty it usually means cinderlib-client.py failed during
> startup, probably because the dependencies are missing.
> python3-cinderlib is required on the engine machine (and ceph-common,
> since you use ceph), python3-os-brick is required on the hosts (and
> ceph-common).
> See the instructions here for ussuri:
> https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/#Set_up_Cinderlib
>
> On Tue, Mar 23, 2021 at 9:20 AM Marc-Christian Schröer
>  wrote:
>
>
> Hello all,
>
> first of all thank you very much for this stable virtualization environment. 
> It has been a pillar for our company’s business for more than 5 years now and 
> after migrating from version 3 to 4 it has been so stable ever since. Anyway, 
> I ran into a problem I cannot fix on my own yesterday:
>
> After a lot of consideration and hesitation since this is a production 
> environment I followed the upgrade guide 
> (https://www.ovirt.org/documentation/upgrade_guide/), configured a vanilla 
> CentOS 8 server as controller, decommissioned the old 4.3 controller and 
> fired up the new one. It worked like a charm until I tried to migrate VMs, 
> start new ones or even create new disks. We use Ceph as managed storage, 
> providing a SSD only and a HDD only pool. The UI simply told me that there 
> was an error.
>
> I started investigating the issue and found corresponding log entries in 
> ovirt-engine.log:
>
> 2021-03-22 10:36:37,247+01 ERROR 
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-24) 
> [67bf193c] cinderlib execution failed:
>
> But that was all the engine had to say about the issue. There was no stack 
> trace or additional information. There is no logfile in 
> /var/log/ovirt-engine/cinderlib/, the directory simply is empty while on the 
> other controller it was frequently filled with annoying „already mounted“ 
> messages.
>
> Can anyone help me with that issue? I searched the web for a solution or 
> someone else with the same problem, but came up empty. Is there a way to turn 
> up the log level for cinderlib? Are there any dependencies I have to install 
> besides the ovirt packages? Any help is very much appreciated!
>
> Kind regards and stay healthy,
>Marc
>
> --
> 
>
> Dipl.-Inform. Marc-Christian Schröer  schro...@ingenit.com
> Geschäftsführer / CEO
> --
> ingenit GmbH 

[ovirt-users] Re: Cinderlib problem after upgrade from 4.3.10 to 4.4.5

2021-03-23 Thread Benny Zlotnik
If the log is empty it usually means cinderlib-client.py failed during
startup, probably because the dependencies are missing.
python3-cinderlib is required on the engine machine (and ceph-common,
since you use ceph), python3-os-brick is required on the hosts (and
ceph-common).
See the instructions here for ussuri:
https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/#Set_up_Cinderlib

On Tue, Mar 23, 2021 at 9:20 AM Marc-Christian Schröer
 wrote:
>
> Hello all,
>
> first of all thank you very much for this stable virtualization environment. 
> It has been a pillar for our company’s business for more than 5 years now and 
> after migrating from version 3 to 4 it has been so stable ever since. Anyway, 
> I ran into a problem I cannot fix on my own yesterday:
>
> After a lot of consideration and hesitation since this is a production 
> environment I followed the upgrade guide 
> (https://www.ovirt.org/documentation/upgrade_guide/), configured a vanilla 
> CentOS 8 server as controller, decommissioned the old 4.3 controller and 
> fired up the new one. It worked like a charm until I tried to migrate VMs, 
> start new ones or even create new disks. We use Ceph as managed storage, 
> providing a SSD only and a HDD only pool. The UI simply told me that there 
> was an error.
>
> I started investigating the issue and found corresponding log entries in 
> ovirt-engine.log:
>
> 2021-03-22 10:36:37,247+01 ERROR 
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-24) 
> [67bf193c] cinderlib execution failed:
>
> But that was all the engine had to say about the issue. There was no stack 
> trace or additional information. There is no logfile in 
> /var/log/ovirt-engine/cinderlib/, the directory simply is empty while on the 
> other controller it was frequently filled with annoying „already mounted“ 
> messages.
>
> Can anyone help me with that issue? I searched the web for a solution or 
> someone else with the same problem, but came up empty. Is there a way to turn 
> up the log level for cinderlib? Are there any dependencies I have to install 
> besides the ovirt packages? Any help is very much appreciated!
>
> Kind regards and stay healthy,
> Marc
>
> --
> 
>
>  Dipl.-Inform. Marc-Christian Schröer  schro...@ingenit.com
>  Geschäftsführer / CEO
>  --
>  ingenit GmbH & Co. KG   Tel. +49 (0)231 58 698-120
>  Emil-Figge-Strasse 76-80Fax. +49 (0)231 58 698-121
>  D-44227 Dortmund   www.ingenit.com
>
>  Registergericht: Amtsgericht Dortmund, HRA 13 914
>  Gesellschafter : Thomas Klute, Marc-Christian Schröer
> 
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3EXKE44PBNBTMZT7VJP4T7RT2XSNQGGY/


[ovirt-users] Re: Python SDK rename virtual machine

2021-03-18 Thread Benny Zlotnik
Probably with the update method, like in the DC example[1]; it
shouldn't be hard to translate this to a VM.


[1] 
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/update_data_center.py#L52
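
For reference, a minimal sketch of the rename with the Python SDK (the
connection details and VM names below are placeholders, not taken from this
thread):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# hypothetical connection details
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=old-name')[0]

# rename by sending a partial Vm object to update()
vms_service.vm_service(vm.id).update(types.Vm(name='new-name'))

connection.close()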

On Thu, Mar 18, 2021 at 11:15 AM Gerard Weatherby  wrote:
>
> Is there an API call in the Ovirt Python SDK to rename an existing virtual 
> machine?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WAIIKPMPVZTTPQMOWKIHSQNSNTYVEU3K/


[ovirt-users] Re: Ovirt - affinity labels names and virtual machines

2021-02-15 Thread Benny Zlotnik
Hi,

I suggest sending questions to ovirt-users so others can find the
answers as well, and to allow others who might know better to chime
in.

As to your question, something like this works for me:
vms_service = connection.system_service().vms_service()
vm_service = vms_service.vm_service('vm-id')
affinity_labels = vm_service.affinity_labels_service().list()

for affinity_label in affinity_labels:
    hosts = connection.follow_link(affinity_label.hosts)
    for host in hosts:
        print(connection.follow_link(host).name)

(I am not sure where exactly the problem is in your code)

On Mon, Feb 15, 2021 at 2:32 PM Lavi Bochnik  wrote:
>
> Hello Benny,
>
> Not sure if I can email you directly, but you helped me once.
> I have some VM's which are labeled with specific affinity.
> Via API I would like to get a VM affinity label, in order to later get the 
> hosts defined under that affinity.
> I tried many ways as:
> vm = vms_service.list(search='')[0]
> print(vm.affinity_labels)
>
> or:
> for affinity_label in affinity_labels:
>     print("%s:" % affinity_label.name)
>     for vm_link in connection.follow_link(affinity_label.vms):
>         vm = connection.follow_link(vm_link)
>         print(" - %s" % vm.name)
>
> Getting an href error: Error: The URL 
> '/ovirt-engine/api/affinitylabels/0d68b36e-455a-4054-9958-0503d54a18db/vms' 
> isn't compatible with the base URL of the connection
>
> Href's are looking fine, as:
> In [77]: for x in affinity_labels_service.list():
> ...: print(x.href)
>
> /ovirt-engine/api/affinitylabels/0d68b36e-455a-4054-9958-0503d54a18db
> /ovirt-engine/api/affinitylabels/862df742-fdc4-4eb0-933c-027729e2187d
> /ovirt-engine/api/affinitylabels/686d96a6-0c9c-4fb7-b453-a79cbd48790b
> /ovirt-engine/api/affinitylabels/6d42df24-c4b1-45e7-8029-0ce26b8ed73d
>
> Or this:
>
>   1 for affinity_label in affinity_labels:
> > 2 for h in connection.follow_link(affinity_label.hosts):
>   3 print(h)
>   4
>   5
>
> /usr/local/lib64/python3.6/site-packages/ovirtsdk4/__init__.py in 
> follow_link(self, obj)
> 782 raise Error(
> 783 "The URL '%s' isn't compatible with the base URL of 
> the "
> --> 784 "connection" % href
> 785 )
> 786
>
> Error: The URL 
> '/ovirt-engine/api/affinitylabels/0d68b36e-455a-4054-9958-0503d54a18db/hosts' 
> isn't compatible with the base URL of the connection
>
> Not clear what is the right way to get a VM affinity hosts list.
>
> Thanks,
> Lavi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EQEXUSXXEGERR3KAWCKJ2E4LLMPDOA5I/


[ovirt-users] Re: lvm tools doesn't show volumes created by ovirt

2021-02-15 Thread Benny Zlotnik
are you using a filter?
try with
$ lvs --config 'devices {filter=["a/.*/"]}'

On Mon, Feb 15, 2021 at 12:36 PM david duchovny  wrote:
>
> after switching to oVirt 4.4.4, when a VM disk has been created, lvs or lvdisplay 
> doesn't show the logical volume info on the vdsm host.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EER65ETC75D7OLR77WQN4IVYAW4BS2CT/


[ovirt-users] Re: OVirt rest api 4.3. How do you get the job id started by the async parameter

2021-02-10 Thread Benny Zlotnik
Sorry, my example URL is messed up, the correct example is:
https://engine/ovirt-engine/api/jobs?search=correlation_id%3Dmycorrelationid
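
A minimal sketch of the same flow with the Python SDK (assuming connection is
an existing ovirtsdk4.Connection; the VM, cluster and template names are
placeholders):

import ovirtsdk4.types as types

# pass a custom correlation id as a query parameter when starting the operation
vms_service = connection.system_service().vms_service()
vms_service.add(
    types.Vm(
        name='demo-vm',
        cluster=types.Cluster(name='my-cluster'),
        template=types.Template(name='my-template'),
    ),
    clone=True,
    query={'correlation_id': 'my-correlation-id'},
)

# later, look up the job(s) started by that request
jobs_service = connection.system_service().jobs_service()
for job in jobs_service.list(search='correlation_id=my-correlation-id'):
    print(job.description, job.status)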

On Wed, Feb 10, 2021 at 2:38 PM Pascal DeMilly  wrote:
>
> Thank you. This is very helpful.  I'll give it a try.
>
> On Wed, Feb 10, 2021, 2:18 AM Benny Zlotnik  wrote:
>>
>> You could do this by setting a custom correlation_id with a query
>> param when invoking the operation, then filter the jobs using the same
>> correlation id:
>> https://engine/ovirt-engine/api/jobs?search%3Dmycorrelationid (it's
>> not a reported field)
>>
>> We do this in our system tests[1]
>>
>> [1] 
>> https://github.com/oVirt/ovirt-system-tests/blob/b4e156f1ee23c8b7d4338c937fa0a1b708a154f3/basic-suite-master/test-scenarios/test_004_basic_sanity.py#L767
>>
>> On Tue, Jan 26, 2021 at 3:54 PM Pascal DeMilly  
>> wrote:
>> >
>> > Hello
>> >
>> > Any ideas how to retrieve the job Id of an async job at the time of 
>> > creation or from a job to know which entities (VM, disk, template) Id it 
>> > was created from
>> >
>> > Thanks
>> >
>> > On Sat, Jan 23, 2021, 10:04 AM Martin Perina  wrote:
>> >>
>> >> Hi Ori,
>> >>
>> >> could you please take a look?
>> >>
>> >> Thanks,
>> >> Martin
>> >>
>> >> On Thu, Jan 21, 2021 at 9:52 PM  wrote:
>> >>>
>> >>> I am using the rest api to create a VM, because the VM is cloned from 
>> >>> the template and it takes a long time, I am also passing the async 
>> >>> parameters hoping to receive back a job id, which I could then query
>> >>>
>> >>> https://x/ovirt-engine/api/vms?async=true=true
>> >>>
>> >>> however I get the new VM record which is fine but then I have no way of 
>> >>> knowing the job id I should query to know when it is finished. And 
>> >>> looking at all jobs there is no reference back to the VM execept for the 
>> >>> description
>> >>>
>> >>>
>> >>>  > >>> id="d17125c7-6668-4b6c-ad22-95121cb66a31">
>> >>> 
>> >>>   > >>> href="/ovirt-engine/api/jobs/d17125c7-6668-4b6c-ad22-95121cb66a31/clear" 
>> >>> rel="clear"/>
>> >>>   > >>> href="/ovirt-engine/api/jobs/d17125c7-6668-4b6c-ad22-95121cb66a31/end" 
>> >>> rel="end"/>
>> >>> 
>> >>> Creating VM DEMO-PCC-4 from Template 
>> >>> MASTER-W10-20H2-CDrive in Cluster d1-c2
>> >>> > >>> href="/ovirt-engine/api/jobs/d17125c7-6668-4b6c-ad22-95121cb66a31/steps" 
>> >>> rel="steps"/>
>> >>> true
>> >>> false
>> >>> 2021-01-21T12:49:06.700-08:00
>> >>> 2021-01-21T12:48:59.453-08:00
>> >>> started
>> >>> > >>> href="/ovirt-engine/api/users/0f2291fa-872a-11e9-b13c-00163e449339" 
>> >>> id="0f2291fa-872a-11e9-b13c-00163e449339"/>
>> >>>   
>> >>
>> >>
>> >>
>> >> --
>> >> Martin Perina
>> >> Manager, Software Engineering
>> >> Red Hat Czech s.r.o.
>> >
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E5SBLTT2W7C777QYNZS6R4VJDAHTCTOP/


[ovirt-users] Re: OVirt rest api 4.3. How do you get the job id started by the async parameter

2021-02-10 Thread Benny Zlotnik
You could do this by setting a custom correlation_id with a query
param when invoking the operation, then filter the jobs using the same
correlation id:
https://engine/ovirt-engine/api/jobs?search%3Dmycorrelationid (it's
not a reported field)

We do this in our system tests[1]

[1] 
https://github.com/oVirt/ovirt-system-tests/blob/b4e156f1ee23c8b7d4338c937fa0a1b708a154f3/basic-suite-master/test-scenarios/test_004_basic_sanity.py#L767

On Tue, Jan 26, 2021 at 3:54 PM Pascal DeMilly  wrote:
>
> Hello
>
> Any ideas how to retrieve the job Id of an async job at the time of creation 
> or from a job to know which entities (VM, disk, template) Id it was created 
> from
>
> Thanks
>
> On Sat, Jan 23, 2021, 10:04 AM Martin Perina  wrote:
>>
>> Hi Ori,
>>
>> could you please take a look?
>>
>> Thanks,
>> Martin
>>
>> On Thu, Jan 21, 2021 at 9:52 PM  wrote:
>>>
>>> I am using the rest api to create a VM, because the VM is cloned from the 
>>> template and it takes a long time, I am also passing the async parameters 
>>> hoping to receive back a job id, which I could then query
>>>
>>> https://x/ovirt-engine/api/vms?async=true=true
>>>
>>> however I get the new VM record which is fine but then I have no way of 
>>> knowing the job id I should query to know when it is finished. And looking 
>>> at all jobs there is no reference back to the VM execept for the description
>>>
>>>
>>>  >> id="d17125c7-6668-4b6c-ad22-95121cb66a31">
>>> 
>>>   >> href="/ovirt-engine/api/jobs/d17125c7-6668-4b6c-ad22-95121cb66a31/clear" 
>>> rel="clear"/>
>>>   >> href="/ovirt-engine/api/jobs/d17125c7-6668-4b6c-ad22-95121cb66a31/end" 
>>> rel="end"/>
>>> 
>>> Creating VM DEMO-PCC-4 from Template 
>>> MASTER-W10-20H2-CDrive in Cluster d1-c2
>>> >> href="/ovirt-engine/api/jobs/d17125c7-6668-4b6c-ad22-95121cb66a31/steps" 
>>> rel="steps"/>
>>> true
>>> false
>>> 2021-01-21T12:49:06.700-08:00
>>> 2021-01-21T12:48:59.453-08:00
>>> started
>>> >> href="/ovirt-engine/api/users/0f2291fa-872a-11e9-b13c-00163e449339" 
>>> id="0f2291fa-872a-11e9-b13c-00163e449339"/>
>>>   
>>
>>
>>
>> --
>> Martin Perina
>> Manager, Software Engineering
>> Red Hat Czech s.r.o.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQL23WFIC6Q2OET7KZ2QUEHZES3HKRYN/


[ovirt-users] Re: Cinderlib died after upgrade Cluster Compatibility version 4.4 ->4.5

2021-01-20 Thread Benny Zlotnik
Can you submit a bug?

Looks like the problem is that `volume_types` is empty; can you run
(assuming you use the default DB name/user):
$ psql -U ovirt_cinderlib -d ovirt_cinderlib -c "\x on" -c "select *
from volume_types"

The output on my system is:
-[ RECORD 1 ]+--------------------------------------
created_at   | 2020-08-05 08:55:20.408367
updated_at   | 2020-08-05 08:55:20.408367
deleted_at   |
deleted      | f
id           | fc7602c6-cb67-416e-a963-56f7f8f7eb42
name         | __DEFAULT__
qos_specs_id |
is_public    | t
description  | Default Volume Type


On Wed, Jan 20, 2021 at 5:13 PM Mike Andreev
 wrote:
>
> hi all,
> after upgrading oVirt 4.4.7 Cluster Compatibility version 4.4 -> 4.5, cinderlib 
> storage died:
>
> tail -f /var/log/ovirt-engine/cinderlib/cinderlib.log
> 2021-01-20 12:35:28,781 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'delete_volume': Volume type with name __DEFAULT__ 
> could not be found. [3d4a9aa9-c5f0-498c-b650-bc64d979f194]
> 2021-01-20 12:43:31,209 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'delete_volume': Volume type with name __DEFAULT__ 
> could not be found. [927a1e82-aa21-4358-b213-120682d85e63]
> 2021-01-20 13:05:32,833 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'create_volume': Volume type with name __DEFAULT__ 
> could not be found. [b645c321-5c64-4c54-972a-f81080bb6b0f]
> 2021-01-20 15:16:45,667 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'connect_volume': Volume type with name __DEFAULT__ 
> could not be found. [19d98e01]
> 2021-01-20 15:16:48,232 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'connect_volume': Volume type with name __DEFAULT__ 
> could not be found. [2088beea]
> 2021-01-20 15:16:50,320 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'disconnect_volume': Volume type with name __DEFAULT__ 
> could not be found. [793923ab]
> 2021-01-20 15:37:28,423 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'connect_volume': Volume type with name __DEFAULT__ 
> could not be found. [5c8f0d73]
> 2021-01-20 15:37:30,707 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'connect_volume': Volume type with name __DEFAULT__ 
> could not be found. [35e96dcd]
> 2021-01-20 15:55:36,179 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'create_volume': Volume type with name __DEFAULT__ 
> could not be found. [b5fbb358-36c7-4d3b-9431-16b3a699f300]
> 2021-01-20 15:57:33,733 - cinderlib-client - ERROR - Failure occurred when 
> trying to run command 'create_volume': Volume type with name __DEFAULT__ 
> could not be found. [ae879975-d36a-4b2b-9829-2713266f1c1f]
>
> in engine.log:
> 2021-01-20 15:57:31,795+01 INFO  
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-2) 
> [ae879975-d36a-4b2b-9829-2713266f1c1f] Running command: AddDiskCommand 
> internal: false. Entities affected :  ID: 
> 32dd7b42-eeb7-4cf5-9bef-bd8f8dd9608e Type: StorageAction group CREATE_DISK 
> with role type USER
> 2021-01-20 15:57:32,014+01 INFO  
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
>  (EE-ManagedExecutorService-commandCoordinator-Thread-1) 
> [ae879975-d36a-4b2b-9829-2713266f1c1f] Running command: 
> AddManagedBlockStorageDiskCommand internal: true.
> 2021-01-20 15:57:33,936+01 ERROR 
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedExecutorService-commandCoordinator-Thread-1) 
> [ae879975-d36a-4b2b-9829-2713266f1c1f] cinderlib execution failed:
> 2021-01-20 15:57:34,012+01 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-2) [] EVENT_ID: USER_ADD_DISK_FINISHED_FAILURE(2,022), Add-Disk 
> operation failed to complete.
> 2021-01-20 15:57:34,090+01 INFO  
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] 
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) 
> [ae879975-d36a-4b2b-9829-2713266f1c1f] Command 'AddDisk' id: 
> 'c7e43317-495e-4b97-96cd-02f02cb20ab2' child commands 
> '[1de7bfa3-09c3-45a5-955a-580236f0296c]' executions were completed, status 
> 'FAILED'
> 2021-01-20 15:57:35,127+01 ERROR 
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] 
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) 
> [ae879975-d36a-4b2b-9829-2713266f1c1f] Ending command 
> 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
> 2021-01-20 15:57:35,135+01 ERROR 
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
>  (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) 
> [ae879975-d36a-4b2b-9829-2713266f1c1f] Ending command 
> 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand'
>  with failure.

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Ceph iSCSI gateway should be supported since 4.1, so I think I can use it for 
>configuring the master domain and still leveraging the same overall storage 
>environment provided by Ceph, correct?

yes, it shouldn't be a problem
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LN6JWSEXX7TTQMWWPUHPFRPTPQQMPUP3/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Thanks for pointing out the requirement for Master domain. In theory, will I 
>be able to satisfy the requirement with another iSCSI or >maybe Ceph iSCSI as 
>master domain?
It should work, as oVirt sees it as a regular domain; CephFS will
probably work too

>So each node has

>- oVirt Node NG / Centos
>- Ceph cluster member
>- iSCSI or Ceph iSCSI master domain

>How practical is such a setup?
Not sure, it could work, but it hasn't been tested and it's likely you
are going to be the first to try it
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PH6K2B2QMTRZPCRNBHWIV4OZB7X3NLHE/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Just for clarification: when you say Managed Block Storage you mean cinderlib 
>integration, >correct?
>Is still this one below the correct reference page for 4.4?
>https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
yes

>So are the manual steps still needed (and also repo config that seems against 
>pike)?
>Or do you have an updated link for configuring cinderlib in 4.4?
It is slightly outdated; I and other users have successfully used
Ussuri. I will update the feature page today.

>Is this true only for Self Hosted Engine Environment or also if I have an 
>external engine?
External engine as well. The reason this is required is that only
regular domains can serve as master domains, and a master domain is
required for a host to get the SPM role.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JUV5F6GKRNFOCXB2BPW2ZY4UUZZ25DTV/


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Benny Zlotnik
Ceph support is available via Managed Block Storage (tech preview); it
cannot be used instead of Gluster for hyperconverged setups.

Moreover, it is not possible to use a pure Managed Block Storage setup
at all; there has to be at least one regular storage domain in a
data center.

On Mon, Jan 18, 2021 at 11:58 AM Shantur Rathore  wrote:
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
> wrote:
>>
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>
>> Hi Strahil,
>>
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>
>> The reason why Ceph appeals me over Gluster because of the following reasons.
>>
>> 1. I have more experience with Ceph than Gluster.
>>
>> That is a good reason to pick CEPH.
>>
>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>> software to offload storage related tasks.
>> 3. Adding Gluster storage limits to 3 hosts at a time.
>>
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
>> as many as you wish as a compute node (won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>> such limitation if I go via Ceph.
>>
>> Actually, it's about Red Hat support for RHHI and not for Gluster + oVirt. 
>> As both oVirt and Gluster, as used here, are upstream projects, support is 
>> on a best-effort basis from the community.
>>
>> In my initial testing I was able to enable Centos repositories in Node Ng 
>> but if I remember correctly, there were some librbd versions present in Node 
>> Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconverge still make sense?
>>
>> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
>> with some of the devs on the list - as there were some changes recently in 
>> oVirt's support for CEPH.
>>
>> Regards
>> Shantur
>>
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
>> wrote:
>>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental, so it would be wise 
>> to consider Gluster also. It has great integration and it's quite easy to 
>> work with.
>>
>>
>> There are users reporting using Ceph with their oVirt, but I can't tell how 
>> good it is.
>> I doubt that oVirt nodes come with Ceph components, so you will most probably 
>> need to use a full-blown distro. In general, using extra software on 
>> oVirt nodes is quite hard.
>>
>> With such a setup, you will need many more nodes than a Gluster setup due to 
>> Ceph's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore 
>>  wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only have 
>> one disk which I plan to partition and use for hyper converged setup. As 
>> this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>
>>
>

[ovirt-users] Re: image upload on Managed Block Storage

2021-01-13 Thread Benny Zlotnik
The workaround I tried with ceph is to use `rbd import` and replace
the volume created by ovirt, the complete steps are:
1. Create an MBS disk in ovirt and find its ID
2. rbd import <image-file> --dest-pool <pool>
3. rbd rm volume-<disk-id> --pool <pool>
4. rbd mv <imported-image-name> volume-<disk-id> --pool <pool>

I only tried it with raw images
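
If it helps, the disk ID for step 1 can be looked up with the Python SDK (a
minimal sketch; the disk name is a placeholder and connection is assumed to
be an existing ovirtsdk4.Connection):

disks_service = connection.system_service().disks_service()
disk = disks_service.list(search='name=my-mbs-disk')[0]
print(disk.id)  # use this as the <disk-id> in the rbd commands above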



On Wed, Jan 13, 2021 at 10:12 AM Henry lol  wrote:
>
> yeah, I'm using ceph as a backend,
> then can oVirt discover/import existing volumes in ceph?
>
> On Wed, 13 Jan 2021 at 5:00 PM, Benny Zlotnik wrote:
>>
>> It's not implemented yet, there are ways to workaround it with either
>> backend specific tools (like rbd) or by attaching the volume, are you
>> using ceph?
>>
>> On Wed, Jan 13, 2021 at 4:13 AM Henry lol  
>> wrote:
>> >
>> > Hello,
>> >
>> > I've just checked I can't upload an image into the MBS block through 
>> > either UI or restAPI.
>> >
>> > So, is there any method to do that?
>> >
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YJZZZFO57RPCJKMDGXLKS3DTDD7YCFFK/


[ovirt-users] Re: image upload on Managed Block Storage

2021-01-13 Thread Benny Zlotnik
It's not implemented yet, there are ways to workaround it with either
backend specific tools (like rbd) or by attaching the volume, are you
using ceph?

On Wed, Jan 13, 2021 at 4:13 AM Henry lol  wrote:
>
> Hello,
>
> I've just checked I can't upload an image into the MBS block through either 
> UI or restAPI.
>
> So, is there any method to do that?
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4EEGGZRSPPSQGM7GSRQN3YO4PTIHBLH/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-28 Thread Benny Zlotnik
On Tue, Dec 22, 2020 at 6:33 PM Konstantin Shalygin  wrote:
>
> Sandro, FYI we are not against cinderlib integration, more than we are 
> upgrade 4.3 to 4.4 due movement to cinderlib.
>
> But (!) the current Managed Block Storage implementation supports only the krbd (kernel 
> RBD) driver - that's also not an option, because the kernel client is always lagging 
> behind librbd, and for every update/bugfix we would have to reboot the whole host instead 
> of simply migrating all VMs away and then migrating them back. Also, with krbd the host 
> will use the kernel page cache, and it will not be unmounted if a VM crashes 
> (qemu with librbd is one userland process).
>

There was rbd-nbd support at some point in cinderlib[1] which
addresses your concerns, but it was removed because of some issues

+Gorka, are there any plans to pick it up again?

[1] 
https://github.com/Akrog/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd1399e4



> So for me current situation look like this:
>
> 1. We update deprecated OpenStack code? Why, Its for delete?.. Nevermind, 
> just update this code...
>
> 2. Hmm... auth tests doesn't work, to pass test just disable any OpenStack 
> project_id related things... and... Done...
>
> 3. I don't care how current cinder + qemu code works, just write new one for 
> linux kernel, it's optimal to use userland apps, just add wrappers (no, it's 
> not);
>
> 4. Current Cinder integration require zero configuration on oVirt hosts. It's 
> lazy, why oVirt administrator do nothing? just write manual how-to install 
> packages - oVirt administrators love anything except "reinstall" from engine 
> (no, it's not);
>
> 5. We broke old code. New features is "Cinderlib is a Technology Preview 
> feature only. Technology Preview features are not supported with Red Hat 
> production service level agreements (SLAs), might not be functionally 
> complete, and Red Hat does not recommend to use them for production".
>
> 6. Oh, we broke old code. Let's deprecate them and close PRODUCTION issues 
> (we didn't see anything).
>
>
> And again, we are not hate new cinderlib integration. We just want that new 
> technology don't break all PRODUCTION clustes. Almost two years ago I write 
> on this issue https://bugzilla.redhat.com/show_bug.cgi?id=1539837#c6 about 
> "before deprecate, let's help to migrate". For now I see that oVirt totally 
> will disable QEMU RBD support and want to use kernel RBD module + python 
> os-brick + userland mappers + shell wrappers.
>
>
> Thanks, I hope I am writing this for a reason and it will help build bridges 
> between the community and the developers. We have been with oVirt for almost 
> 10 years and now it is a crossroads towards a different virtualization 
> manager.
>
> k
>
>
> So I see only regressions for now; I hope we'll find a code owner who can 
> catch these oVirt 4.4-only bugs.
>

I looked at the bugs and I see you've already identified the problem
and have patches attached; if you can submit the patches and verify
them, perhaps we can merge the fixes.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E7QTTECXLUD6LIEE36FBRJ3JSOQO27DP/


[ovirt-users] Re: illegal disk status

2020-12-15 Thread Benny Zlotnik
you can use:
$ vdsm-client Volume delete (you can use --help to see the params)

After this you'll need to remove the corresponding image manually from
the database images table, mark the parent image as active, remove the
snapshot from the snapshots table, and fix the parent snapshot.

Be sure to back up before trying this.
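
If you prefer scripting it, the same call can be made through the vdsm Python
client on the host; this is only a rough sketch, and the exact parameter names
should be checked against `vdsm-client Volume delete --help` on your version:

# run as root on the host; the IDs below are placeholders
from vdsm import client

cli = client.connect('localhost', 54321, use_tls=True)
cli.Volume.delete(
    storagepoolID='<pool-id>',
    storagedomainID='<domain-id>',
    imageID='<image-id>',
    volumeID='<volume-id>',
    postZero=False,
    force=False,
)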


On Sun, Dec 13, 2020 at 5:00 PM Daniel Menzel
 wrote:
>
> Hi,
>
> we have a problem with some VMs which cannot be started anymore due to an 
> illegal disk status of a snapshot.
>
> What happened (most likely)? We tried to snapshot those VMs some days ago, but 
> the storage domain didn't have enough free space left. Yesterday we shut 
> those VMs down - and from then on they didn't start anymore.
>
> What have I tried so far?
>
> Via the web interface I tried to remove the snapshot - didn't work.
> Searched the internet. Found (among other stuff) this: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1649129
> via vdsm-tool dump-volume-chains I managed to list those 5 snapshots (see 
> below).
>
> The output for one machine was:
>
>image:2d707743-4a9e-40bb-b223-83e3be672dfe
>
>  - 9ae6ea73-94b4-4588-9a6b-ea7a58ef93c9
>status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, 
> type: PREALLOCATED, capacity: 32212254720, truesize: 32212254720
>
>  - f7d2c014-e8f5-4413-bfc5-4aa1426cb1e2
>status: ILLEGAL, voltype: LEAF, format: COW, legality: 
> ILLEGAL, type: SPARSE, capacity: 32212254720, truesize: 29073408
>
> So my idea was to follow the said bugzilla thread and update the volume - but 
> I didn't manage to find input for the job_id and generation.
>
> So my question is: Does anyone have an idea on how to (force) remove a given 
> snapshot via vdsm-{tool|client}?
>
> Thanks in advance!
> Daniel
>
> --
> Daniel Menzel
> Geschäftsführer
>
> Menzel IT GmbH
> Charlottenburger Str. 33a
> 13086 Berlin
>
> +49 (0) 30 / 5130 444 - 00
> daniel.men...@menzel-it.net
> https://menzel-it.net
>
> Geschäftsführer: Daniel Menzel, Josefin Menzel
> Unternehmenssitz: Berlin
> Handelsregister: Amtsgericht Charlottenburg
> Handelsregister-Nummer: HRB 149835 B
> USt-ID: DE 309 226 751
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTXEQDZSFUTOHFIAOQBMTCA2NMICCBUQ/


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-10 Thread Benny Zlotnik
yes, the VM looks fine... to investigate this further I'd need the
full vdsm log with the error; please share it

On Wed, Dec 9, 2020 at 3:01 PM Joseph Goldman  wrote:
>
> Attached XML dump.
>
> Looks like it let me run a 'reboot' but I'm afraid to do a shutdown at
> this point.
>
> I have taken just a raw copy of the whole image group folder in the hope
> that if worse comes to worst I'd be able to recreate the disk from the actual
> files.
>
> All existing files seem to be referenced in the xmldump.
>
> On 9/12/2020 11:54 pm, Benny Zlotnik wrote:
> > The VM is running, right?
> > Can you run:
> > $ virsh -r dumpxml 
> >
> > On Wed, Dec 9, 2020 at 2:01 PM Joseph Goldman  wrote:
> >> Looks like the physical files don't exist:
> >>
> >> 2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] START
> >> merge(drive={u'imageID': u'23710238-07c2-46f3-96c0-9061fe1c3e0d',
> >> u'volumeID': u'4b6f7ca1-b70d-4893-b473-d8d30138bb6b', u'domainID':
> >> u'74c06ce1-94e6-4064-9d7d-69e1d956645b', u'poolID':
> >> u'e2540c6a-33c7-4ac7-b2a2-175cf51994c2'},
> >> baseVolUUID=u'c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1',
> >> topVolUUID=u'a6d4533b-b0b0-475d-a436-26ce99a38d94', bandwidth=u'0',
> >> jobUUID=u'ff193892-356b-4db8-b525-e543e8e69d6a')
> >> from=:::192.168.5.10,56030,
> >> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> >> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:48)
> >>
> >> 2020-12-09 22:01:00,122+1000 INFO  (jsonrpc/4) [api.virt] FINISH merge
> >> return={'status': {'message': 'Drive image file could not be found',
> >> 'code': 13}} from=:::192.168.5.10,56030,
> >> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> >> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:54)
> >>
> >> Although looking on the physical file system they seem to exist:
> >>
> >> [root@ov-node1 23710238-07c2-46f3-96c0-9061fe1c3e0d]# ll
> >> total 56637572
> >> -rw-rw. 1 vdsm kvm  15936061440 Dec  9 21:51
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
> >> -rw-rw. 1 vdsm kvm  1048576 Dec  8 01:11
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.lease
> >> -rw-r--r--. 1 vdsm kvm  252 Dec  9 21:37
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.meta
> >> -rw-rw. 1 vdsm kvm  21521825792 Dec  8 01:47
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94
> >> -rw-rw. 1 vdsm kvm  1048576 May 17  2020
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94.lease
> >> -rw-r--r--. 1 vdsm kvm  256 Dec  8 01:49
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94.meta
> >> -rw-rw. 1 vdsm kvm 107374182400 Dec  9 01:13
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1
> >> -rw-rw. 1 vdsm kvm  1048576 Feb 24  2020
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.lease
> >> -rw-r--r--. 1 vdsm kvm  320 May 17  2020
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.meta
> >>
> >> The UUID's match the UUID's in the snapshot list.
> >>
> >> So much stuff happens in vdsm.log its hard to pinpoint whats going on
> >> but grepping 'c149117a-1080-424c-85d8-3de2103ac4ae' (flow-id) shows
> >> pretty much those 2 calls and then XML dump.
> >>
> >> Still a bit lost on the most comfortable way forward unfortunately.
> >>
> >> On 8/12/2020 11:15 pm, Benny Zlotnik wrote:
> >>>> [root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
> >>> grepping error is ok, but it does not show the reason for the failure,
> >>> which will probably be on the vdsm host (you can use flow_id
> >>> 9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file
> >>> Need to see the underlying error causing: VDSGenericException:
> >>> VDSErrorException: Failed to SnapshotVDS, error =
> >>> Snapshot failed, code = 48
> >>>
> >>>> Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
> >>>> DB) and then trying to create does not change it back to illegal, but
> >>>> trying to delete that snapshot fails and sets it back to 4.
> >>> I see, can you share the removal failure log (similar information as
> >>> requested above)
> >>>
> >>> regarding backup, I don't have a good answer, hopefully someone else
> >>> has suggestions
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGM4MNKBHS7EWOIPS6WGVQSKEKLKDAQ7/


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-09 Thread Benny Zlotnik
The VM is running, right?
Can you run:
$ virsh -r dumpxml 

On Wed, Dec 9, 2020 at 2:01 PM Joseph Goldman  wrote:
>
> Looks like the physical files don't exist:
>
> 2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] START
> merge(drive={u'imageID': u'23710238-07c2-46f3-96c0-9061fe1c3e0d',
> u'volumeID': u'4b6f7ca1-b70d-4893-b473-d8d30138bb6b', u'domainID':
> u'74c06ce1-94e6-4064-9d7d-69e1d956645b', u'poolID':
> u'e2540c6a-33c7-4ac7-b2a2-175cf51994c2'},
> baseVolUUID=u'c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1',
> topVolUUID=u'a6d4533b-b0b0-475d-a436-26ce99a38d94', bandwidth=u'0',
> jobUUID=u'ff193892-356b-4db8-b525-e543e8e69d6a')
> from=:::192.168.5.10,56030,
> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:48)
>
> 2020-12-09 22:01:00,122+1000 INFO  (jsonrpc/4) [api.virt] FINISH merge
> return={'status': {'message': 'Drive image file could not be found',
> 'code': 13}} from=:::192.168.5.10,56030,
> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:54)
>
> Although looking on the physical file system they seem to exist:
>
> [root@ov-node1 23710238-07c2-46f3-96c0-9061fe1c3e0d]# ll
> total 56637572
> -rw-rw. 1 vdsm kvm  15936061440 Dec  9 21:51
> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
> -rw-rw. 1 vdsm kvm  1048576 Dec  8 01:11
> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.lease
> -rw-r--r--. 1 vdsm kvm  252 Dec  9 21:37
> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.meta
> -rw-rw. 1 vdsm kvm  21521825792 Dec  8 01:47
> a6d4533b-b0b0-475d-a436-26ce99a38d94
> -rw-rw. 1 vdsm kvm  1048576 May 17  2020
> a6d4533b-b0b0-475d-a436-26ce99a38d94.lease
> -rw-r--r--. 1 vdsm kvm  256 Dec  8 01:49
> a6d4533b-b0b0-475d-a436-26ce99a38d94.meta
> -rw-rw. 1 vdsm kvm 107374182400 Dec  9 01:13
> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1
> -rw-rw. 1 vdsm kvm  1048576 Feb 24  2020
> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.lease
> -rw-r--r--. 1 vdsm kvm  320 May 17  2020
> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.meta
>
> The UUID's match the UUID's in the snapshot list.
>
> So much stuff happens in vdsm.log its hard to pinpoint whats going on
> but grepping 'c149117a-1080-424c-85d8-3de2103ac4ae' (flow-id) shows
> pretty much those 2 calls and then XML dump.
>
> Still a bit lost on the most comfortable way forward unfortunately.
>
> On 8/12/2020 11:15 pm, Benny Zlotnik wrote:
> >> [root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
> > grepping error is ok, but it does not show the reason for the failure,
> > which will probably be on the vdsm host (you can use flow_id
> > 9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file
> > Need to see the underlying error causing: VDSGenericException:
> > VDSErrorException: Failed to SnapshotVDS, error =
> > Snapshot failed, code = 48
> >
> >> Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
> >> DB) and then trying to create does not change it back to illegal, but
> >> trying to delete that snapshot fails and sets it back to 4.
> > I see, can you share the removal failure log (similar information as
> > requested above)
> >
> > regarding backup, I don't have a good answer, hopefully someone else
> > has suggestions
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CAKIPJCTQHNNVLZWQLLZXCJPKDLVIKKL/


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-08 Thread Benny Zlotnik
>[root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
grepping for ERROR is ok, but it does not show the reason for the failure,
which will probably be on the vdsm host (you can use flow_id
9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file).
Need to see the underlying error causing: VDSGenericException:
VDSErrorException: Failed to SnapshotVDS, error =
Snapshot failed, code = 48

>Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
>DB) and then trying to create does not change it back to illegal, but
>trying to delete that snapshot fails and sets it back to 4.
I see, can you share the removal failure log (similar information as
requested above)

regarding backup, I don't have a good answer, hopefully someone else
has suggestions
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MJHKYBPBTINAWY4VDSLLZZPWYI2O3SHB/


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-08 Thread Benny Zlotnik
Do you know why your snapshot creation failed? Do you have logs with the error?

On paper the situation does not look too bad, as the only discrepancy
between the database and vdsm is the status of the image, and since
it's legal on vdsm, changing it to legal in the database should work (image
status 1).

>Active Image is not the same image that has a parentid of all 0
Can you elaborate on this? The image with the empty parent is usually
the base image (the first active image); the active image will usually
be the leaf (unless the VM is in preview or something similar).

Of course do not make any changes without backing up first
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z3FBWNGWNQ3U4UAPXD7CXLLIRP25Y3BS/


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
It should be in the images table; there is an it_guid column which
indicates which template the image is based on.

On Wed, Dec 2, 2020 at 2:16 PM <3c.moni...@gruppofilippetti.it> wrote:

> Hi,
> if I can ask some other info, probably I find a "ghost disk" related to
> previous problem.
>
> Infact, I still cannot remove the broken template, because its disk is
> still registered somewhere; can You please suggest me where to search for
> it?
>
> Thanks a lot.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IOK6IUSOOOAIIIZQ6YGXLRKIBCLC75F2/


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
These are the available statuses[1]; you can change it to 0, assuming the
VM is down.


[1]
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/common/src/main/java/org/ovirt/engine/core/common/businessentities/VMStatus.java#L10

On Wed, Dec 2, 2020 at 12:57 PM <3c.moni...@gruppofilippetti.it> wrote:

> Hi.
> It's correct.
> But how unlock / change / remove it?
> In the same table, a lot of fields are empty, "0" or NULL.
> Only vm_guid and status have a value.
> Thanks,
> M.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AD3SUCPR2TR73VXTINI522M5BDTJ56FU/


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
I am not sure what is locked? If everything in the images table is 1, then
the disks are not locked. If the VM is in status 15, which is "Images
Locked" status, then this status is set in the vm_dynamic table

On Wed, Dec 2, 2020 at 12:43 PM <3c.moni...@gruppofilippetti.it> wrote:

> Hi,
> in this table all imagestatus = 1
>
> Any other ideas?
>
> Thanks a lot,
> M.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DW43N255FSL62JNJMNFJH57NZTNOOODY/


[ovirt-users] Re: ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom

2020-12-02 Thread Benny Zlotnik
imagestatus is in the images table, not vms

On Wed, Dec 2, 2020 at 11:30 AM <3c.moni...@gruppofilippetti.it> wrote:

> Hi.
> I did a full select on "vms" and the field "imagestatus" isn't there!
> Maybe this is the reason the tool is unable to manage it?
> Follows full field list:
>
>
> "vm_name","mem_size_mb","max_memory_size_mb","num_of_io_threads","nice_level","cpu_shares","vmt_guid","os","description","free_text_comment","cluster_id","creation_date","auto_startup","lease_sd_id","lease_info","is_stateless","is_smartcard_enabled","is_delete_protected","sso_method","dedicated_vm_for_vds","default_boot_sequence","vm_type","vm_pool_spice_proxy","cluster_name","transparent_hugepages","trusted_service","storage_pool_id","storage_pool_name","cluster_spice_proxy","vmt_name","status","vm_ip","vm_ip_inet_array","vm_host","last_start_time","boot_time","downtime","guest_cur_user_name","console_cur_user_name","runtime_name","guest_os","console_user_id","guest_agent_nics_hash","run_on_vds","migrating_to_vds","app_list","vm_pool_name","vm_pool_id","vm_guid","num_of_monitors","single_qxl_pci","allow_console_reconnect","is_initialized","num_of_sockets","cpu_per_socket","threads_per_cpu","usb_policy","acpi_enable","session","num_of_cpus","quota_id","quota_name","quota_enforcement_
>
>  
> type","boot_sequence","utc_diff","client_ip","guest_requested_memory","time_zone","cpu_user","cpu_sys","elapsed_time","usage_network_percent","disks_usage","usage_mem_percent","usage_cpu_percent","run_on_vds_name","cluster_cpu_name","default_display_type","priority","iso_path","origin","cluster_compatibility_version","initrd_url","kernel_url","kernel_params","pause_status","exit_message","exit_status","migration_support","predefined_properties","userdefined_properties","min_allocated_mem","hash","cpu_pinning","db_generation","host_cpu_flags","tunnel_migration","vnc_keyboard_layout","is_run_and_pause","created_by_user_id","last_watchdog_event","last_watchdog_action","is_run_once","volatile_run","vm_fqdn","cpu_name","emulated_machine","current_cd","reason","exit_reason","instance_type_id","image_type_id","architecture","original_template_id","original_template_name","last_stop_time","migration_downtime","template_version_number","serial_number_policy","custom_serial_number","is_boot_m
>
>  
> enu_enabled","guest_cpu_count","next_run_config_exists","is_previewing_snapshot","numatune_mode","is_spice_file_transfer_enabled","is_spice_copy_paste_enabled","cpu_profile_id","is_auto_converge","is_migrate_compressed","custom_emulated_machine","bios_type","custom_cpu_name","spice_port","spice_tls_port","spice_ip","vnc_port","vnc_ip","ovirt_guest_agent_status","qemu_guest_agent_status","guest_mem_buffered","guest_mem_cached","small_icon_id","large_icon_id","migration_policy_id","provider_id","console_disconnect_action","resume_behavior","guest_timezone_offset","guest_timezone_name","guestos_arch","guestos_codename","guestos_distribution","guestos_kernel_version","guestos_type","guestos_version","custom_compatibility_version","guest_containers","has_illegal_images","multi_queues_enabled"
>
> And just to let you know, its "status = 15".
>
> Please let me know.
> Thanks,
> M.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WRXOGGQ7ZPVIQEEZ4SDNCQXU5CQ6RPFC/


[ovirt-users] Re: Check multipath status using API

2020-11-26 Thread Benny Zlotnik
It is implemented; there is no special API for this. Using the events
endpoint (ovirt-engine/api/events) is the way to access this information.
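
For example, a minimal sketch with the Python SDK (assuming connection is an
existing ovirtsdk4.Connection; matching on the description is only an
illustration, as the exact event codes for multipath path faults may differ):

events_service = connection.system_service().events_service()
for event in events_service.list(max=200):
    if 'multipath' in (event.description or '').lower():
        print(event.time, event.code, event.description)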

On Thu, Nov 26, 2020 at 3:00 PM Paulo Silva  wrote:

> Hi,
>
> Is it possible to check the multipath status using the current REST API on
> ovirt?
>
> There is an old page that hints at this but I'm not sure if this has been
> implemented:
>
>
> https://www.ovirt.org/develop/release-management/features/storage/multipath-events.html
>
> Thanks
> --
> Paulo Silva 
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SR32GTHZ4BCWMUI4LFZFMPQWYAJP2HWG/


[ovirt-users] Re: Backporting of Fixes

2020-11-15 Thread Benny Zlotnik
Hi,

4.3 is no longer maintained.
Regardless, this bug was never reproduced and has no fixes attached to it,
so there is nothing to backport. The related bugs and their fixes are all
related to changes that were introduced in 4.4, so it is unlikely you hit
the same issue.
If you can share more details and attach logs we may know more.

On Thu, Nov 12, 2020 at 11:24 PM Gillingham, Eric J (US 393D) via Users <
users@ovirt.org> wrote:

> I'm still running on oVirt 4.3 due to some hardware that will require some
> extra effort to move to 4.4 we're not quite ready to do yet, and am
> currently hitting what I believe to be
> https://bugzilla.redhat.com/show_bug.cgi?id=1820998 which is fixed in
> 4.4. I'm wondering if there's a process to request a backport, or should I
> just open a new bug against 4.3?
>
> Thank You
> - Eric
>


[ovirt-users] Re: LiveStorageMigration fail

2020-11-09 Thread Benny Zlotnik
Which version are you using?
Did this happen more than once for the same disk?
A similar bug was fixed in 4.3.10.1[1].
There is another bug with a similar symptom which occurs very rarely, and we
were unable to reproduce it.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1758048

On Mon, Nov 9, 2020 at 3:57 PM Christoph Köhler <
koeh...@luis.uni-hannover.de> wrote:

> Hello experts,
>
> perhaps someone has an idea about that error. It appears when I try to
> migrate a disk to another storage, and this live. Generally it works
> well, but - this is the log snippet:
>
> HSMGetAllTasksStatusesVDS failed: Error during destination image
> manipulation: u"image=02240cf3-65b6-487c-b5af-c266a1dd18f8, dest
> domain=3c4fbbfe-6796-4007-87ab-d7f205b7fae3: msg=Invalid parameter:
> 'capacity=134217728000'"
>
> Surely there is enough space on the target domain for this operation (~4TB).
>
> Any ideas..?
>
> Greetings from
> Chris


[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-05 Thread Benny Zlotnik
You mean the disk physically resides on one storage domain, but
the engine sees it on another?
Which version did this happen on?
Do you have the logs from this failure?

On Tue, Nov 3, 2020 at 5:51 PM  wrote:

>
>
> I used it but it didn't work. The disk is still in locked status.
>
> When I run the unlock_entity.sh script, it doesn't show that the disk is
> locked,
>
> but I was able to identify that the disk was moved to the other
> storage, while it is still shown as being in the old storage.


[ovirt-users] Re: locked disk making it impossible to remove vm

2020-11-03 Thread Benny Zlotnik
Do you know why it was stuck?

You can use unlock_entity.sh[1] to unlock the disk.


[1]
https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
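For reference, a typical invocation looks roughly like this (a sketch; the exact
options can differ between versions, so check `-h` first):

  $ cd /usr/share/ovirt-engine/setup/dbutils
  $ ./unlock_entity.sh -t disk -q            # list locked disks
  $ ./unlock_entity.sh -t disk <disk-id>     # unlock a specific disk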

On Tue, Nov 3, 2020 at 1:38 PM  wrote:

> I have a VM that has two disks, one active and another disabled. When
> trying to migrate the disk to another storage, the task got stuck in a loop
> creating several snapshots. I turned off the VM and the loop stopped, and
> after several hours the task disappeared, but the VM disk was left locked,
> making it impossible to delete it; when trying to delete the VM, it is not
> removed and shows the following message: locked disk making it impossible to
> remove vm
>
>
> How can I solve this?


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Sorry, I accidentally hit send prematurely. The database table is
driver_options; the options are stored as JSON under driver_options.
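So, something like this can be used to inspect what is stored there (a sketch;
verify the actual column layout before attempting any UPDATE of the JSON):

  $ psql -U engine -d engine -c "\x on" -c "select * from driver_options;"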

On Wed, Oct 14, 2020 at 5:32 PM Benny Zlotnik  wrote:
>
> Did you attempt to start a VM with this disk and it failed, or you
> didn't try at all? If it's the latter then the error is strange...
> If it's the former there is a known issue with multipath at the
> moment, see[1] for a workaround, since you might have issues with
> detaching volumes which later, because multipath grabs the rbd devices
> which would fail `rbd unmap`, it will be fixed soon by automatically
> blacklisting rbd in multipath configuration.
>
> Regarding editing, you can submit an RFE for this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
>
>
>
>
> On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
> >
> > On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > > to add `rbd default features = 3` to the configuration. I think there
> > > are plans to support rbd-nbd in cinderlib which would allow using
> > > additional features, but I'm not aware of anything concrete.
> > >
> > > Additionally, the path for the cinderlib log is
> > > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > > would appear in the vdsm.log on the relevant host, and would look
> > > something like "RBD image feature set mismatch. You can disable
> > > features unsupported by the kernel with 'rbd feature disable'"
> >
> > Thanks for the pointer!  Indeed,
> > /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> > looking for.  In this case, it was a user error entering the RBDDriver
> > options:
> >
> >
> > 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> > option use_multipath_for_xfer
> >
> > ...it should have been 'use_multipath_for_image_xfer'.
> >
> > Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> > Domains -> Manage Domain', all driver options are uneditable except for
> > 'Name'.
> >
> > Then I thought that maybe I can't edit the driver options while a disk
> > still exists, so I tried removing the one disk in this domain.  But even
> > after multiple attempts, it still fails with:
> >
> > 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> > volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> > 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> > when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> > update or delete on table "volumes" violates foreign key constraint
> > "volume_attachment_volume_id_fkey" on table "volume_attachment"
> > DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> > referenced from table "volume_attachment".
> >
> > See https://pastebin.com/KwN1Vzsp for the full log entries related to
> > this removal.
> >
> > It's not lying, the volume no longer exists in the rbd pool, but the
> > cinder database still thinks it's attached, even though I was never able
> > to get it to attach to a VM.
> >
> > What are my options for cleaning up this stale disk in the cinder database?
> >
> > How can I update the driver options in my storage domain (deleting and
> > recreating the domain is acceptable, if possible)?
> >
> > --Mike
> >


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Did you attempt to start a VM with this disk and it failed, or you
didn't try at all? If it's the latter then the error is strange...
If it's the former, there is a known issue with multipath at the
moment; see [1] for a workaround. You might otherwise have issues with
detaching volumes later, because multipath grabs the rbd devices,
which makes `rbd unmap` fail. It will be fixed soon by automatically
blacklisting rbd in the multipath configuration.

Regarding editing, you can submit an RFE for this, but it is currently
not possible. The options are indeed to either recreate the storage
domain or edit the database table


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
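For reference, the workaround in [1] boils down to a blacklist entry along these
lines (a sketch; the file name is an example, take the exact stanza from the bug
comment), followed by a reload/restart of multipathd:

  # e.g. /etc/multipath/conf.d/rbd.conf
  blacklist {
      devnode "^rbd[0-9]*"
  }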




On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
>
> On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > to add `rbd default features = 3` to the configuration. I think there
> > are plans to support rbd-nbd in cinderlib which would allow using
> > additional features, but I'm not aware of anything concrete.
> >
> > Additionally, the path for the cinderlib log is
> > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > would appear in the vdsm.log on the relevant host, and would look
> > something like "RBD image feature set mismatch. You can disable
> > features unsupported by the kernel with 'rbd feature disable'"
>
> Thanks for the pointer!  Indeed,
> /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> looking for.  In this case, it was a user error entering the RBDDriver
> options:
>
>
> 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> option use_multipath_for_xfer
>
> ...it should have been 'use_multipath_for_image_xfer'.
>
> Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> Domains -> Manage Domain', all driver options are uneditable except for
> 'Name'.
>
> Then I thought that maybe I can't edit the driver options while a disk
> still exists, so I tried removing the one disk in this domain.  But even
> after multiple attempts, it still fails with:
>
> 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> update or delete on table "volumes" violates foreign key constraint
> "volume_attachment_volume_id_fkey" on table "volume_attachment"
> DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> referenced from table "volume_attachment".
>
> See https://pastebin.com/KwN1Vzsp for the full log entries related to
> this removal.
>
> It's not lying, the volume no longer exists in the rbd pool, but the
> cinder database still thinks it's attached, even though I was never able
> to get it to attach to a VM.
>
> What are my options for cleaning up this stale disk in the cinder database?
>
> How can I update the driver options in my storage domain (deleting and
> recreating the domain is acceptable, if possible)?
>
> --Mike
>


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
tachDiskToVmCommand' failed:
> EngineException: java.lang.NullPointerException (Failed with error
> ENGINE and code 5001)
> 2020-10-13 15:15:26,013-05 ERROR
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Transaction rolled-back for command
> 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand'.
> 2020-10-13 15:15:26,021-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-13) [7cb262cc] EVENT_ID:
> USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk testvm_disk
> to VM grafana (User: michael.thomas@internal-authz).
> 2020-10-13 15:15:26,021-05 INFO
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Lock freed to object
> 'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]',
> sharedLocks=''}'
>
> The /var/log/cinder/ directory on the ovirt node is empty, and doesn't
> exist on the engine itself.
>
> To verify that it's not a cephx permission issue, I tried accessing the
> block storage from both the engine and the ovirt node using the
> credentials I set up in the ManagedBlockStorage setup page:
>
> [root@ovirt4]# rbd --id ovirt ls rbd.ovirt.data
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> [root@ovirt4]# rbd --id ovirt info
> rbd.ovirt.data/volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> rbd image 'volume-5419640e-445f-4b3f-a29d-b316ad031b7a':
>  size 100 GiB in 25600 objects
>  order 22 (4 MiB objects)
>  snapshot_count: 0
>  id: 68a7cd6aeb3924
>  block_name_prefix: rbd_data.68a7cd6aeb3924
>  format: 2
>  features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>  op_features:
>  flags:
>  create_timestamp: Tue Oct 13 06:53:55 2020
>  access_timestamp: Tue Oct 13 06:53:55 2020
>  modify_timestamp: Tue Oct 13 06:53:55 2020
>
> Where else can I look to see where it's failing?
>
> --Mike
>
> On 9/30/20 2:19 AM, Benny Zlotnik wrote:
> > When you ran `engine-setup` did you enable cinderlib preview (it will
> > not be enabled by default)?
> > It should handle the creation of the database automatically, if you
> > didn't you can enable it by running:
> > `engine-setup --reconfigure-optional-components`
> >
> >
> > On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
> >>
> >> Hi Benny,
> >>
> >> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> >> Octopus.  Then I tried using these instructions, as well as the deep
> >> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
> >>
> >> I've done this a couple of times, and each time the engine fails when I
> >> try to add the new managed block storage domain.  The error on the
> >> screen indicates that it can't connect to the cinder database.  The
> >> error in the engine log is:
> >>
> >> 2020-09-29 17:02:11,859-05 WARN
> >> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> >> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> >> action 'AddManagedBlockStorageDomain' failed for user
> >> admin@internal-authz. Reasons:
> >> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
> >>
> >> I had created the db on the engine with this command:
> >>
> >> su - postgres -c "psql -d template1 -c \"create database cinder owner
> >> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> >> lc_ctype 'en_US.UTF-8';\""
> >>
> >> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
> >>
> >>   host    cinder  engine  ::0/0   md5
> >>   host    cinder  engine  0.0.0.0/0   md5
> >>
> >> Is there anywhere else I should look to find out what may have gone wrong?
> >>
> >> --Mike
> >>
> >> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> >>> The feature is currently in tech preview, but it's being worked on.
> >>> The feature page is outdated,  but I believe this is what most users
> >>> in the mailing list were using. We held off on updating it because the
> >>> installation instructions have been a moving target, but it is more
> >>> stable now and I will update it soon.
> >>>
> >>> Specifically speaking, the openstack ver

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
Not sure about this, adding +Yedidyah Bar David

On Wed, Sep 30, 2020 at 3:04 PM Michael Thomas  wrote:
>
> I hadn't installed the necessary packages when the engine was first
> installed.
>
> However, running 'engine-setup --reconfigure-optional-components'
> doesn't work at the moment because (by design) my engine does not have a
> network route outside of the cluster.  It fails with:
>
> [ INFO  ] DNF Errors during downloading metadata for repository 'AppStream':
> - Curl error (7): Couldn't connect to server for
> http://mirrorlist.centos.org/?release=8=x86_64=AppStream=$infra
> [Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]
> [ ERROR ] DNF Failed to download metadata for repo 'AppStream': Cannot
> prepare internal mirrorlist: Curl error (7): Couldn't connect to server
> for
> http://mirrorlist.centos.org/?release=8=x86_64=AppStream=$infra
> [Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]
>
>
> I have a proxy set in the engine's /etc/dnf/dnf.conf, but it doesn't
> seem to be obeyed when running engine-setup.  Is there another way that
> I can get engine-setup to use a proxy?
>
> --Mike
>
>
> On 9/30/20 2:19 AM, Benny Zlotnik wrote:
> > When you ran `engine-setup` did you enable cinderlib preview (it will
> > not be enabled by default)?
> > It should handle the creation of the database automatically, if you
> > didn't you can enable it by running:
> > `engine-setup --reconfigure-optional-components`
> >
> >
> > On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
> >>
> >> Hi Benny,
> >>
> >> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> >> Octopus.  Then I tried using these instructions, as well as the deep
> >> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
> >>
> >> I've done this a couple of times, and each time the engine fails when I
> >> try to add the new managed block storage domain.  The error on the
> >> screen indicates that it can't connect to the cinder database.  The
> >> error in the engine log is:
> >>
> >> 2020-09-29 17:02:11,859-05 WARN
> >> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> >> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> >> action 'AddManagedBlockStorageDomain' failed for user
> >> admin@internal-authz. Reasons:
> >> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
> >>
> >> I had created the db on the engine with this command:
> >>
> >> su - postgres -c "psql -d template1 -c \"create database cinder owner
> >> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> >> lc_ctype 'en_US.UTF-8';\""
> >>
> >> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
> >>
> >>   host    cinder  engine  ::0/0   md5
> >>   host    cinder  engine  0.0.0.0/0   md5
> >>
> >> Is there anywhere else I should look to find out what may have gone wrong?
> >>
> >> --Mike
> >>
> >> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> >>> The feature is currently in tech preview, but it's being worked on.
> >>> The feature page is outdated,  but I believe this is what most users
> >>> in the mailing list were using. We held off on updating it because the
> >>> installation instructions have been a moving target, but it is more
> >>> stable now and I will update it soon.
> >>>
> >>> Specifically speaking, the openstack version should be updated to
> >>> train (it is likely ussuri works fine too, but I haven't tried it) and
> >>> cinderlib has an RPM now (python3-cinderlib)[1], so it can be
> >>> installed instead of using pip, same goes for os-brick. The rest of
> >>> the information is valid.
> >>>
> >>>
> >>> [1] 
> >>> http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
> >>>
> >>> On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
> >>>>
> >>>> I'm looking for the latest documentation for setting up a Managed Block
> >>>> Device storage domain so that I can move some of my VM images to ceph 
> >>>> rbd.
> >>>>
> >>>> I found this:
&

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Benny Zlotnik
When you ran `engine-setup`, did you enable the cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically. If you
didn't, you can enable it by running:
`engine-setup --reconfigure-optional-components`


On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:
>
> Hi Benny,
>
> Thanks for the confirmation.  I've installed openstack-ussuri and ceph
> Octopus.  Then I tried using these instructions, as well as the deep
> dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.
>
> I've done this a couple of times, and each time the engine fails when I
> try to add the new managed block storage domain.  The error on the
> screen indicates that it can't connect to the cinder database.  The
> error in the engine log is:
>
> 2020-09-29 17:02:11,859-05 WARN
> [org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
> (default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
> action 'AddManagedBlockStorageDomain' failed for user
> admin@internal-authz. Reasons:
> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED
>
> I had created the db on the engine with this command:
>
> su - postgres -c "psql -d template1 -c \"create database cinder owner
> engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
> lc_ctype 'en_US.UTF-8';\""
>
> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
>
>  host    cinder  engine  ::0/0   md5
>  host    cinder  engine  0.0.0.0/0   md5
>
> Is there anywhere else I should look to find out what may have gone wrong?
>
> --Mike
>
> On 9/29/20 3:34 PM, Benny Zlotnik wrote:
> > The feature is currently in tech preview, but it's being worked on.
> > The feature page is outdated,  but I believe this is what most users
> > in the mailing list were using. We held off on updating it because the
> > installation instructions have been a moving target, but it is more
> > stable now and I will update it soon.
> >
> > Specifically speaking, the openstack version should be updated to
> > train (it is likely ussuri works fine too, but I haven't tried it) and
> > cinderlib has an RPM now (python3-cinderlib)[1], so it can be
> > installed instead of using pip, same goes for os-brick. The rest of
> > the information is valid.
> >
> >
> > [1] 
> > http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
> >
> > On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
> >>
> >> I'm looking for the latest documentation for setting up a Managed Block
> >> Device storage domain so that I can move some of my VM images to ceph rbd.
> >>
> >> I found this:
> >>
> >> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
> >>
> >> ...but it has a big note at the top that it is "...not user
> >> documentation and should not be treated as such."
> >>
> >> The oVirt administration guide[1] does not talk about managed block 
> >> devices.
> >>
> >> I've found a few mailing list threads that discuss people setting up a
> >> Managed Block Device with ceph, but didn't see any links to
> >> documentation steps that folks were following.
> >>
> >> Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
> >> and if so, where is the documentation for using it?
> >>
> >> --Mike
> >> [1]ovirt.org/documentation/administration_guide/
> >
>


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-29 Thread Benny Zlotnik
The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the OpenStack version should be updated to
Train (it is likely Ussuri works fine too, but I haven't tried it), and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip; the same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
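For example (a sketch; python3-cinderlib is named above, python3-os-brick is an
assumption and should be confirmed against the repository):

  $ dnf install -y python3-cinderlib python3-os-brick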

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:
>
> I'm looking for the latest documentation for setting up a Managed Block
> Device storage domain so that I can move some of my VM images to ceph rbd.
>
> I found this:
>
> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> ...but it has a big note at the top that it is "...not user
> documentation and should not be treated as such."
>
> The oVirt administration guide[1] does not talk about managed block devices.
>
> I've found a few mailing list threads that discuss people setting up a
> Managed Block Device with ceph, but didn't see any links to
> documentation steps that folks were following.
>
> Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
> and if so, where is the documentation for using it?
>
> --Mike
> [1]ovirt.org/documentation/administration_guide/


[ovirt-users] Re: Problem with "ceph-common" pkg for oVirt Node 4.4.1

2020-08-19 Thread Benny Zlotnik
I think it would be easier to get an answer for this on a ceph mailing
list, but why do you need specifically 12.2.7?

On Wed, Aug 19, 2020 at 4:08 PM  wrote:
>
> Hi!
> I have a problem installing the ceph-common package (needed for cinderlib 
> Managed Block Storage) on oVirt Node 4.4.1 - the oVirt doc says: "$ yum install 
> -y ceph-common", but there is no repo with ceph-common ver. 12.2.7 for CentOS 8 - 
> official CentOS has only "ceph-common-10.2.5-4.el7.x86_64.rpm" and Ceph has 
> only ceph-common ver. 14.2 for EL8.
> How can I install ceph-common ver. 12.2.7?
>
> BR
> Mike


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-23 Thread Benny Zlotnik
I think you can remove 6197b30d-0732-4cc7-aef0-12f9f6e9565b from images and
the corresponding snapshot, set the parent,
8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8, as active (active = 't' field), and
change its snapshot to be the active snapshot. That is, if I correctly
understand the current layout: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
was removed from the storage and 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 is
now the only volume for the disk.
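In concrete terms that would be something like the following (a sketch based on
the IDs above; back up the database first and double-check which snapshot row
has to be removed or re-pointed before running anything):

  $ psql -U engine -d engine -c "UPDATE images SET active = 't' WHERE image_guid = '8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8';"
  $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"

plus adjusting the corresponding rows in the snapshots table as described above.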

On Wed, Jul 22, 2020 at 1:32 PM Arsène Gschwind 
wrote:

> Please find the result:
>
> psql -d engine -c "\x on" -c "select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';"
>
> Expanded display is on.
>
> -[ RECORD 1 ]-+-
>
> image_guid| 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> creation_date | 2020-04-23 14:59:23+02
>
> size  | 161061273600
>
> it_guid   | ----
>
> parentid  | ----
>
> imagestatus   | 1
>
> lastmodified  | 2020-07-06 20:38:36.093+02
>
> vm_snapshot_id| 6bc03db7-82a3-4b7e-9674-0bdd76933eb8
>
> volume_type   | 2
>
> volume_format | 4
>
> image_group_id| d7bd480d-2c51-4141-a386-113abf75219e
>
> _create_date  | 2020-04-23 14:59:20.919344+02
>
> _update_date  | 2020-07-06 20:38:36.093788+02
>
> active| f
>
> volume_classification | 1
>
> qcow_compat   | 2
>
> -[ RECORD 2 ]-+-
>
> image_guid| 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> creation_date | 2020-07-06 20:38:38+02
>
> size  | 161061273600
>
> it_guid   | ----
>
> parentid  | 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> imagestatus   | 1
>
> lastmodified  | 1970-01-01 01:00:00+01
>
> vm_snapshot_id| fd5193ac-dfbc-4ed2-b86c-21caa8009bb2
>
> volume_type   | 2
>
> volume_format | 4
>
> image_group_id| d7bd480d-2c51-4141-a386-113abf75219e
>
> _create_date  | 2020-07-06 20:38:36.093788+02
>
> _update_date  | 2020-07-06 20:38:52.139003+02
>
> active| t
>
> volume_classification | 0
>
> qcow_compat   | 2
>
>
> psql -d engine -c "\x on" -c "SELECT s.* FROM snapshots s, images i where 
> i.vm_snapshot_id = s.snapshot_id and i.image_guid = 
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
>
> Expanded display is on.
>
> -[ RECORD 1 
> ]---+--
>
> snapshot_id | fd5193ac-dfbc-4ed2-b86c-21caa8009bb2
>
> vm_id   | b5534254-660f-44b1-bc83-d616c98ba0ba
>
> snapshot_type   | ACTIVE
>
> status  | OK
>
> description | Active VM
>
> creation_date   | 2020-04-23 14:59:20.171+02
>
> app_list| 
> kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt-guest-agent-common-1.0.14-1.el7
>
> vm_configuration|
>
> _create_date| 2020-04-23 14:59:20.154023+02
>
> _update_date| 2020-07-03 17:33:17.483215+02
>
> memory_metadata_disk_id |
>
> memory_dump_disk_id |
>
> vm_configuration_broken | f
>
>
> Thanks.
>
>
>
> On Tue, 2020-07-21 at 13:45 +0300, Benny Zlotnik wrote:
>
> I forgot to add the `\x on` to make the output readable, can you run it
> with:
> $ psql -U engine -d engine -c "\x on" -c ""
>
> On Mon, Jul 20, 2020 at 2:50 PM Arsène Gschwind 
> wrote:
>
> Hi,
>
> Please find the output:
>
> select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';
>
>
>   image_guid  | creation_date  | size 
> |   it_guid|   parentid   
> | imagestatus |lastmodified|vm_snapshot_id
> | volume_type | volume_for
>
> mat |image_group_id| _create_date  |  
>_update_date  | active | volume_classification | qcow_compat
>
> --++--+--+--+-++-

[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-23 Thread Benny Zlotnik
It was fixed[1]; you need to upgrade to libvirt 6+ and qemu 4.2+.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939
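A quick way to check what a host currently has installed (a sketch):

  $ rpm -q libvirt-daemon qemu-kvm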


On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot  wrote:

>
>
>
>
> Hi all,
>
> I've got 2 two node setup, image based installs.
> When doing ova exports or generic snapshots, things seem in order.
> Removing snapshots shows warning 'disk in illegal state'
>
> Mouse hover shows .. please do not shut down before successfully removing the
> snapshot
>
>
> ovirt-engine log
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM node2.lab command MergeVDS failed:
> Merge failed
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command 'MergeVDSCommand(HostName =
> node2.lab,
> MergeVDSCommandParameters:{hostId='02df5213-1243-4671-a1c6-6489d7146319',
> vmId='64c25543-bef7-4fdd-8204-6507046f5a34',
> storagePoolId='5a4ea80c-b3b2-11ea-a890-00163e3cb866',
> storageDomainId='9a12f1b2-5378-46cc-964d-3575695e823f',
> imageGroupId='3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8',
> imageId='c757e740-9013-4ae0-901d-316932f4af0e',
> baseImageId='ebe50730-dec3-4f29-8a38-9ae7c59f2aef',
> topImageId='c757e740-9013-4ae0-901d-316932f4af0e', bandwidth='0'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> MergeVDS, error = Merge failed, code = 52
> 2020-07-22 16:40:37,549+02 ERROR [org.ovirt.engine.core.bll.MergeCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Engine exception thrown while
> sending merge command: org.ovirt.engine.core.common.errors.EngineException:
> EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52 (Failed with error mergeErr and code 52)
> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52
>   
>io='threads'/>
> 2020-07-22 16:40:39,659+02 ERROR
> [org.ovirt.engine.core.bll.MergeStatusCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-3)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Failed to live merge. Top volume
> c757e740-9013-4ae0-901d-316932f4af0e is still in qemu chain
> [ebe50730-dec3-4f29-8a38-9ae7c59f2aef, c757e740-9013-4ae0-901d-316932f4af0e]
> 2020-07-22 16:40:41,524+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command id:
> 'e0b2bce7-afe0-4955-ae46-38bcb8719852 failed child command status for step
> 'MERGE_STATUS'
> 2020-07-22 16:40:42,597+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Merging of snapshot
> 'ef8f7e06-e48c-4a8c-983c-64e3d4ebfcf9' images
> 'ebe50730-dec3-4f29-8a38-9ae7c59f2aef'..'c757e740-9013-4ae0-901d-316932f4af0e'
> failed. Images have been marked illegal and can no longer be previewed or
> reverted to. Please retry Live Merge on the snapshot to complete the
> operation.
> 2020-07-22 16:40:42,603+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
> with failure.
> 2020-07-22 16:40:43,679+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
> 2020-07-22 16:40:43,774+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot
> 'Auto-generated for Export To OVA' for VM 'Adhoc'.
>
>
> VDSM on hypervisor
> 2020-07-22 14:14:30,220+0200 ERROR (jsonrpc/5) [virt.vm]
> (vmId='14283e6d-c3f0-4011-b90f-a1272f0fbc10') Live merge failed (job:
> e59c54d9-b8d3-44d0-9147-9dd40dff57b9) (vm:5381)
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed',
> dom=self)
> libvirt.libvirtError: internal error: qemu block name 'json:{"backing":
> {"driver": "qcow2", "file": {"driver": "file", "filename":
> 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-21 Thread Benny Zlotnik
I forgot to add the `\x on` to make the output readable; can you run it
with:
$ psql -U engine -d engine -c "\x on" -c ""

On Mon, Jul 20, 2020 at 2:50 PM Arsène Gschwind 
wrote:

> Hi,
>
> Please find the output:
>
> select * from images where image_group_id = 
> 'd7bd480d-2c51-4141-a386-113abf75219e';
>
>
>   image_guid  | creation_date  | size 
> |   it_guid|   parentid   
> | imagestatus |lastmodified|vm_snapshot_id
> | volume_type | volume_for
>
> mat |image_group_id| _create_date  |  
>_update_date  | active | volume_classification | qcow_compat
>
> --++--+--+--+-++--+-+---
>
> +--+---+---++---+-
>
>  8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 | 2020-04-23 14:59:23+02 | 161061273600 
> | ---- | ---- 
> |   1 | 2020-07-06 20:38:36.093+02 | 
> 6bc03db7-82a3-4b7e-9674-0bdd76933eb8 |   2 |
>
>   4 | d7bd480d-2c51-4141-a386-113abf75219e | 2020-04-23 14:59:20.919344+02 | 
> 2020-07-06 20:38:36.093788+02 | f  | 1 |   2
>
>  6197b30d-0732-4cc7-aef0-12f9f6e9565b | 2020-07-06 20:38:38+02 | 161061273600 
> | ---- | 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 
> |   1 | 1970-01-01 01:00:00+01 | 
> fd5193ac-dfbc-4ed2-b86c-21caa8009bb2 |   2 |
>
>   4 | d7bd480d-2c51-4141-a386-113abf75219e | 2020-07-06 20:38:36.093788+02 | 
> 2020-07-06 20:38:52.139003+02 | t  | 0 |   2
>
> (2 rows)
>
>
>
> SELECT s.* FROM snapshots s, images i where i.vm_snapshot_id = s.snapshot_id 
> and i.image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';
>
>  snapshot_id  |vm_id 
> | snapshot_type | status | description |   creation_date| 
>   app_list
>
>  | vm_configuration | _create_date
>   | _update_date  | memory_metadata_disk_id | 
> memory_dump_disk_id | vm_configuration_broken
>
> --+--+---++-++--
>
> -+--+---+---+-+-+-
>
>  fd5193ac-dfbc-4ed2-b86c-21caa8009bb2 | b5534254-660f-44b1-bc83-d616c98ba0ba 
> | ACTIVE| OK | Active VM   | 2020-04-23 14:59:20.171+02 | 
> kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt
>
> -guest-agent-common-1.0.14-1.el7 |  | 2020-04-23 
> 14:59:20.154023+02 | 2020-07-03 17:33:17.483215+02 | 
> | | f
>
> (1 row)
>
>
> Thanks,
> Arsene
>
> On Sun, 2020-07-19 at 16:34 +0300, Benny Zlotnik wrote:
>
> Sorry, I only replied to the question, in addition to removing the
>
> image from the images table, you may also need to set the parent as
>
> the active image and remove the snapshot referenced by this image from
>
> the database. Can you provide the output of:
>
> $ psql -U engine -d engine -c "select * from images where
>
> image_group_id = ";
>
>
> As well as
>
> $ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
>
> where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
>
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
>
>
> On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik <
>
> bzlot...@redhat.com
>
> > wrote:
>
>
> It can be done by deleting from the images table:
>
> $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
>
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
>
>
> of course the database should be backed up before doing this
>
>
>
>
> On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer <
>
> nsof...@red

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
Sorry, I only replied to the question. In addition to removing the
image from the images table, you may also need to set the parent as
the active image and remove the snapshot referenced by this image from
the database. Can you provide the output of:
$ psql -U engine -d engine -c "select * from images where
image_group_id = ";

As well as
$ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b';"

On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik  wrote:
>
> It can be done by deleting from the images table:
> $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
>
> of course the database should be backed up before doing this
>
>
>
> On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
> >
> > On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
> >  wrote:
> >
> > > It looks like the Pivot completed successfully, see attached vdsm.log.
> > > Is there a way to recover that VM?
> > > Or would it be better to recover the VM from Backup?
> >
> > This what we see in the log:
> >
> > 1. Merge request recevied
> >
> > 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> > merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> > u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> > u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> > u'0002-0002-0002-0002-0289'},
> > baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> > topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> > from=:::10.34.38.31,39226,
> > flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> > vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
> >
> > To track this job, we can use the jobUUID: 
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> > and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
> >
> > 2. Starting the merge
> >
> > 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> > chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> > top=None, bandwidth=0, flags=12 (vm:5945)
> >
> > We see the original chain:
> > 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
> >
> > 3. The merge was completed, ready for pivot
> >
> > 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > is ready (vm:5847)
> >
> > At this point parent volume contains all the data in top volume and we can 
> > pivot
> > to the parent volume.
> >
> > 4. Vdsm detect that the merge is ready, and start the clean thread
> > that will complete the merge
> >
> > 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> > for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
> >
> > 5. Requesting pivot to parent volume:
> >
> > 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> > complete active layer commit (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
> >
> > 6. Pivot was successful
> >
> > 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > has completed (vm:5838)
> >
> > 7. Vdsm wait until libvirt updates the xml:
> >
> > 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)
> >
> > 8. Syncronizing vdsm metadata
> >
> > 2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
> > imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
> > img

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
It can be done by deleting from the images table:
$ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b'";

Of course, the database should be backed up before doing this.
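A quick way to take that backup (a sketch; adjust the file paths as needed):

  $ engine-backup --mode=backup --scope=db --file=engine-db.backup --log=engine-backup.log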



On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
>
> On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
>  wrote:
>
> > It looks like the Pivot completed successfully, see attached vdsm.log.
> > Is there a way to recover that VM?
> > Or would it be better to recover the VM from Backup?
>
> This what we see in the log:
>
> 1. Merge request recevied
>
> 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> u'0002-0002-0002-0002-0289'},
> baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> from=:::10.34.38.31,39226,
> flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
>
> To track this job, we can use the jobUUID: 
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> 2. Starting the merge
>
> 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> top=None, bandwidth=0, flags=12 (vm:5945)
>
> We see the original chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 3. The merge was completed, ready for pivot
>
> 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> is ready (vm:5847)
>
> At this point parent volume contains all the data in top volume and we can 
> pivot
> to the parent volume.
>
> 4. Vdsm detect that the merge is ready, and start the clean thread
> that will complete the merge
>
> 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
>
> 5. Requesting pivot to parent volume:
>
> 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> complete active layer commit (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
>
> 6. Pivot was successful
>
> 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> has completed (vm:5838)
>
> 7. Vdsm wait until libvirt updates the xml:
>
> 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)
>
> 8. Syncronizing vdsm metadata
>
> 2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
> imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
> imgUUID='d7bd480d-2c51-4141-a386-113abf75219e',
> volUUID='6197b30d-0732-4cc7-aef0-12f9f6e9565b',
> newChain=['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']) from=internal,
> task_id=b8f605bd-8549-4983-8fc5-f2ebbe6c4666 (api:48)
>
> We can see the new chain:
> ['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']
>
> 2020-07-13 11:19:07,005+0200 INFO  (merge/720410c3) [storage.Image]
> Current chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)  (image:1221)
>
> The old chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 2020-07-13 11:19:07,006+0200 INFO  (merge/720410c3) [storage.Image]
> Unlinking subchain: ['6197b30d-0732-4cc7-aef0-12f9f6e9565b']
> (image:1231)
> 2020-07-13 11:19:07,017+0200 INFO  (merge/720410c3) [storage.Image]
> Leaf volume 6197b30d-0732-4cc7-aef0-12f9f6e9565b is being removed from
> the chain. Marking it ILLEGAL to prevent data corruption (image:1239)
>
> This matches what we see on storage.
>
> 9. Merge job is untracked
>
> 2020-07-13 11:19:21,134+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Cleanup thread
> 
> successfully completed, untracking job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> (base=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8,
> top=6197b30d-0732-4cc7-aef0-12f9f6e9565b) (vm:5752)
>
> This was a successful 

[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Benny Zlotnik
Looks like https://bugzilla.redhat.com/show_bug.cgi?id=1785939

On Mon, Jun 15, 2020 at 2:37 PM Yedidyah Bar David  wrote:
>
> On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
>  wrote:
> >
> > Hi,
> >
> > I tried to send the log to you by email, but it fails. So I have sent them 
> > to Google Drive. Please go to the link below to get them:
> >
> > https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
> > https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
>
> I did get them, but not engine logs. Can you please attach them as well? 
> Thanks.
>
> vdsm.log.61 has:
>
> 2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
> (vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
> 1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in merge
> bandwidth, flags)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, 
> in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
> line 94, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in 
> blockCommit
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
> dom=self)
> libvirt.libvirtError: internal error: qemu block name
> 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
> "filename": 
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
> "driver": "qcow2", "file": {"driver": "file", "filename":
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
> doesn't match expected
> '/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'
>
> Adding Eyal. Eyal, can you please have a look? Thanks.
>
> >
> > Best regards,
> >
> > Minnie Du--Presales & Business Development
> >
> > Mob  : +86-15244932162
> > Tel: +86-28-85530156
> > Skype :minnie...@vinchin.com
> > Email: minnie...@vinchin.com
> > Website: www.vinchin.com
> >
> > F5, Building 8, National Information Security Industry Park, No.333 YunHua 
> > Road, Hi-Tech Zone, Chengdu, China
> >
> >
> > From: Yedidyah Bar David
> > Date: 2020-06-15 15:42
> > To: minnie.du
> > CC: users
> > Subject: Re: [ovirt-users] Problem with oVirt 4.4
> > On Mon, Jun 15, 2020 at 10:39 AM  wrote:
> > >
> > > We have met a problem when testing oVirt 4.4.
> > >
> > > Our VM is on NFS storage. When testing the snapshot function of oVirt 
> > > 4.4, we created snapshot 1 and then snapshot 2, but after clicking the 
> > > delete button of snapshot 1, snapshot 1 failed to be deleted and the 
> > > state of corresponding disk became illegal. Removing the snapshot in this 
> > > state requires a lot of risky work in the background, leading to the 
> > > inability to free up snapshot space. Long-term backups will cause the 
> > > target VM to create a large number of unrecoverable snapshots, thus 
> > > taking up a large amount of production storage. So we need your help.
> >
> > Can you please share relevant parts of engine and vdsm logs? Perhaps
> > open a bug and attach all of them, just in case.
> >
> > Thanks!
> > --
> > Didi
> >
> >
>
>
>
> --
> Didi


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-08 Thread Benny Zlotnik
Yes, that's because cinderlib uses KRBD, so it has fewer features; I
should add this to the documentation.
I was told cinderlib has plans to add support for rbd-nbd, which would
eventually allow use of newer features.

On Mon, Jun 8, 2020 at 9:40 PM Mathias Schwenke
 wrote:
>
> > It looks like a configuration issue, you can use plain `rbd` to check 
> > connectivity.
> Yes, it was a configuration error. I fixed it.
> Also, I had to adapt different rbd feature sets between ovirt nodes and ceph 
> images. Now it seems to work.


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-07 Thread Benny Zlotnik
Yes, it looks like a configuration issue; you can use plain `rbd` to
check connectivity.
Regarding starting VMs and live migration, are there bug reports for these?
There is an issue we're aware of with live migration[1]; it can be
worked around by blacklisting rbd devices in multipath.conf.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1755801
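
As a rough sketch (the cephx user, keyring path and pool name below are
placeholders, not taken from this thread), a plain-rbd connectivity check
could look like:

  $ rbd --id <cinder-user> \
        --keyring /etc/ceph/ceph.client.<cinder-user>.keyring \
        -p <pool> ls

and the multipath workaround for [1] would be along the lines of adding a
blacklist entry on each host (note that vdsm rewrites /etc/multipath.conf
unless it carries the "# VDSM PRIVATE" marker), e.g.:

  blacklist {
      devnode "^rbd[0-9]*"
  }

followed by reloading the maps with `multipath -r`.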


On Thu, Jun 4, 2020 at 11:49 PM Mathias Schwenke
 wrote:
>
> Thanks for your reply.
> Yes, I have some issues. In some cases starting or migrating a virtual 
> machine failed.
>
> At the moment it seems that I have a misconfiguration of my ceph connection:
> 2020-06-04 22:44:07,685+02 ERROR 
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
> (EE-ManagedThreadFactory-engine-Thread-2771) [6e1b74c4] cinderlib execution 
> failed: Traceback (most recent call last):
>   File "./cinderlib-client.py", line 179, in main
> args.command(args)
>   File "./cinderlib-client.py", line 232, in connect_volume
> backend = load_backend(args)
>   File "./cinderlib-client.py", line 210, in load_backend
> return cl.Backend(**json.loads(args.driver))
>   File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 88, in 
> __init__
> self.driver.check_for_setup_error()
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 295, in check_for_setup_error
> with RADOSClient(self):
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 177, in __init__
> self.cluster, self.ioctx = driver._connect_to_rados(pool)
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 353, in _connect_to_rados
> return _do_conn(pool, remote, timeout)
>   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 818, in 
> _wrapper
> return r.call(f, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
> raise attempt.get()
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
> six.reraise(self.value[0], self.value[1], self.value[2])
>   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
> attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
>   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 
> 351, in _do_conn
> raise exception.VolumeBackendAPIException(data=msg)
> VolumeBackendAPIException: Bad or unexpected response from the storage volume 
> backend API: Error connecting to ceph cluster.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I4BMALG7MPMPS3JJU23OCQUMOCSO2D27/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5YZPGW7IAUZMTNWY5FP5KOEWAGVBPVFE/


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-04 Thread Benny Zlotnik
I've successfully used Rocky with 4.3 in the past; the main caveat
with 4.3 currently is that cinderlib has to be pinned to 0.9.0 (pip
install cinderlib==0.9.0).
Let me know if you have any issues.

Hopefully during 4.4 we will have repositories with the RPMs and
installation will be much easier.
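
For reference, an installation along those lines on CentOS 7 with oVirt 4.3
might look roughly like this (using the queens release repo is an assumption,
since pike is no longer available, as noted in the quoted message below):

  # on the engine machine
  $ yum install centos-release-openstack-queens
  $ yum install openstack-cinder python-pip ceph-common
  $ pip install cinderlib==0.9.0

  # on every host
  $ yum install centos-release-openstack-queens
  $ yum install python2-os-brick ceph-common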


On Thu, Jun 4, 2020 at 10:00 PM Mathias Schwenke
 wrote:
>
> At 
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
> the cinderlib integration into oVirt is described:
> Installation:
> - install centos-release-openstack-pike on engine and all hosts
> - install openstack-cinder and python-pip on engine
> - pip install cinderlib on engine
> - install python2-os-brick on all hosts
> - install ceph-common on engine and on all hosts
>
> Which software versions do you use on CentOS 7 with oVirt 4.3.10?
> The package centos-release-openstack-pike, as described at the 
> above-mentioned Managed Block Storage feature page, doesn't exist anymore in 
> the CentOS repositories, so I have to switch to 
> centos-release-openstack-queens or newer (rocky, stein, train). So I get (for 
> use with Ceph Luminous 12):
> - openstack-cinder 12.0.10
> - cinderlib 1.0.1
> - ceph-common 12.2.11
> - python2-os-brick 2.3.9
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5BRKSYAHJBLI65G6JEDZIWSQ72OCF3S/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FELJ2X2N74Q3SM2ZC3MV4ERWZWUM5ZUO/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-06-01 Thread Benny Zlotnik
Sorry for the late reply, but you may have hit this bug[1]; I forgot about it.
The bug happens when you live-migrate a VM in post-copy mode: vdsm
stops monitoring the VM's jobs.
The root cause is an issue in libvirt, so it depends on which libvirt
version you have.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1774230
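
One way to check which libvirt build a host is running, to compare against
the versions referenced in the bug:

  $ rpm -qa | grep -E 'libvirt-daemon|vdsm' | sort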

On Fri, May 29, 2020 at 3:54 PM David Sekne  wrote:
>
> Hello,
>
> I tried live migration as well and it didn't help (it failed).
>
> The VM disks were in an illegal state, so I ended up restoring the VM from 
> backup (it was the least complex solution in my case).
>
> Thank you both for the help.
>
> Regards,
>
> On Thu, May 28, 2020 at 5:01 PM Strahil Nikolov  wrote:
>>
>> I used  to have a similar issue and when I live migrated  (from 1  host to 
>> another)  it  automatically completed.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 27 май 2020 г. 17:39:36 GMT+03:00, Benny Zlotnik  
>> написа:
>> >Sorry, by overloaded I meant in terms of I/O. Because this is an
>> >active layer merge, the active layer
>> >(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
>> >(a78c7505-a949-43f3-b3d0-9d17bdb41af5) before the VM switches to use
>> >the base as the active layer. So if additional data is constantly
>> >written to the current active layer, vdsm may have trouble finishing
>> >the synchronization.
>> >
>> >
>> >On Wed, May 27, 2020 at 4:55 PM David Sekne 
>> >wrote:
>> >>
>> >> Hello,
>> >>
>> >> Yes, no problem. XML is attached (I omitted the hostname and IP).
>> >>
>> >> Server is quite big (8 CPUs / 32 GB RAM / 1 TB disk) yet not
>> >overloaded. We have multiple servers with the same specs with no
>> >issues.
>> >>
>> >> Regards,
>> >>
>> >> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik 
>> >wrote:
>> >>>
>> >>> Can you share the VM's xml?
>> >>> Can be obtained with `virsh -r dumpxml <vm-name>`
>> >>> Is the VM overloaded? I suspect it has trouble converging
>> >>>
>> >>> taskcleaner only cleans up the database, I don't think it will help
>> >here
>> >>>
>> >___
>> >Users mailing list -- users@ovirt.org
>> >To unsubscribe send an email to users-le...@ovirt.org
>> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >oVirt Code of Conduct:
>> >https://www.ovirt.org/community/about/community-guidelines/
>> >List Archives:
>> >https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX4QZDIKXH7ETWPDNI3SKZ535WHBXE2V/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UQWZXFW622OIZLB27AHULO52CWYTVL2S/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Sorry, by overloaded I meant in terms of I/O. Because this is an
active layer merge, the active layer
(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
(a78c7505-a949-43f3-b3d0-9d17bdb41af5) before the VM switches to use
the base as the active layer. So if additional data is constantly
written to the current active layer, vdsm may have trouble finishing
the synchronization.


On Wed, May 27, 2020 at 4:55 PM David Sekne  wrote:
>
> Hello,
>
> Yes, no problem. XML is attached (I omitted the hostname and IP).
>
> Server is quite big (8 CPUs / 32 GB RAM / 1 TB disk) yet not overloaded. We 
> have multiple servers with the same specs with no issues.
>
> Regards,
>
> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik  wrote:
>>
>> Can you share the VM's xml?
>> Can be obtained with `virsh -r dumpxml <vm-name>`
>> Is the VM overloaded? I suspect it has trouble converging
>>
>> taskcleaner only cleans up the database, I don't think it will help here
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX4QZDIKXH7ETWPDNI3SKZ535WHBXE2V/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Can you share the VM's xml?
Can be obtained with `virsh -r dumpxml <vm-name>`
Is the VM overloaded? I suspect it has trouble converging

taskcleaner only cleans up the database, I don't think it will help here
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LCPJ2C2MW76MKVFBC4QAMRPSRRQQDC3U/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
You can't see it because it is not a task; tasks only run on the SPM. It
is a VM job, and the data about it is stored in the VM's XML; it's also
stored in the vm_jobs table.
You can see the status of the job in libvirt with `virsh blockjob
<vm-name> sda --info` (if it's still running).
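
For example (the VM name is a placeholder, and the vm_jobs query is only a
sketch of where the engine keeps the same information):

  # on the host running the VM
  $ virsh blockjob <vm-name> sda --info

  # on the engine machine
  $ psql -U engine -d engine -c "select * from vm_jobs"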




On Wed, May 27, 2020 at 2:03 PM David Sekne  wrote:
>
> Hello,
>
> Thank you for the reply.
>
> Unfortunately I can't see the task on any of the hosts:
>
> vdsm-client Task getInfo taskID=f694590a-1577-4dce-bf0c-3a8d74adf341
> vdsm-client: Command Task.getInfo with args {'taskID': 
> 'f694590a-1577-4dce-bf0c-3a8d74adf341'} failed:
> (code=401, message=Task id unknown: 
> (u'f694590a-1577-4dce-bf0c-3a8d74adf341',))
>
> I can see it starting in the VDSM log on the host running the VM:
>
> /var/log/vdsm/vdsm.log.2:2020-05-26 12:15:09,349+0200 INFO  (jsonrpc/6) 
> [virt.vm] (vmId='e113ff18-5687-4e03-8a27-b12c82ad6d6b') Starting merge with 
> jobUUID=u'f694590a-1577-4dce-bf0c-3a8d74adf341', original 
> chain=a78c7505-a949-43f3-b3d0-9d17bdb41af5 < 
> aabf3788-8e47-4f8b-84ad-a7eb311659fa (top), disk='sda', base='sda[1]', 
> top=None, bandwidth=0, flags=12 (vm:5945)
>
> Also, running vdsm-client Host getAllTasks I don't see any running tasks (on 
> any host).
>
> Am I missing something?
>
> Regards,
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBTD3HLXPK7F7MBJCQEQV6E2KA3H7FZK/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4HOFIS26PTTT56HNOUCG4MTOFFFAXSK/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-05-27 Thread Benny Zlotnik
Live merge (snapshot removal) runs on the host where the VM is
running; you can look for the job id
(f694590a-1577-4dce-bf0c-3a8d74adf341) on the relevant host.
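
For example, assuming the default vdsm log location on that host:

  $ grep f694590a-1577-4dce-bf0c-3a8d74adf341 /var/log/vdsm/vdsm.log*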

On Wed, May 27, 2020 at 9:02 AM David Sekne  wrote:
>
> Hello,
>
> I'm running oVirt version 4.3.9.4-1.el7.
>
> After a failed live storage migration a VM got stuck with a snapshot. Checking 
> the engine logs I can see that the snapshot removal task is waiting for Merge 
> to complete and vice versa.
>
> 2020-05-26 18:34:04,826+02 INFO  
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Command 'RemoveSnapshotSingleDiskLive' 
> (id: '60ce36c1-bf74-40a9-9fb0-7fcf7eb95f40') waiting on child command id: 
> 'f7d1de7b-9e87-47ba-9ba0-ee04301ba3b1' type:'Merge' to complete
> 2020-05-26 18:34:04,827+02 INFO  
> [org.ovirt.engine.core.bll.MergeCommandCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Waiting on merge command to complete 
> (jobId = f694590a-1577-4dce-bf0c-3a8d74adf341)
> 2020-05-26 18:34:04,845+02 INFO  
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) 
> [90f428b0-9c4e-4ac0-8de6-1103fc13da9e] Command 'RemoveSnapshot' (id: 
> '47c9a847-5b4b-4256-9264-a760acde8275') waiting on child command id: 
> '60ce36c1-bf74-40a9-9fb0-7fcf7eb95f40' type:'RemoveSnapshotSingleDiskLive' to 
> complete
> 2020-05-26 18:34:14,277+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmJobsMonitoring] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] VM Job 
> [f694590a-1577-4dce-bf0c-3a8d74adf341]: In progress (no change)
>
> I cannot see any running tasks on the SPM (vdsm-client Host getAllTasksInfo). 
> I also cannot find the task ID in any of the other nodes' logs.
>
> I already tried restarting the Engine (didn't help).
>
> To start, I'm puzzled as to where this task is queued.
>
> Any Ideas on how I could resolve this?
>
> Thank you.
> Regards,
> David
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJBI3SMVXTPSGGJ66P55MU2ERN3HBCTH/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZILERZCGSPOGPOSPM3GHVURC5CVVBVZU/


[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
> 1. The engine didn't clean it up itself - after all, no matter the reason, 
> the operation has failed?
I can't really answer without looking at the logs; the engine should clean up
in case of a failure, but there can be numerous reasons for cleanup to
fail (connectivity issues, a bug, etc.).
> 2. Why did the query fail to see the disk, but I still managed to unlock it?
Could be a bug, but it would need some way to reproduce.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TV5LJU6URKS2D5FZ5BOFVYV2EAJRBJGN/


[ovirt-users] Re: New VM disk - failed to create, state locked in UI, nothing in DB

2020-04-20 Thread Benny Zlotnik
Anything in the logs (engine, vdsm)?
If there's nothing on the storage, removing it from the database should
be safe, but it's best to check why it failed.
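
If it helps, the locked disk can also be inspected in the engine database
before touching anything, in the same style as the queries used elsewhere in
these threads (a sketch only; imagestatus 2 is assumed to be LOCKED, with 1
and 4 being OK and ILLEGAL as seen in the other threads here):

  $ psql -U engine -d engine -c "\x on" \
        -c "select image_guid, image_group_id, imagestatus from images where imagestatus = 2"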

On Mon, Apr 20, 2020 at 5:39 PM Strahil Nikolov  wrote:
>
> Hello All,
>
> did anyone observe the following behaviour:
>
> 1. Create a new disk from the VM -> disks UI tab
> 2. Disk creation fails, but the disk stays in a locked state
> 3. Gluster storage has no directory with that uuid
> 4. /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh doesn't find 
> anything:
> [root@engine ~]# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t 
> all
>
> Locked VMs
>
>
>
> Locked templates
>
>
>
> Locked disks
>
>
>
> Locked snapshots
>
>
>
> Illegal images
>
>
> Should I just delete the entry from the DB or I have another option ?
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6E4RJM7I3BT33CU3CAB74C2Q4QNBS5BW/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/THYLVMX65VAZ2YTA5GL2SR2LKHF2KRJC/


[ovirt-users] Re: does SPM still exist?

2020-03-24 Thread Benny Zlotnik
It hasn't disappeared; there has been work done to move operations
that used to run only on the SPM to regular hosts as well
(copy/move disk).
Currently the main operations performed by the SPM are
create/delete/extend volume, and more[1].


[1] 
https://github.com/oVirt/ovirt-engine/tree/master/backend/manager/modules/vdsbroker/src/main/java/org/ovirt/engine/core/vdsbroker/irsbroker






On Tue, Mar 24, 2020 at 11:14 AM yam yam  wrote:
>
> Hello,
>
> I heard some say SPM disappeared as of 3.6.
> Nevertheless, SPM still exists in the oVirt admin portal and even in RHV's manual.
> So, I am wondering whether SPM still exists now.
>
> And how could I get more detailed information about oVirt internals?
> Is reviewing the code the best way?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNZ4KGZTWHFSUNDDVVPBMYK3U7Y3QZPF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGB6C4OLF4SH3PJCR5F4TEAHN4LGHSPL/


[ovirt-users] Re: oVirt behavior with thin provision/deduplicated block storage

2020-02-24 Thread Benny Zlotnik
We use the stats API in the engine, currently only to check whether the
backend is accessible; we have plans to use it for monitoring and
validations, but that is not implemented yet.
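
For anyone curious, querying the stats directly from cinderlib looks roughly
like this (a sketch only; the driver parameters below are placeholders, not
oVirt's actual configuration):

  import cinderlib as cl

  backend = cl.Backend(volume_driver='cinder.volume.drivers.rbd.RBDDriver',
                       volume_backend_name='ceph',
                       rbd_pool='<pool>',
                       rbd_ceph_conf='/etc/ceph/ceph.conf',
                       rbd_user='<user>')
  stats = backend.stats(refresh=True)  # refresh=False returns cached stats
  print(stats)  # typically includes free_capacity_gb, see [1] in the quoted mail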

On Mon, Feb 24, 2020 at 3:35 PM Nir Soffer  wrote:
>
> On Mon, Feb 24, 2020 at 3:03 PM Gorka Eguileor  wrote:
> >
> > On 22/02, Nir Soffer wrote:
> > > On Sat, Feb 22, 2020, 13:02 Alan G  wrote:
> > > >
> > > > I'm not really concerned about the reporting aspect, I can look in the 
> > > > storage vendor UI to see that. My concern is: will oVirt stop 
> > > > provisioning storage in the domain because it *thinks* the domain is 
> > > > full. De-dup is currently running at about 2.5:1 so I'm concerned that 
> > > > oVirt will think the domain is full way before it actually is.
> > > >
> > > > Not clear if this is handled natively in oVirt or by the underlying lvs?
> > >
> > > Because oVirt does not know about deduplication or actual allocation
> > > on the storage side,
> > > it will let you allocate up to the size of the LUNs that you added to the
> > > storage domain, minus
> > > the size oVirt uses for its own metadata.
> > >
> > > oVirt uses about 5G for its own metadata on the first LUN in a storage
> > > domain. The rest of
> > > the space can be used by user disks. Disks are LVM logical volumes
> > > created in the VG created
> > > from the LUN.
> > >
> > > If you create a storage domain with 4T LUN, you will be able to
> > > allocate about 4091G on this
> > > storage domain. If you use preallocated disks, oVirt will stop when
> > > you allocated all the space
> > > in the VG. Actually it will stop earlier based on the minimal amount
> > > of free space configured for
> > > the storage domain when creating the storage domain.
> > >
> > > If you use thin disks, oVirt will allocate only 1G per disk (by
> > > default), so you can allocate
> > > more storage than you actually have, but when VMs will write to the
> > > disk, oVirt will extend
> > > the disks. Once you use all the available space in this VG, you will
> > > not be able to allocate
> > > more without extending the storage domain with new LUN, or resizing
> > > the  LUN on storage.
> > >
> > > If you use Managed Block Storage (cinderlib) every disk is a LUN with
> > > the exact size you
> > > ask when you create the disk. The actual allocation of this LUN
> > > depends on your storage.
> > >
> > > Nir
> > >
> >
> > Hi,
> >
> > I don't know anything about the oVirt's implementation, so I'm just
> > going to provide some information from cinderlib's point of view.
> >
> > Cinderlib was developed as a dumb library to abstract access to storage
> > backends, so all the "smart" functionality is pushed to the user of the
> > library, in this case oVirt.
> >
> > In practice this means that cinderlib will NOT limit the number of LUNs
> > or over-provisioning done in the backend.
> >
> > Cinderlib doesn't care if we are over-provisioning because we have dedup
> > and decompression or because we are using thin volumes where we don't
> > consume all the allocated space, it doesn't even care if we cannot do
> > over-provisioning because we are using thick volumes.  If it gets a
> > request to create a volume, it will try to do so.
> >
> > From oVirt's perspective this is dangerous if not controlled, because we
> > could end up consuming all free space in the backend and then running
> > VMs will crash (I think) when they could no longer write to disks.
> >
> > oVirt can query the stats of the backend [1] to see how much free space
> > is available (free_capacity_gb) at any given time in order to provide
> > over-provisioning limits to its users.  I don't know if oVirt is already
> > doing that or something similar.
> >
> > It is important to know that stats gathering is an expensive operation
> > for most drivers, and that's why we can request cached stats (cache is
> > lost as the process exits) to help users not overuse it.  It probably
> > shouldn't be gathered more than once a minute.
> >
> > I hope this helps.  I'll be happy to answer any cinderlib questions. :-)
>
> Thanks Gorka, good to know we already have API to get backend
> allocation info. Hopefully we will use this in future version.
>
> Nir
>
> >
> > Cheers,
> > Gorka.
> >
> > [1]: https://docs.openstack.org/cinderlib/latest/topics/backends.html#stats
> >
> > > >  On Fri, 21 Feb 2020 21:35:06 + Nir Soffer  
> > > > wrote 
> > > >
> > > >
> > > >
> > > > On Fri, Feb 21, 2020, 17:14 Alan G  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I have an oVirt cluster with a storage domain hosted on a FC storage 
> > > > array that utilises block de-duplication technology. oVirt reports the 
> > > > capacity of the domain as though the de-duplication factor was 1:1, 
> > > > which of course is not the case. So what I would like to understand is 
> > > > the likely behavior of oVirt when the used space approaches the 
> > > > reported capacity. Particularly around the critical action space 
> > > > blocker.
> > > >
> > > >

[ovirt-users] Re: iSCSI Domain Addition Fails

2020-02-23 Thread Benny Zlotnik
anything in the vdsm or engine logs?
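
By default those would be /var/log/ovirt-engine/engine.log on the engine
machine and /var/log/vdsm/vdsm.log on the host, e.g.:

  $ tail -f /var/log/ovirt-engine/engine.log
  $ tail -f /var/log/vdsm/vdsm.log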

On Sun, Feb 23, 2020 at 4:23 PM Robert Webb  wrote:
>
> Also, I did do the “Login” to connect to the target without issue, from what 
> I can tell.
>
>
>
> From: Robert Webb
> Sent: Sunday, February 23, 2020 9:06 AM
> To: users@ovirt.org
> Subject: iSCSI Domain Addition Fails
>
>
>
> So I am messing around with FreeNAS and iSCSI. FreeNAS has a target 
> configured and it is discoverable in oVirt, but when I click “OK” nothing 
> happens.
>
>
>
> I have a name for the domain defined and have expanded the advanced features, 
> but cannot find anything showing an error.
>
>
>
> oVirt 4.3.8
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMAXFDMNHVGTMJUGU5FK26K6PNBAW3FP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KSLKO6ZP55ZSFCSXRONAPVCEOMZTE24M/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-05 Thread Benny Zlotnik
The vdsm logs are not the correct ones.
I assume this is the failure:
2020-02-04 22:04:53,631+05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-9)
[1e9f5492-095c-48ed-9aa0-1a899eedeab7] Command 'MergeVDSCommand(HostName =
iondelsvr72.iontrading.com,
MergeVDSCommandParameters:{hostId='22502af7-f157-40dc-bd5c-6611951be729',
vmId='4957c5d4-ca5e-4db7-8c78-ae8f4b694646',
storagePoolId='c5e0f32e-0131-11ea-a48f-00163e0fe800',
storageDomainId='70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419',
imageGroupId='737b5628-e9fe-42ec-9bce-38db80981107',
imageId='31c5e807-91f1-4f73-8a60-f97a83c6f471',
baseImageId='e4160ffe-2734-4305-8bf9-a7217f3049b6',
topImageId='31c5e807-91f1-4f73-8a60-f97a83c6f471', bandwidth='0'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
MergeVDS, error = Drive image file could not be found, code = 13

Please find the vdsm logs containing flow_id
1e9f5492-095c-48ed-9aa0-1a899eedeab7 and provide the output of `vdsm-tool
dump-volume-chains 70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419` so we can see the
status of the chain on vdsm,
as well as `virsh -r dumpxml ind-co-ora-ee-02` (assuming ind-co-ora-ee-02
is the VM with the issue).

Changing the snapshot status with unlock_entity will likely work only if
the chain is fine on the storage.
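
For example, on the host that ran the merge (iondelsvr72, per the error
above), assuming the default log locations:

  $ grep -l 1e9f5492-095c-48ed-9aa0-1a899eedeab7 /var/log/vdsm/vdsm.log*
  $ vdsm-tool dump-volume-chains 70edd0ef-e4ec-4bc5-af66-f7fb9c4eb419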



On Tue, Feb 4, 2020 at 7:40 PM Crazy Ayansh 
wrote:

> please find the attached the logs.
>
> On Tue, Feb 4, 2020 at 10:23 PM Benny Zlotnik  wrote:
>
>> Back to my question then: can you check what made the snapshot illegal,
>> and attach the vdsm and engine logs from the occurrence so we can assess
>> the damage?
>>
>> Also run `vdsm-tool dump-volume-chains <sd-uuid>` for the storage domain
>> where the image resides so we can see what the status of the image is on vdsm.
>>
>> On Tue, Feb 4, 2020 at 6:46 PM Crazy Ayansh 
>> wrote:
>>
>>> Hi,
>>>
>>> Yes, the VM is running, but I'm scared that if I shut down the VM it won't come back.
>>> I have also upgraded the engine from 4.3.6.6 to 4.3.8, but the issue still
>>> persists. I am also unable to take a new snapshot of the same VM, as the new
>>> snapshot fails. Please help.
>>>
>>> Thanks
>>> Shashank
>>>
>>>
>>>
>>> On Tue, Feb 4, 2020 at 8:54 PM Benny Zlotnik 
>>> wrote:
>>>
>>>> Is the VM running? Can you remove it when the VM is down?
>>>> Can you find the reason for illegal status in the logs?
>>>>
>>>> On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh <
>>>> shashank123rast...@gmail.com> wrote:
>>>>
>>>>> Hey Guys,
>>>>>
>>>>> Any help on it ?
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh <
>>>>> shashank123rast...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>   Hi Team,
>>>>>>
>>>>>> I am trying to delete a old snapshot of a virtual machine and getting
>>>>>> below error :-
>>>>>>
>>>>>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
>>>>>> índ-co-ora-ee-02'
>>>>>>
>>>>>>
>>>>>>
>>>>>> [image: image.png]
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>> ___
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>>>>>
>>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YF3J3K66N5HORUYZP3HZEJWOU64IDNAS/


[ovirt-users] Re: Recover VM if engine down

2020-02-04 Thread Benny Zlotnik
you need to go to the "import vm" tab on the storage domain and import them

On Tue, Feb 4, 2020 at 7:30 PM matteo fedeli  wrote:
>
> Does it happen automatically when I attach it, or should I execute particular operations?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TVC674C7RF3JZXCOW4SRJL5OQRBE5RZD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L4O4YN5RDOQEGBGD4DEHXFY7R72WGQYB/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
Back to my question then: can you check what made the snapshot illegal, and
attach the vdsm and engine logs from the occurrence so we can assess the
damage?

Also run `vdsm-tool dump-volume-chains <sd-uuid>` for the storage domain where
the image resides so we can see what the status of the image is on vdsm.

On Tue, Feb 4, 2020 at 6:46 PM Crazy Ayansh 
wrote:

> Hi,
>
> Yes, the VM is running, but I'm scared that if I shut down the VM it won't come back.
> I have also upgraded the engine from 4.3.6.6 to 4.3.8, but the issue still
> persists. I am also unable to take a new snapshot of the same VM, as the new
> snapshot fails. Please help.
>
> Thanks
> Shashank
>
>
>
> On Tue, Feb 4, 2020 at 8:54 PM Benny Zlotnik  wrote:
>
>> Is the VM running? Can you remove it when the VM is down?
>> Can you find the reason for illegal status in the logs?
>>
>> On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh 
>> wrote:
>>
>>> Hey Guys,
>>>
>>> Any help on it ?
>>>
>>> Thanks
>>>
>>> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh <
>>> shashank123rast...@gmail.com> wrote:
>>>
>>>>
>>>>   Hi Team,
>>>>
>>>> I am trying to delete a old snapshot of a virtual machine and getting
>>>> below error :-
>>>>
>>>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
>>>> índ-co-ora-ee-02'
>>>>
>>>>
>>>>
>>>> [image: image.png]
>>>>
>>>> Thanks
>>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/34YWQHVGTXSZZR6DKGE477AS7GDRHJ2Y/


[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Benny Zlotnik
Is the VM running? Can you remove it when the VM is down?
Can you find the reason for illegal status in the logs?

On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh 
wrote:

> Hey Guys,
>
> Any help on it ?
>
> Thanks
>
> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh 
> wrote:
>
>>
>>   Hi Team,
>>
>> I am trying to delete a old snapshot of a virtual machine and getting
>> below error :-
>>
>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
>> índ-co-ora-ee-02'
>>
>>
>>
>> [image: image.png]
>>
>> Thanks
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V4IWIYIHGD3FEQ52Z4P5KHDDA424MIWK/


[ovirt-users] Re: Recover VM if engine down

2020-02-03 Thread Benny Zlotnik
you can attach the storage domain to another engine and import it

On Mon, Feb 3, 2020 at 11:45 PM matteo fedeli  wrote:
>
> Hi, is it possible to recover a VM if the engine is damaged? The VM is on a data 
> storage domain.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JVSJPYVBTQOQGGKT4HNETW453ZUPDL2R/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSUAEXSX3WP5XGI32NMD2RBOSA2ZWM6C/


[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread Benny Zlotnik
Did you change the volume metadata to LEGAL on the storage as well?
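
In case it helps: on a file-based (NFS) domain that metadata lives in a .meta
file next to the volume, so the current state can be checked with something
like the following (the path is taken from the log below; the .meta suffix is
the usual vdsm layout for file volumes, and it should only be edited with the
VM down and after taking a backup):

  $ grep LEGALITY /rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.meta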


On Thu, Jan 9, 2020 at 2:19 PM David Johnson 
wrote:

> We had a drive in our NAS fail, but afterwards one of our VMs will not
> start.
>
> The boot drive on the VM is (as near as I can tell) the only drive
> affected.
>
> I confirmed that the disk images (active and snapshot) are both valid with
> qemu.
>
> I followed the instructions at
> https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
> identify the snapshot images that were marked "invalid" and marked them as
> valid.
>
> update images set imagestatus=1 where imagestatus=4;
>
>
>
> Log excerpt from attempt to start VM:
> 2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
> prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
> imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
> leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
> from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
> 2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
> task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
> 2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
> in prepareImage
> raise se.prepareIllegalVolumeError(volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
> 2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> aborting: Task is aborted: "Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)" - code 227 (task:1181)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
> FINISH prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
> (vm:949)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
> _run
> self._devices = self._make_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
> _make_devices
> disk_objs = self._perform_host_local_adjustment()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
> _perform_host_local_adjustment
> self._preparePathsForDrives(disk_params)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
> _preparePathsForDrives
> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
> prepareVolumePath
> raise vm.VolumeError(drive)
> VolumeError: Bad volume specification {'address': {'bus': '0',
> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
> 'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
> '/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
> 'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
> 2020-01-09 02:18:44,934-0600 INFO  (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') Changed state to Down: Bad
> volume specification {'address': {'bus': '0', 'controller': '0', 'type':
> 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
