[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-22 Thread Nir Soffer
On Tue, Sep 22, 2020 at 4:18 AM Jeremey Wise  wrote:
>
>
> Well.. to know how to do it with Curl is helpful.. but I think I did
>
> [root@odin ~]#  curl -s -k --user admin@internal:blahblah \
>     https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep '<name>'
> <name>data</name>
> <name>hosted_storage</name>
> <name>ovirt-image-repository</name>
>
> What I guess I did is translate that field --sd-name my-storage-domain
> to a "volume" name... My question is.. where do those fields come from?  And
> which would you typically place all your VMs into?
>
>
>
>
> I just took a guess..  and figured "data" sounded like a good place to stick 
> raw images to build into VM...
>
> [root@medusa thorst.penguinpages.local:_vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
>  --cafile 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
>  --sd-name data --disk-sparse 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 21474836480
> Disk initial size: 11574706176
> Disk name: ns02.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Disk ID: 9ccb26cf-dd4a-4c9a-830c-ee084074d7a1
> Creating image transfer...
> Transfer ID: 3a382f0b-1e7d-4397-ab16-4def0e9fe890
> Transfer host name: medusa
> Uploading image...
> [ 100.00% ] 20.00 GiB, 249.86 seconds, 81.97 MiB/s
> Finalizing image transfer...
> Upload completed successfully
> [root@medusa thorst.penguinpages.local:_vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
>  --cafile 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
>  --sd-name data --disk-sparse 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_v^C
> [root@medusa thorst.penguinpages.local:_vmstore]# ls
> example.log  f118dcae-6162-4e9a-89e4-f30ffcfb9ccf  ns02_20200910.tgz  
> ns02.qcow2  ns02_var.qcow2
> [root@medusa thorst.penguinpages.local:_vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
>  --cafile 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
>  --sd-name data --disk-sparse 
> /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_var.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 107374182400
> Disk initial size: 107390828544
> Disk name: ns02_var.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Disk ID: 26def4e7-1153-417c-88c1-fd3dfe2b0fb9
> Creating image transfer...
> Transfer ID: 41518eac-8881-453e-acc0-45391fd23bc7
> Transfer host name: medusa
> Uploading image...
> [  16.50% ] 16.50 GiB, 556.42 seconds, 30.37 MiB/s
>
> Now with those ID numbers, and since it kept its name (very helpful)... I am
> able to reconstitute the VM
>
>
> VM boots fine.  Fixing VLANs and manual MACs on vNICs.. but this process
> worked fine.
>
> Thanks for the input.   Would be nice to have a GUI "upload" via http into
> the system :)

We have upload via the GUI, but from your mail I understood the images are on
the hypervisor, so copying them to the machine running the browser would be a
waste of time.

Go to Storage > Disks and click "Upload" or "Download".

But this is less efficient, less correct, and does not support all the
features, like converting image format and controlling sparseness.

For uploading and downloading qcow2 images it should be fine, but if you have
a qcow2 image and want to upload it in raw format, this can be done only using
the API, for example with upload_disk.py.
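
For example, a sketch of such a conversion upload, reusing the engine URL and
credential files from the commands earlier in this thread (the paths are
placeholders; --disk-format is described in the script's --help output):

# upload a qcow2 file as a sparse raw disk on the "data" storage domain
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
    --engine-url https://ovirte01.penguinpages.local/ \
    --username admin@internal \
    --password-file /path/to/.ovirt.password \
    --cafile /path/to/.ovirte01_pki-resource.cer \
    --sd-name data \
    --disk-format raw \
    --disk-sparse \
    /path/to/image.qcow2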

> On Mon, Sep 21, 2020 at 2:19 PM Nir Soffer  wrote:
>>
>> On Mon, Sep 21, 2020 at 8:37 PM penguin pages  wrote:
>> >
>> >
>> > I pasted an old / incorrect file path example above.. But here is a cleaner
>> > version with the error I am trying to root cause
>> >
>> > [root@odin vmstore]# python3 
>> > /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py 
>> > --engine-url https://ovirte01.penguinpages.local/ --username 
>> > admin@internal --password-file 
>> > /gluster_bricks/vmstore/vmstore/.ovirt.password --cafile 
>> > /gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name 
>> > vmstore --disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
>> > Checking image...
>> 

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Jeremey Wise
Well.. to know how to do it with Curl is helpful.. but I think I did

[root@odin ~]#  curl -s -k --user admin@internal:blahblah \
    https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep '<name>'
<name>data</name>
<name>hosted_storage</name>
<name>ovirt-image-repository</name>

What I guess I did is translate that field --sd-name my-storage-domain
to a "volume" name... My question is.. where do those fields come from?
And which would you typically place all your VMs into?



I just took a guess..  and figured "data" sounded like a good place to
stick raw images to build into VM...

[root@medusa thorst.penguinpages.local:_vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url https://ovirte01.penguinpages.local/ --username admin@internal
--password-file
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
--cafile
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
--sd-name data --disk-sparse
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 21474836480
Disk initial size: 11574706176
Disk name: ns02.qcow2
Disk backup: False
Connecting...
Creating disk...
Disk ID: 9ccb26cf-dd4a-4c9a-830c-ee084074d7a1
Creating image transfer...
Transfer ID: 3a382f0b-1e7d-4397-ab16-4def0e9fe890
Transfer host name: medusa
Uploading image...
[ 100.00% ] 20.00 GiB, 249.86 seconds, 81.97 MiB/s
Finalizing image transfer...
Upload completed successfully
[root@medusa thorst.penguinpages.local:_vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url https://ovirte01.penguinpages.local/ --username admin@internal
--password-file
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
--cafile
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
--sd-name data --disk-sparse
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_v^C
[root@medusa thorst.penguinpages.local:_vmstore]# ls
example.log  f118dcae-6162-4e9a-89e4-f30ffcfb9ccf  ns02_20200910.tgz
 ns02.qcow2  ns02_var.qcow2
[root@medusa thorst.penguinpages.local:_vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url https://ovirte01.penguinpages.local/ --username admin@internal
--password-file
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
--cafile
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
--sd-name data --disk-sparse
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_var.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 107374182400
Disk initial size: 107390828544
Disk name: ns02_var.qcow2
Disk backup: False
Connecting...
Creating disk...
Disk ID: 26def4e7-1153-417c-88c1-fd3dfe2b0fb9
Creating image transfer...
Transfer ID: 41518eac-8881-453e-acc0-45391fd23bc7
Transfer host name: medusa
Uploading image...
[  16.50% ] 16.50 GiB, 556.42 seconds, 30.37 MiB/s

Now with those ID numbers, and since it kept its name (very helpful)... I am
able to reconstitute the VM

VM boots fine.  Fixing VLANs and manual MACs on vNICs.. but this process
worked fine.

Thanks for the input.   Would be nice to have a GUI "upload" via http into
the system :)







On Mon, Sep 21, 2020 at 2:19 PM Nir Soffer  wrote:

> On Mon, Sep 21, 2020 at 8:37 PM penguin pages 
> wrote:
> >
> >
> > I pasted an old / incorrect file path example above.. But here is a cleaner
> > version with the error I am trying to root cause
> >
> > [root@odin vmstore]# python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
> --engine-url https://ovirte01.penguinpages.local/ --username
> admin@internal --password-file
> /gluster_bricks/vmstore/vmstore/.ovirt.password --cafile
> /gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name
> vmstore --disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
> > Checking image...
> > Image format: qcow2
> > Disk format: cow
> > Disk content type: data
> > Disk provisioned size: 21474836480
> > Disk initial size: 431751168
> > Disk name: ns01.qcow2
> > Disk backup: False
> > Connecting...
> > Creating disk...
> > Traceback (most recent call last):
> >   File
> "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line
> 262, in <module>
> > name=args.sd_name
> >   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line
> 7697, in add
> > return self._internal_add(disk, headers, query, wait)
> >   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
> 232, in _internal_add
> > return future.wait() if wait else future
> >   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
> 55, in wait
> > return self._code(resp

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Strahil Nikolov via Users
 Have you tried to upload your qcow2 disks via the UI?
Maybe you can create a blank VM (with disks of the same size) and then replace
the disks with your qcow2 files from KVM (works only on file-based storage
like Gluster/NFS).
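
For anyone trying that route, a rough sketch of the idea (heavily hedged: the
UUIDs and paths below are hypothetical, the placeholder disk's image path must
be looked up from the disk ID the UI shows, and the VM must be powered off):

# overwrite the blank disk's image with the qcow2 brought over from KVM
# (file-based storage only, e.g. Gluster/NFS mounted under /rhev)
qemu-img convert -p -O qcow2 /backup/ns01.qcow2 \
    /rhev/data-center/mnt/glusterSD/host:_vmstore/<sd-uuid>/images/<disk-uuid>/<vol-uuid>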

Best Regards,
Strahil Nikolov






On Monday, September 21, 2020, 09:12:09 GMT+3, Jeremey Wise
 wrote:






I rebuilt my lab environment.   And there are four or five VMs that would
really help if I did not have to rebuild them.

oVirt, as I am now finding, sets out its infrastructure such that I cannot
just use the older means of placing .qcow2 files in one folder and .xml files
in another and having them show up when services restart.

How do I import VMs from files?  

I found this article but it implies the VM is running:
https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt.html
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers#Adding_KVM_as_an_External_Provider

I need a way to import a file.  Even if it means temporarily hosting on KVM
on one of the hosts to then bring it in once it is up.


Thanks
-- 

penguinpages 


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 8:37 PM penguin pages  wrote:
>
>
> I pasted an old / incorrect file path example above.. But here is a cleaner
> version with the error I am trying to root cause
>
> [root@odin vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file /gluster_bricks/vmstore/vmstore/.ovirt.password --cafile 
> /gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name vmstore 
> --disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 21474836480
> Disk initial size: 431751168
> Disk name: ns01.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Traceback (most recent call last):
>   File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", 
> line 262, in <module>
> name=args.sd_name
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7697, 
> in add
> return self._internal_add(disk, headers, query, wait)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, 
> in _internal_add
> return future.wait() if wait else future
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in 
> wait
> return self._code(response)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, 
> in callback
> self._check_fault(response)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, 
> in _check_fault
> self._raise_error(response, body)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, 
> in _raise_error
> raise error
> ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault detail is 
> "Entity not found: vmstore". HTTP response code is 404.

You used:

--sd-name vmstore

But there is no such storage domain in this setup.

Check the storage domains on this setup. One (ugly) way is:

$ curl -s -k --user admin@internal:password \
    https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep '<name>'
<name>export1</name>
<name>iscsi1</name>
<name>iscsi2</name>
<name>nfs1</name>
<name>nfs2</name>
<name>ovirt-image-repository</name>
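
A slightly cleaner variant, if xmllint (from libxml2) is available, pulls the
names out with XPath instead of grepping the raw XML (a sketch with the same
credentials; some libxml2 versions print the text nodes run together):

curl -s -k --user admin@internal:password \
    https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ \
    | xmllint --xpath '//storage_domain/name/text()' -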

Nir


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread penguin pages

I pasted an old / incorrect file path example above.. But here is a cleaner
version with the error I am trying to root cause

[root@odin vmstore]# python3 
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
https://ovirte01.penguinpages.local/ --username admin@internal --password-file 
/gluster_bricks/vmstore/vmstore/.ovirt.password --cafile 
/gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name vmstore 
--disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 21474836480
Disk initial size: 431751168
Disk name: ns01.qcow2
Disk backup: False
Connecting...
Creating disk...
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
262, in <module>
name=args.sd_name
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7697, 
in add
return self._internal_add(disk, headers, query, wait)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in 
_internal_add
return future.wait() if wait else future
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in 
wait
return self._code(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in 
callback
self._check_fault(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in 
_check_fault
self._raise_error(response, body)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in 
_raise_error
raise error
ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault detail is 
"Entity not found: vmstore". HTTP response code is 404.
[root@odin vmstore]#


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread penguin pages
Thanks for the reply.  I read this late at night and assumed the "engine url"
meant the old KVM system.. but it means the oVirt engine.  I then translated
your helpful notes... but am likely missing some parameter.

#
# Install import client
dnf install ovirt-imageio-client python3-ovirt-engine-sdk4

# save oVirt engine cert on gluster share (had to use the GUI for now, as I
# could not figure out a wget/curl way)
https://ovirte01.penguinpages.local/ovirt-engine/
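
# (for the record, the same cert can be fetched without the GUI from the
# engine's standard pki-resource endpoint; quoting matters because of the '&')
curl -k -o /gluster_bricks/engine/engine/ovirte01_pki-resource.cer \
    'https://ovirte01.penguinpages.local/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'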



mv /gluster_bricks/engine/engine/ovirte01_pki-resource.cer 
/gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
chmod 440 /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
chown root:kvm /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
# Put oVirt Password in a file for use
echo "blahblahblah" > /gluster_bricks/engine/engine/.ovirt.password
chmod 440 /gluster_bricks/engine/engine/.ovirt.password
chown root:kvm /gluster_bricks/engine/engine/.ovirt.password
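# quick sanity check that the CA file and password file work against the API
# before running the upload script (a sketch; expects XML from the API root)
curl -s --cacert /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer \
    --user "admin@internal:$(cat /gluster_bricks/engine/engine/.ovirt.password)" \
    https://ovirte01.penguinpages.local/ovirt-engine/api | head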
# upload the qcow2 images to oVirt
[root@odin vmstore]# pwd
/gluster_bricks/vmstore/vmstore
[root@odin vmstore]# ls -alh
total 385M
drwxr-xr-x.   7 vdsm kvm  8.0K Sep 21 13:20 .
drwxr-xr-x.   3 root root   21 Sep 16 23:42 ..
-rw-r--r--.   1 root root0 Sep 21 13:20 example.log
drwxr-xr-x.   6 vdsm kvm64 Sep 17 21:28 f118dcae-6162-4e9a-89e4-f30ffcfb9ccf
drw---. 262 root root 8.0K Sep 17 01:29 .glusterfs
drwxr-xr-x.   2 root root   45 Sep 17 08:15 isos
-rwxr-xr-x.   2 root root  64M Sep 17 00:08 ns01_20200910.tgz
-rw-rw.   2 qemu qemu  64M Sep 17 11:20 ns01.qcow2
-rw-rw.   2 qemu qemu  64M Sep 17 13:34 ns01_var.qcow2
-rwxr-xr-x.   2 root root  64M Sep 17 00:09 ns02_20200910.tgz
-rw-rw.   2 qemu qemu  64M Sep 17 11:21 ns02.qcow2
-rw-rw.   2 qemu qemu  64M Sep 17 13:34 ns02_var.qcow2
drwxr-xr-x.   2 root root   38 Sep 17 10:19 qemu
drwxr-xr-x.   3 root root 280K Sep 21 08:21 .shard
[root@odin vmstore]# python3 
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
> --engine-url https://ovirte01.penguinpages.local/ \
> --username admin@internal \
> --password-file /gluster_bricks/engine/engine/.ovirt.password \
> --cafile /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer \
> --sd-name vmstore \
> --disk-sparse \
> /gluster_bricks/vmstore/vmstore.qcow2
Checking image...
qemu-img: Could not open '/gluster_bricks/vmstore/vmstore.qcow2': Could not 
open '/gluster_bricks/vmstore/vmstore.qcow2': No such file or directory
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
210, in <module>
image_info = get_image_info(args.filename)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
133, in get_image_info
["qemu-img", "info", "--output", "json", filename])
  File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['qemu-img', 'info', '--output', 
'json', '/gluster_bricks/vmstore/vmstore.qcow2']' returned non-zero exit status 
1.
[root@odin vmstore]#


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Jeremey Wise
Ugh.. this is bad.

On the hypervisor where the files are located ...

My customers send me tar files with VMs all the time.   And I send them.
This will make it much more difficult if I can't import xml / qcow2 files.



This cluster.. is my home cluster and so..  three servers.. and they were
CentOS 7 + VDO + Gluster...  I used to have the qemu directory from all
three linked on gluster so if one server died.. or I messed it up and it
needed repair... I could still start up and run the VMs.

Old cluster notes:

 Optional: Redirect Default KVM VM Storage location.  Ex:  /data/gv0/vms
on thor

# <Broken with HCI.. not sure of the process here yet... hold off till oVirt
# HCI engine issues are worked out on how it enables new VM definitions to be
# shared if one or more nodes goes down  2020-09-17>

#  Pool default XML configuration edited.

virsh pool-edit default

<pool type='dir'>
  <name>default</name>
  <uuid>d3ae9e9a-8bc8-4a17-8476-3fe3334204f3</uuid>
  <capacity unit='bytes'>37734498304</capacity>
  <allocation unit='bytes'>27749486592</allocation>
  <available unit='bytes'>9985011712</available>
  <source>
  </source>
  <target>
    #   <path>/var/lib/libvirt/images</path>
    <path>/data/gv0/vms</path>
    <permissions>
      <mode>0711</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>


#  For now each KVM host has the shared folder linked.  Not sure how, without
# a restart of libvirtd, to get peers to easily see the configuration file.
# Can run an import command but need to test.

# To enable multiple KVM nodes in a shared environment to be able to take
# over the roles of peers in the event of one failing, the XML files stored in
# /etc/libvirt/qemu/ need to be on a shared device.

# Ex:  Move medusa /etc/libvirt/qemu/   to be on gluster share volume space
/data/gv0/vms/medusa

systemctl stop libvirtd

mkdir -p /media/vmstore/qemu

mv -f /etc/libvirt/qemu/* /media/vmstore/qemu

ln -s /media/vmstore/qemu /etc/libvirt/qemu



systemctl daemon-reload

systemctl start libvirt-guests.service

systemctl enable libvirt-guests.service

systemctl status libvirt-guests.service



As I tried to use the engine setup it became apparent my manual libvirtd
setup was NOT going to be in any way helpful with oVirt's way of using
it...  Ok... I can learn new things..


I had to back up and remove all data (see other post about errors where the
HCI wizard fails if it detects an existing VDO volume)...  So I moved my four
or so important VMs off to an external mount.

I now need a way to bring them back.  I really can't spend weeks rebuilding
those infrastructure VMs.  And I don't have a fourth server to rebuild as a
KVM host, import the VMs there, and then slurp them out over the oVirt to
libvirt connection.
Plus.. that means anytime someone sends me a tar of qcow2 and xml..  I have
to re-host to export..  :P



On Mon, Sep 21, 2020 at 8:18 AM Nir Soffer  wrote:

> On Mon, Sep 21, 2020 at 9:11 AM Jeremey Wise 
> wrote:
> >
> >
> > I rebuilt my lab environment.   And there are four or five VMs that
> > would really help if I did not have to rebuild them.
> >
> > oVirt, as I am now finding, sets out its infrastructure such that I cannot
> > just use the older means of placing .qcow2 files in one folder and .xml
> > files in another and having them show up when services restart.
> >
> > How do I import VMs from files?
>
> You did not share the oVirt version, so I'm assuming 4.4.
>
> The simplest way is to upload the qcow2 images to oVirt, and create a new
> VM with the new disk.
>
> On the hypervisor where the files are located, install the required
> packages:
>
> dnf install ovirt-imageio-client python3-ovirt-engine-sdk4
>
> And upload the image:
>
> python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
> --engine-url https://my.engine/ \
> --username admin@internal \
> --password-file /path/to/password/file \
> --cafile /path/to/cafile \
> --sd-name my-storage-domain \
> --disk-sparse \
> /path/to/image.qcow2
>
> This will upload the file in qcow2 format to whatever type of storage you
> have. You can change the format if you like using --disk-format. See --help
> for all the options.
>
> We also support importing from libvirt, but for this you need to have the
> VM defined in libvirt. If you don't have this, it will probably be easier
> to upload the images and create a new VM in oVirt.
>
> Nir
>
> > I found this article but it implies the VM is running:
> https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt.html
> >
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers#Adding_KVM_as_an_External_Provider
> >
> > I need a way to import a file.  Even if it means temporarily hosting on
> > KVM on one of the hosts to then bring it in once it is up.
> >
> >
> > Thanks
> > --
> >
> > penguinpages

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 9:11 AM Jeremey Wise  wrote:
>
>
> I rebuilt my lab environment.   And there are four or five VMs that would
> really help if I did not have to rebuild them.
>
> oVirt, as I am now finding, sets out its infrastructure such that I cannot
> just use the older means of placing .qcow2 files in one folder and .xml
> files in another and having them show up when services restart.
>
> How do I import VMs from files?

You did not share the oVirt version, so I'm assuming 4.4.

The simplest way is to upload the qcow2 images to oVirt, and create a new
VM with the new disk.

On the hypervisor where the files are located, install the required packages:

dnf install ovirt-imageio-client python3-ovirt-engine-sdk4

And upload the image:

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
--engine-url https://my.engine/ \
--username admin@internal \
--password-file /path/to/password/file \
--cafile /path/to/cafile \
--sd-name my-storage-domain \
--disk-sparse \
/path/to/image.qcow2

This will upload the file in qcow2 format to whatever type of storage you
have. You can change the format if you like using --disk-format. See --help
for all the options.

We also support importing from libvirt, but for this you need to have the VM
defined in libvirt. If you don't have this, it will probably be easier to
upload the images and create a new VM in oVirt.
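
If the original domain XML files were kept, a minimal sketch of getting a VM
"defined in libvirt" on one of the hosts, so the KVM external provider can
import it (ns02.xml is a stand-in for one of the saved files; its disk paths
must be valid on that host):

virsh define /path/to/ns02.xml
virsh list --all    # the defined VM should show up as "shut off"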

Nir

> I found this article but it implies the VM is running:
> https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt.html
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers#Adding_KVM_as_an_External_Provider
>
> I need a way to import a file.  Even if it means temporarily hosting on KVM
> on one of the hosts to then bring it in once it is up.
>
>
> Thanks
> --
>
> penguinpages