[ovirt-users] Cinder scheduled snapshots

2016-04-10 Thread Bond, Darryl
Is there any plan to provide a means to schedule snapshots for cinder?



Darryl




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: Re: ***UNCHECKED*** Re: kvm vcpu0 unhandled rdmsr

2016-04-10 Thread Yaniv Kaul
On Sun, Apr 10, 2016 at 6:05 PM, gregor  wrote:

> Hi,
>
> Does anybody have a last tip? The third Windows Server 2012 R2 VM I
> installed is now damaged, and tomorrow I will move my host back to
> VMware and leave oVirt.
>

It's a QEMU/KVM issue - let me see if I can get someone from the KVM
development team to get the details from you.
Y.



>
> regards
> gregor
>
>  Forwarded Message 
> Subject: Re: [ovirt-users] ***UNCHECKED*** Re:  kvm vcpu0 unhandled rdmsr
> Date: Mon, 4 Apr 2016 15:06:52 +0200
> From: gregor 
> To: Yaniv Kaul 
> CC: users 
>
> Hi,
>
> the host and VM are all up-to-date with the latest packages for CentOS 7.*.
>
> In /proc/cpuinfo I see "nx" in the flags list; the full list is at the
> end of the mail.
>
> Is it possible that this problem destroys the Windows Server 2012 R2 VM?
> I am now starting the third installation; hopefully this time it will not
> get damaged. If it fails again I will have to use another virtualization
> provider and leave oVirt, and I was so happy to leave VMware :°(
>
> This is the command line for a VM (obtained with ps aux ...):
> /usr/libexec/qemu-kvm -name srv02 -S -machine
> pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Westmere -m
> size=2097152k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
> node,nodeid=0,cpus=0,mem=2048 -uuid 6765fd03-ac0d-49ea-b8ba-cf10c60d3968
> -smbios type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-2.1511.el7.centos.2.10,serial=39343937-3439-5A43-3135-353130324542,uuid=6765fd03-ac0d-49ea-b8ba-cf10c60d3968
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-srv02/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2016-04-03T21:24:06,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/0001-0001-0001-0001-033d/4443edf0-54aa-4ef5-84c2-a433813f304a/images/f596f9a8-c6c4-41b8-b547-7f83829807fe/5028abbd-35c8-4dcd-95a0-3d0c61dfc2b7,if=none,id=drive-virtio-disk0,format=raw,serial=f596f9a8-c6c4-41b8-b547-7f83829807fe,cache=none,werror=stop,rerror=stop,aio=threads
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:57,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/6765fd03-ac0d-49ea-b8ba-cf10c60d3968.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/6765fd03-ac0d-49ea-b8ba-cf10c60d3968.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> port=5904,tls-port=5905,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on
> -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
>
> Here is the full flags list:
> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
> clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
> nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx
> smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe
> popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm arat epb
> pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust
> bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc
>
> On 04/04/16 08:52, Yaniv Kaul wrote:
> >
> >
> > On Sun, Apr 3, 2016 at 10:07 PM, gregor wrote:
> >
> > Update: The problem occurs when a VM reboots.
> > When I change the CPU Type from default "Intel Haswell-noTSX" to
> > "Westmere" the error is gone.
> >
> >
> > The error ""kvm ... vcpu0 unhandled rdmsr ..." is quite harmless.
> > I assume you are running the latest qemu/kvm packages.
> > Can you ensure NX is enabled on your host?
> > In any case, this is most likely a qemu/kvm issue - 

[ovirt-users] Fwd: Re: ***UNCHECKED*** Re: kvm vcpu0 unhandled rdmsr

2016-04-10 Thread gregor
Hi,

Does anybody have a last tip? The third Windows Server 2012 R2 VM I
installed is now damaged, and tomorrow I will move my host back to
VMware and leave oVirt.

regards
gregor

 Forwarded Message 
Subject: Re: [ovirt-users] ***UNCHECKED*** Re:  kvm vcpu0 unhandled rdmsr
Date: Mon, 4 Apr 2016 15:06:52 +0200
From: gregor 
To: Yaniv Kaul 
CC: users 

Hi,

the host and VM are all up-to-date with the latest packages for CentOS 7.*.

In /proc/cpuinfo I see "nx" in the flags list; the full list is at the
end of the mail.

Is it possible that this problem destroys the Windows Server 2012 R2 VM?
I am now starting the third installation; hopefully this time it will not
get damaged. If it fails again I will have to use another virtualization
provider and leave oVirt, and I was so happy to leave VMware :°(

This is the command line for a VM (obtained with ps aux ...):
/usr/libexec/qemu-kvm -name srv02 -S -machine
pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Westmere -m
size=2097152k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
node,nodeid=0,cpus=0,mem=2048 -uuid 6765fd03-ac0d-49ea-b8ba-cf10c60d3968
-smbios type=1,manufacturer=oVirt,product=oVirt
Node,version=7-2.1511.el7.centos.2.10,serial=39343937-3439-5A43-3135-353130324542,uuid=6765fd03-ac0d-49ea-b8ba-cf10c60d3968
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-srv02/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2016-04-03T21:24:06,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/0001-0001-0001-0001-033d/4443edf0-54aa-4ef5-84c2-a433813f304a/images/f596f9a8-c6c4-41b8-b547-7f83829807fe/5028abbd-35c8-4dcd-95a0-3d0c61dfc2b7,if=none,id=drive-virtio-disk0,format=raw,serial=f596f9a8-c6c4-41b8-b547-7f83829807fe,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:57,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/6765fd03-ac0d-49ea-b8ba-cf10c60d3968.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/6765fd03-ac0d-49ea-b8ba-cf10c60d3968.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
port=5904,tls-port=5905,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on

Here is the full flags list:
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx
smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe
popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm arat epb
pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust
bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc
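
Since the question in this thread is whether NX shows up among those flags, checking programmatically is straightforward. A minimal sketch (the /proc/cpuinfo path is the standard location on Linux; the helper function itself is hypothetical):

```python
def cpu_has_flag(cpuinfo_text, flag):
    """Return True if `flag` appears in any 'flags' line of
    /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # Everything after the colon is a space-separated flag list.
            _, _, flags = line.partition(":")
            if flag in flags.split():
                return True
    return False

# Usage on the host:
# with open("/proc/cpuinfo") as f:
#     print(cpu_has_flag(f.read(), "nx"))
```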

On 04/04/16 08:52, Yaniv Kaul wrote:
> 
> 
> On Sun, Apr 3, 2016 at 10:07 PM, gregor wrote:
> 
> Update: The problem occurs when a VM reboots.
> When I change the CPU Type from default "Intel Haswell-noTSX" to
> "Westmere" the error is gone.
> 
> 
> The error ""kvm ... vcpu0 unhandled rdmsr ..." is quite harmless. 
> I assume you are running the latest qemu/kvm packages.
> Can you ensure NX is enabled on your host?
> In any case, this is most likely a qemu/kvm issue - the command line of
> the VM and information regarding the qemu packages and host versions
> will be needed.
> Y.
>  
> 
> 
> But which CPU type is best now so that I don't lose performance?
> 
> Host CPU: Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz
> 
> regards
> gregor
> 
> On 03/04/16 20:36, gregor wrote:
> > Hi,
> >
> > on one Host I get very often the 

Re: [ovirt-users] Creating templates in a blocking fashion from Python SDK

2016-04-10 Thread nicolas

On 2016-04-10 11:16, Yaniv Kaul wrote:

On Sun, Apr 10, 2016 at 11:02 AM, Yedidyah Bar David 
wrote:


On Sun, Apr 10, 2016 at 10:43 AM, Barak Korren 
wrote:

Hi there, I use the following Python SDK snippet to create a template
from an existing VM:

    templ = ovirt.templates.add(
        ovirtsdk.xml.Template(vm=vm, name=vm.name)
    )

This seems to launch a template creation task in a non-blocking
manner, which makes the next command I run, which tries to delete the
VM, fail because the VM is still locked by the template creation task.

Is there a way to block on the template creation task and not return
to the code until it finishes?


I don't think so, but you can loop waiting, see e.g.:



http://www.ovirt.org/develop/api/pythonapi/#create-a-template-from-vm


I wish we could have an extra parameter on the Python SDK that would
do this exact loop for us, since essentially most use cases require this.
I'm not sure it is relevant only for template creation, btw.
What is also more annoying is that this loop (IIRC) will never break
if something bad happens and the template ends up in a state != down
(such as locked).
(Same issue I've just had with host installation - I've waited
endlessly for it to be in 'up' state, only to find out it ended in
'installed_failed' state.)
Y.



I have had some issues related to this and have opened a couple of BZs
(for different issues, though):


https://bugzilla.redhat.com/show_bug.cgi?id=1245630

https://bugzilla.redhat.com/show_bug.cgi?id=1315874

Currently the "poorman's wait method" that I detailed on the second BZ 
is the method I'm following to check whether state is "Ok".


Regards.
 



--
Didi








Re: [ovirt-users] looking for some ISV backup software which integrates the backup API

2016-04-10 Thread Liron Aravot
On Wed, Apr 6, 2016 at 1:55 PM, Pavel Gashev  wrote:

> Nathanaël,
>
>
> I can share my backup experience. The current backup/restore API
> integration is not optimal for incremental backups. It's fine to read the
> whole disk when you back it up the first time, but you have to read the
> whole disk each time to find the changes for incremental backups. This
> creates a huge disk load.
>
> For optimal backups it's necessary to keep a list of blocks changed since
> the previous backup.
> See http://wiki.qemu.org/Features/IncrementalBackup
>
> Since this is not implemented in oVirt, the best option is to run a backup
> agent inside the VM. Yes, it's not very convenient, but for now it's
> necessary.
>
>
>
>
> On 05/04/16 21:32, "users-boun...@ovirt.org on behalf of Nathanaël
> Blanchet"  wrote:
>
> >Hello,
> >
> >We are about to change our backup provider, and I find it is a great
> >chance to choose a full supported ovirt backup solution.
> >I currently use this python script vm-backup-scheduler
> >(https://github.com/wefixit-AT/oVirtBackup) but it is not the workflow
> >officially suggested by the community
> >(
> https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration/
> ).
> >
> >I've been looking for a long time for an ISV that supports such an API,
> >but the only one I found is Acronis Backup Advanced, suggested here:
> https://access.redhat.com/ecosystem/search/#/ecosystem/Red%20Hat%20Enterprise%20Virtualization?category=Software
> >I ran the trial version, but it doesn't seem to do better than the
> >vm-backup-scheduler script, and it doesn't seem to use the backup API
> >(attach a clone as a disk to an existing vm).
> >Can you suggest some other ISV solutions, if they exist, or
> >share your backup experience?
>

Hi Nathanaël,
In addition to Acronis, you can also check:
SEP - Hybrid Backup
Symantec NetBackup
CommVault - Simpana

thanks,
Liron
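
Pavel's point about changed-block tracking can be illustrated with a toy sketch (not an oVirt or qemu API; just the bookkeeping a backup tool must do when the hypervisor exports no dirty-block list): hash fixed-size blocks of the image and keep only the blocks whose hash changed since the previous run.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks

def block_hashes(data, block_size=BLOCK_SIZE):
    """Map block index -> SHA-256 digest of that block of the image."""
    return {
        i // block_size: hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    }

def changed_blocks(old_hashes, data, block_size=BLOCK_SIZE):
    """Return (indices of blocks that differ from the previous backup,
    new hash map).  Note: the whole image must still be read to compute
    the hashes - exactly the I/O cost the qemu incremental-backup
    feature linked above is meant to avoid."""
    new_hashes = block_hashes(data, block_size)
    changed = sorted(i for i, h in new_hashes.items()
                     if old_hashes.get(i) != h)
    return changed, new_hashes
```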




Re: [ovirt-users] Creating templates in a blocking fashion from Python SDK

2016-04-10 Thread Nir Soffer
On Sun, Apr 10, 2016 at 1:16 PM, Yaniv Kaul  wrote:
>
>
> On Sun, Apr 10, 2016 at 11:02 AM, Yedidyah Bar David 
> wrote:
>>
>> On Sun, Apr 10, 2016 at 10:43 AM, Barak Korren  wrote:
>> > Hi there, I use the following Python SDK snippet to create a template
>> > from an existing VM:
>> >
>> > templ = ovirt.templates.add(
>> > ovirtsdk.xml.Template(vm=vm, name=vm.name)
>> > )
>> >
>> > This seems to launch a template creation task in a non-blocking
>> > manner, which makes the next command I run, which tries to delete the
>> > VM, fail because the VM is still locked by the template creation task.
>> >
>> > Is there a way to block on the template creation task and not return
>> > to the code until it finishes?
>>
>> I don't think so, but you can loop waiting, see e.g.:
>>
>> http://www.ovirt.org/develop/api/pythonapi/#create-a-template-from-vm
>
>
> I wish we could have an extra parameter on the Python SDK that would do
> this exact loop for us, since essentially most use cases require this.
> I'm not sure it is relevant only for template creation, btw.
> What is also more annoying is that this loop (IIRC) will never break if
> something bad happens and the template ends up in a state != down (such as
> locked).
> (Same issue I've just had with host installation - I've waited endlessly for
> it to be in 'up' state, only to find out it ended in 'installed_failed'
> state.)

What we need is a way to wait for events. For example, perform a request
that never completes, sending events as JSON/XML fragments with chunked
encoding.

Here is an example:
https://dev.twitter.com/streaming/overview
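
A sketch of the client side of that idea (the "events" endpoint is hypothetical; the sketch assumes the server emits newline-delimited JSON events over a chunked response, as the Twitter streaming API linked above does):

```python
import json

def iter_events(chunks):
    """Reassemble newline-delimited JSON events from an iterable of
    byte chunks (e.g. successive reads of a chunked HTTP response).
    Chunk boundaries need not align with event boundaries."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line.strip():  # skip keep-alive blank lines
                yield json.loads(line)

# Hypothetical usage against a streaming events endpoint:
# resp = urllib.request.urlopen("https://engine/api/events?stream=true")
# for event in iter_events(iter(lambda: resp.read(4096), b"")):
#     if event.get("name") == "template_created":
#         break
```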

Nir


Re: [ovirt-users] Creating templates in a blocking fashion from Python SDK

2016-04-10 Thread Yaniv Kaul
On Sun, Apr 10, 2016 at 11:02 AM, Yedidyah Bar David 
wrote:

> On Sun, Apr 10, 2016 at 10:43 AM, Barak Korren  wrote:
> > Hi there, I use the following Python SDK snippet to create a template
> > from an existing VM:
> >
> > templ = ovirt.templates.add(
> > ovirtsdk.xml.Template(vm=vm, name=vm.name)
> > )
> >
> > This seems to launch a template creation task in a non-blocking
> > manner, which makes the next command I run, which tries to delete the
> > VM, fail because the VM is still locked by the template creation task.
> >
> > Is there a way to block on the template creation task and not return
> > to the code until it finishes?
>
> I don't think so, but you can loop waiting, see e.g.:
>
> http://www.ovirt.org/develop/api/pythonapi/#create-a-template-from-vm


I wish we could have an extra parameter on the Python SDK that would do
this exact loop for us, since essentially most use cases require this.
I'm not sure it is relevant only for template creation, btw.
What is also more annoying is that this loop (IIRC) will never break if
something bad happens and the template ends up in a state != down (such as
locked).
(Same issue I've just had with host installation - I've waited endlessly
for it to be in 'up' state, only to find out it ended in 'installed_failed'
state.)
Y.


>
> --
> Didi


Re: [ovirt-users] Error: Storage format V3 is not supported

2016-04-10 Thread Allon Mureinik
Patch was merged to master; it should be available to you in the next release.

On Thu, Apr 7, 2016 at 7:33 PM, Allon Mureinik  wrote:

> I've reproduced the issue on my env, and cooked up a patch that seems to
> solve it for me, in case anyone wants to cherry-pick and help verify it:
> https://gerrit.ovirt.org/#/c/55836
>
>
> On Wed, Apr 6, 2016 at 4:06 AM, Alex R  wrote:
>
>> Thank you! This worked, though I wish I had documented the process
>> better, because I am not sure exactly what I did that helped.
>>
>> # cat
>> /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata
>>
>> CLASS=Backup
>> # changed
>> DESCRIPTION=eport_storage
>> IOOPTIMEOUTSEC=10
>> LEASERETRIES=3
>> LEASETIMESEC=60
>> LOCKPOLICY=
>> LOCKRENEWALINTERVALSEC=5
>> POOL_UUID=
>> # I removed this
>> REMOTE_PATH=..com:/mnt/export_ovirt/images # I
>> changed this to what is listed
>> ROLE=Regular
>> SDUUID=4be3f6ac-7946-4e7b-9ca2-11731c8ba236
>> TYPE=NFS
>> # changed
>> VERSION=0
>> # changed from 3 to 0   ### I have tried this before with no success, so it
>> must be a combination of other changes?
>> _SHA_CKSUM=16dac1d1c915c4d30433f35dd668dd35f60dc22c   # I
>> changed this to what was found in the logs
>>
>>
>>
>> -Alex
>>
>>
>>
>> On Sun, Apr 3, 2016 at 2:31 AM, Vered Volansky  wrote:
>>
>>> I've reported the issue:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1323462
>>>
>>> A verified workaround is to change the metadata Version to 0.
>>>
>>> This change will invalidate the checksum, so follow
>>> http://lists.ovirt.org/pipermail/users/2012-April/007149.html if you
>>> have any issues with adjusting it.
>>>
>>> Please let me know how this worked out for you.
>>>
>>> Regards,
>>> Vered
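
Recomputing the checksum after editing the metadata can be scripted. The sketch below ASSUMES the value is a SHA-1 digest over the concatenated KEY=VALUE lines with the _SHA_CKSUM line removed, which the 40-hex-digit values in this thread suggest but do not prove; verify against the procedure in the linked post before trusting the result:

```python
import hashlib

def metadata_checksum(text):
    """Recompute a domain-metadata checksum after editing the file.

    ASSUMPTION: SHA-1 over the metadata lines (newlines stripped) with
    the _SHA_CKSUM line excluded - confirm against vdsm / the linked
    workaround before writing the value back.
    """
    lines = [line for line in text.splitlines()
             if not line.startswith("_SHA_CKSUM=")]
    return hashlib.sha1("".join(lines).encode("utf-8")).hexdigest()
```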
>>>
>>> On Thu, Mar 31, 2016 at 5:28 AM, Alex R  wrote:
>>>
 I am trying to import a domain that I have used as an export on a
 previous install.  The previous install was no older than v3.5 and was
 built with the all-in-one plugin.  Before destroying that system I took a
 portable drive and made an export domain to export my VMs and templates.

 The new system is up to date and was built as a hosted engine.  When I
 try to import the domain I get the following error:

 "Error while executing action: Cannot add Storage. Storage format V3 is
 not supported on the selected host version."

 I just need to recover the VMs.

 I connect the USB hard drive to the host and make an export directory
 just like I did on the old host.

 # ls -ld /mnt/export_ovirt
 drwxr-xr-x. 5 vdsm kvm 4096 Mar  6 11:27 /mnt/export_ovirt

 I have tried both doing an NFS mount
 # cat /etc/exports.d/ovirt.exports
 /home/engineha  127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)
 /mnt/backup-vm/ 10.3.1.0/24(rw,anonuid=36,anongid=36,all_squash)
 127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)

 # cat
 /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata
 CLASS=Backup
 DESCRIPTION=eport_storage
 IOOPTIMEOUTSEC=10
 LEASERETRIES=3
 LEASETIMESEC=60
 LOCKPOLICY=
 LOCKRENEWALINTERVALSEC=5
 POOL_UUID=053926e4-e63d-450e-8aa7-6f1235b944c6
 REMOTE_PATH=/mnt/export_ovirt/images
 ROLE=Regular
 SDUUID=4be3f6ac-7946-4e7b-9ca2-11731c8ba236
 TYPE=LOCALFS
 VERSION=3
 _SHA_CKSUM=2e6e203168bd84f3dc97c953b520ea8f78119bf0

 # ls -l
 /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf

 -rw-r--r--. 1 vdsm kvm 9021 Mar  6 11:50
 /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf

 Thanks,
 Alex



Re: [ovirt-users] Creating templates in a blocking fashion from Python SDK

2016-04-10 Thread Yedidyah Bar David
On Sun, Apr 10, 2016 at 10:43 AM, Barak Korren  wrote:
> Hi there, I use the following Python SDK snippet to create a template
> from an existing VM:
>
> templ = ovirt.templates.add(
> ovirtsdk.xml.Template(vm=vm, name=vm.name)
> )
>
> This seems to launch a template creation task in a non-blocking
> manner, which makes the next command I run, which tries to delete the
> VM, fail because the VM is still locked by the template creation task.
>
> Is there a way to block on the template creation task and not return
> to the code until it finishes?

I don't think so, but you can loop waiting, see e.g.:

http://www.ovirt.org/develop/api/pythonapi/#create-a-template-from-vm
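
A minimal sketch of such a wait loop, with the timeout and failure-state checks discussed elsewhere in this thread (the helper is hypothetical, not part of the SDK; the `status.state` access in the usage comment follows the linked pythonapi example):

```python
import time

def wait_for_state(fetch, done_state="ok", bad_states=(),
                   timeout=600, interval=5, sleep=time.sleep):
    """Poll fetch() until it returns done_state.

    fetch() should return the current state string (e.g. a template's
    status).  Raises RuntimeError if a state listed in bad_states is
    reached, and TimeoutError if done_state is not reached in time,
    so the loop cannot spin forever on a failed operation.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch()
        if state == done_state:
            return state
        if state in bad_states:
            raise RuntimeError("operation ended in state %r" % state)
        sleep(interval)
    raise TimeoutError("not %r after %s seconds" % (done_state, timeout))

# Hypothetical usage with the SDK objects from the snippet above:
# templ = ovirt.templates.add(ovirtsdk.xml.Template(vm=vm, name=vm.name))
# wait_for_state(lambda: api.templates.get(id=templ.id).status.state)
```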
-- 
Didi


[ovirt-users] Creating templates in a blocking fashion from Python SDK

2016-04-10 Thread Barak Korren
Hi there, I use the following Python SDK snippet to create a template
from an existing VM:

templ = ovirt.templates.add(
ovirtsdk.xml.Template(vm=vm, name=vm.name)
)

This seems to launch a template creation task in a non-blocking
manner, which makes the next command I run, which tries to delete the
VM, fail because the VM is still locked by the template creation task.

Is there a way to block on the template creation task and not return
to the code until it finishes?

Thanks,

-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team