[PATCH 0/3] x86: clear vmcss on all cpus when doing kdump if necessary

2012-10-12 Thread Zhang Yanfei
Currently, kdump just makes all the logical processors leave VMX operation by
executing the VMXOFF instruction, so any VMCSs active on those processors may
be corrupted. But sometimes we need the VMCSs to debug guest images contained
in the host vmcore. To prevent the corruption, we should VMCLEAR the VMCSs
before executing the VMXOFF instruction.

The patch set provides an alternative way to clear the VMCSs related to
guests on all cpus when the host is doing kdump.
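
For illustration, a minimal sketch of how the resulting knob would be
toggled from userspace; the sysctl name and its integer type come from
patch 3/3, while the exact invocations here are assumed usage:

  # enable VMCLEAR-before-VMXOFF on crash (the variable defaults to 0)
  sysctl -w kernel.clear_loaded_vmcs=1

  # equivalently, through procfs
  echo 1 > /proc/sys/kernel/clear_loaded_vmcs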

zhangyanfei (3):
  x86/kexec: clear vmcss on all cpus if necessary
  KVM: make crash_clear_loaded_vmcss valid when kvm_intel is loaded
  sysctl: introduce a new interface to control kdump-vmcs-clear
behaviour

 Documentation/sysctl/kernel.txt |  8 ++++++++
 arch/x86/include/asm/kexec.h    |  3 +++
 arch/x86/kernel/crash.c         | 23 +++++++++++++++++++++++
 arch/x86/kvm/vmx.c              |  9 +++++++++
 kernel/sysctl.c                 | 10 ++++++++++
 5 files changed, 53 insertions(+), 0 deletions(-)


[PATCH 1/3] x86/kexec: clear vmcss on all cpus if necessary

2012-10-12 Thread Zhang Yanfei
This patch provides an alternative way to clear the VMCSs related to guests
on all cpus when doing kdump.

Signed-off-by: zhangyanfei <zhangyan...@cn.fujitsu.com>
---
 arch/x86/include/asm/kexec.h |  3 +++
 arch/x86/kernel/crash.c      | 23 +++++++++++++++++++++++
 2 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 317ff17..0692921 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -163,6 +163,9 @@ struct kimage_arch {
 };
 #endif
 
+extern int clear_loaded_vmcs_enabled;
+extern void (*crash_clear_loaded_vmcss)(void);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_KEXEC_H */
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 13ad899..947550e 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -16,6 +16,7 @@
 #include <linux/delay.h>
 #include <linux/elf.h>
 #include <linux/elfcore.h>
+#include <linux/module.h>
 
 #include <asm/processor.h>
 #include <asm/hardirq.h>
@@ -30,6 +31,24 @@
 
 int in_crash_kexec;
 
+/*
+ * If clear_loaded_vmcs_enabled is set, vmcss
+ * that are loaded on all cpus will be cleared
+ * via crash_clear_loaded_vmcss.
+ */
+int clear_loaded_vmcs_enabled;
+void (*crash_clear_loaded_vmcss)(void) = NULL;
+EXPORT_SYMBOL_GPL(crash_clear_loaded_vmcss);
+
+static void cpu_emergency_clear_loaded_vmcss(void)
+{
+	if (clear_loaded_vmcs_enabled &&
+	    crash_clear_loaded_vmcss &&
+	    cpu_has_vmx() && cpu_vmx_enabled()) {
+		crash_clear_loaded_vmcss();
+	}
+}
+
 #if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
 
 static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
@@ -46,6 +65,8 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
 #endif
crash_save_cpu(regs, cpu);
 
+   cpu_emergency_clear_loaded_vmcss();
+
/* Disable VMX or SVM if needed.
 *
 * We need to disable virtualization on all CPUs.
@@ -88,6 +109,8 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
 
kdump_nmi_shootdown_cpus();
 
+   cpu_emergency_clear_loaded_vmcss();
+
/* Booting kdump kernel with VMX or SVM enabled won't work,
 * because (among other limitations) we can't disable paging
 * with the virt flags.
-- 
1.7.1


[PATCH 2/3] KVM: make crash_clear_loaded_vmcss valid when kvm_intel is loaded

2012-10-12 Thread Zhang Yanfei
Signed-off-by: zhangyanfei <zhangyan...@cn.fujitsu.com>
---
 arch/x86/kvm/vmx.c | 9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4ff0ab9..f6a16b2 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -41,6 +41,7 @@
 #include <asm/i387.h>
 #include <asm/xcr.h>
 #include <asm/perf_event.h>
+#include <asm/kexec.h>
 
 #include "trace.h"
 
@@ -7230,6 +7231,10 @@ static int __init vmx_init(void)
if (r)
goto out3;
 
+#ifdef CONFIG_KEXEC
+   crash_clear_loaded_vmcss = vmclear_local_loaded_vmcss;
+#endif
+
vmx_disable_intercept_for_msr(MSR_FS_BASE, false);
vmx_disable_intercept_for_msr(MSR_GS_BASE, false);
vmx_disable_intercept_for_msr(MSR_KERNEL_GS_BASE, true);
@@ -7265,6 +7270,10 @@ static void __exit vmx_exit(void)
free_page((unsigned long)vmx_io_bitmap_b);
free_page((unsigned long)vmx_io_bitmap_a);
 
+#ifdef CONFIG_KEXEC
+   crash_clear_loaded_vmcss = NULL;
+#endif
+
kvm_exit();
 }
 
-- 
1.7.1


[PATCH 3/3] sysctl: introduce a new interface to control kdump-vmcs-clear behaviour

2012-10-12 Thread Zhang Yanfei
This patch exports the variable clear_loaded_vmcs_enabled to userspace.

Signed-off-by: zhangyanfei <zhangyan...@cn.fujitsu.com>
---
 Documentation/sysctl/kernel.txt |  8 ++++++++
 kernel/sysctl.c                 | 10 ++++++++++
 2 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index 6d78841..038148b 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -25,6 +25,7 @@ show up in /proc/sys/kernel:
 - bootloader_version	[ X86 only ]
 - callhome		[ S390 only ]
 - cap_last_cap
+- clear_loaded_vmcs	[ X86 only ]
 - core_pattern
 - core_pipe_limit
 - core_uses_pid
@@ -164,6 +165,13 @@ CAP_LAST_CAP from the kernel.
 
 ==============================================================
 
+clear_loaded_vmcs
+
+Controls whether VMCSs should be cleared when the host is doing kdump.
+Exports clear_loaded_vmcs_enabled from the kernel.
+
+==============================================================
+
 core_pattern:
 
 core_pattern is used to specify a core dumpfile pattern name.
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 4ab1187..3ab7d9c 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -63,6 +63,7 @@
 
 #include <asm/uaccess.h>
 #include <asm/processor.h>
+#include <asm/kexec.h>
 
 #ifdef CONFIG_X86
 #include <asm/nmi.h>
@@ -994,6 +995,15 @@ static struct ctl_table kern_table[] = {
.proc_handler   = proc_dointvec,
},
 #endif
+#ifdef CONFIG_KEXEC
+   {
+		.procname	= "clear_loaded_vmcs",
+		.data		= &clear_loaded_vmcs_enabled,
+   .maxlen = sizeof(int),
+   .mode   = 0644,
+   .proc_handler   = proc_dointvec,
+   },
+#endif
{ }
 };
 
-- 
1.7.1


[User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

Hey all,

I have a simple user question. I have a few LVM-based KVM guests and
want to back them up to files. The simple and nasty way would be to
create a complete output file with dd, which wastes a lot of space. So
I would like to back up an LVM volume to a file that only allocates the
space actually used on the volume. It would be great if the output file
were something like a qcow2 file, which could also simply be started
with KVM.

Is this supported by qemu_backup, or what would be the best way to do
this?

PS: Sorry for my bad English

Best Regards


Re: [PATCH 0/3] virtio-net: inline header support

2012-10-12 Thread Paolo Bonzini
Il 12/10/2012 00:37, Rusty Russell ha scritto:
> Michael S. Tsirkin <m...@redhat.com> writes:
>> On Thu, Oct 11, 2012 at 10:33:31AM +1030, Rusty Russell wrote:
>>> OK.  Well, Anthony wants qemu to be robust in this regard, so I am
>>> tempted to rework all the qemu drivers to handle arbitrary layouts.
>>> They could use a good audit anyway.
>>
>> I agree here. Still trying to understand whether we can agree to use
>> a feature bit for this, or not.
>
> I'd *like* to imply it by the new PCI layout, but if it doesn't work
> we'll add a new feature bit.
>
> I'm resisting a feature bit, since it constrains future implementations
> which could otherwise assume it.

Future implementations may certainly refuse to start if the feature is
not there.  Whether it's a good idea or not, well, that depends on how
much future they are.

Paolo

>>> This would become a glaring exception, but I'm tempted to fix it to 32
>>> bytes at the same time as we get the new pci layout (ie. for the virtio
>>> 1.0 spec).
>>
>> But this isn't a virtio-pci only issue, is it?
>> qemu has s390 bus with same limitation.
>> How can we tie it to pci layout?
>
> They can use a transport feature if they need to, of course.  But
> perhaps the timing with ccw will coincide with the fix, in which they
> don't need to, but it might be a bit late.
>
> Cornelia?
>
> Cheers,
> Rusty.



Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 08:52:32AM +0200, Lukas Laukamp wrote:
> [question quoted in full; trimmed]

If the VM is not running you can use qemu-img convert:

  qemu-img convert -f raw -O qcow2 /dev/vg/vm001 vm001-backup.qcow2

Note that cp(1) tries to make the destination file sparse (see the
--sparse option in the man page).  So you don't need to use qcow2, you
can use cp(1) to copy the LVM volume to a raw file.  It will not use
disk space for zero regions.

If the VM is running you need to use LVM snapshots or stop the VM
temporarily so a crash-consistent backup can be taken.
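
To make those options concrete, a minimal sketch; the volume and file
names are illustrative:

  # offline backup as qcow2 (stores only allocated data)
  qemu-img convert -f raw -O qcow2 /dev/vg/vm001 vm001-backup.qcow2

  # or: offline backup as a sparse raw file via cp(1)
  cp --sparse=always /dev/vg/vm001 vm001-backup.raw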

Stefan


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

On 12.10.2012 10:42, Stefan Hajnoczi wrote:
> [earlier messages quoted in full; trimmed]


Hello Stefan,

thanks for the fast reply. I will test this later. In my case this
would be an offline backup. For online backups I am thinking about a
separate system that makes incremental backups every day and a full
backup once a week. The main problem is that the systems are connected
over a WAN and I need encryption between them. Would it be possible to
do something like this: create an LVM snapshot for the backup, read the
snapshot from the remote backup system through an ssh tunnel, and save
the output to qcow2 files on the backup system? And in which format
could the incremental backups be stored?

Best Regards




Re: Using PCI config space to indicate config location

2012-10-12 Thread Michael S. Tsirkin
On Fri, Oct 12, 2012 at 08:59:36AM +1030, Rusty Russell wrote:
>>> For writes, the standard seems to be a commit latch.  We could abuse the
>>> generation count for this: the driver writes to it to commit config
>>> changes.
>>
>> I think this will work. There are a couple of things that bother me:
>>
>> This assumes read accesses have no side effects, and these are sometimes
>> handy.
>> Also the semantics for write aren't very clear to me.
>> I guess device must buffer data until generation count write?
>> This assumes the device has a buffer to store writes,
>> and it must track each byte written. I kind of dislike this
>> tracking of accessed bytes. Also, device would need to resolve conflicts
>> if any in some device specific way.
>
> It should be trivial to implement: you keep a scratch copy of the config
> space, and copy it to the master copy when they hit the latch.
>
> Implementation of this will show whether I've missed anything here, I
> think.

What I refer to: what happens if the driver does:
- write offset 1
- write offset 3
- hit the commit latch

?

-- 
MST


Re: Using PCI config space to indicate config location

2012-10-12 Thread Michael S. Tsirkin
On Fri, Oct 12, 2012 at 08:21:50PM +1030, Rusty Russell wrote:
> Michael S. Tsirkin <m...@redhat.com> writes:
>> [earlier discussion quoted in full; trimmed]
>>
>> What I refer to: what happens if driver does:
>> - write offset 1
>> - write offset 3
>> - hit commit latch
>
> - nothing
> - nothing
> - effect of offset 1 and offset 3 writes

OK so this means that you also need to track which bytes were written
in order to know to skip byte 2.
This is what I referred to. If instead we ask the driver to specify
offset/length explicitly, the device only needs to remember that.

Not a big deal anyway, just pointing this out.

> Now, since there's nothing published by the *driver* at the moment
> which can't be trivially atomically written, this scheme is overkill
> (sure, it means you could do a byte-at-a-time write to some 4-byte
> field, but why?).
>
> But perhaps it's overkill: no other bus has this feature, so we'd need a
> feature bit for them anyway in future if we create a device which needs
> such atomicity.
>
> Cheers,
> Rusty.


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 11:17 AM, Lukas Laukamp <lu...@laukamp.me> wrote:
> [earlier messages quoted in full; trimmed]
>
> Would it be possible to do something like this: create
> the LVM snapshot for the backup, read this LVM snapshot with the remote
> backup system via ssh tunnel and save the output of this to qcow2 files on
> the backup system? And in which format could the incremental backups be
> stored?

Since there is a WAN link it's important to use a compact image
representation before hitting the network. I would use qemu-img
convert -O qcow2 on the host and only transfer the qcow2 output.  The
qcow2 file does not contain zero regions and will therefore save a lot
of network bandwidth compared to accessing the LVM volume over the
WAN.
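
A hedged sketch of that weekly full backup over the WAN; the snapshot
size, volume and host names are illustrative:

  # snapshot the running guest so the copy is crash-consistent
  lvcreate --snapshot --name vm001-snap --size 2G /dev/vg/vm001

  # compact the image on the host, then push only the qcow2 output
  qemu-img convert -f raw -O qcow2 /dev/vg/vm001-snap vm001-full.qcow2
  scp vm001-full.qcow2 backup@backup-host:/backups/

  lvremove -f /dev/vg/vm001-snap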

If you are using rsync or another tool it's a different story.  You
could rsync the current LVM volume on the host over the last full
backup, it should avoid transferring image data which is already
present in the last full backup - the result is that you only transfer
changed data plus the rsync metadata.

Stefan


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

On 12.10.2012 12:11, Stefan Hajnoczi wrote:
> [earlier messages quoted in full; trimmed]


Hello Stefan,

I have not fully understood the rsync part. Creating a qcow2 on the
host and transferring it to the backup server gives me the weekly full
backup. So do you mean I could use rsync to read the LVM volume from
the host, compare its data with the qcow2 on the backup server, and
transfer only the differences to that file? Or does it work another
way?

Best Regards


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 12:16 PM, Lukas Laukamp <lu...@laukamp.me> wrote:
> [earlier messages quoted in full; trimmed]
>
> So do you mean I could use rsync to read the LVM from the host,
> compare the LVM data with the data in the qcow2 on the backup server and
> simply transfer the differences to the file? Or does it work on another way?

When using rsync you can skip qcow2.  Only two objects are needed:
1. The LVM volume on the host.
2. The last full backup on the backup client.

rsync compares #1 and #2 efficiently over the network and only
transfers data from #1 which has changed.

After rsync completes your full backup image is identical to the LVM
volume.  Next week you can use it as the last image to rsync
against.
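
A minimal sketch of that incremental sync, assuming an rsync build that
can read block-device contents (stock rsync skips device files; the
--copy-devices patch from the rsync-patches collection adds this), with
illustrative names throughout:

  # snapshot the volume so the source stays stable during the sync
  lvcreate --snapshot --name vm001-snap --size 2G /dev/vg/vm001

  # send only changed blocks over ssh into the last full backup
  rsync --inplace --copy-devices /dev/vg/vm001-snap \
      backup@backup-host:/backups/vm001.raw

  lvremove -f /dev/vg/vm001-snap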

Stefan


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

On 12.10.2012 12:47, Stefan Hajnoczi wrote:
> [earlier messages quoted in full; trimmed]
>
> After rsync completes your full backup image is identical to the LVM
> volume.  Next week you can use it as the last image to rsync
> against.


So I simply update the full backup, which is just a raw file that gets
mounted during the backup?


Best Regards


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 12:50 PM, Lukas Laukamp <lu...@laukamp.me> wrote:
> [earlier messages quoted in full; trimmed]
>
> So I simply update the full backup, which is simply a raw file which get
> mounted while the backup?

The image file does not need to be mounted.  Just rsync the raw image file.

Stefan


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

On 12.10.2012 13:36, Stefan Hajnoczi wrote:
> [earlier messages quoted in full; trimmed]
>
> The image file does not need to be mounted.  Just rsync the raw image
> file.


Ah, that's great. So the complete task would be:

1. Create a qcow2 from a snapshot of a running VM
2. Transfer the qcow2 to the backup node
3. For incremental backups, sync a daily LVM snapshot against the
image on the backup node via rsync

Is that right?

Best Regards


Re: [PATCH 0/3] virtio-net: inline header support

2012-10-12 Thread Cornelia Huck
On Fri, 12 Oct 2012 09:07:46 +1030
Rusty Russell <ru...@rustcorp.com.au> wrote:

> [earlier discussion quoted in full; trimmed]
>
> They can use a transport feature if they need to, of course.  But
> perhaps the timing with ccw will coincide with the fix, in which they
> don't need to, but it might be a bit late.
>
> Cornelia?

My virtio-ccw host code is still going through a bit of rework, so it
might well go in after the fix.

There's also the existing (non-spec'ed) s390-virtio transport. While it
will likely be deprecated sometime in the future, it should probably
get a feature bit for consistency's sake.



Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 1:51 PM, Lukas Laukamp <lu...@laukamp.me> wrote:
> [earlier messages quoted in full; trimmed]
>
> Ah, thats great so to have a complete task in mind:
>
> 1. Create a qcow2 of a snapshot of an running VM
> 2. Transfer the qcow2 to the backup node
> 3. For incremental backup sync a daily LVM snapshot with the image on the
> backup node via rsync

Yes.  Only make sure not to rsync the raw LVM image onto the qcow2
image - the backup client needs to have a raw image if you're syncing
directly against the raw LVM volume.
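
If a qcow2 copy is wanted anyway, it can be derived on the backup node
after each sync; a sketch with illustrative paths:

  # the rsync target stays raw; the qcow2 is a compact copy made from it
  qemu-img convert -f raw -O qcow2 /backups/vm001.raw /backups/vm001.qcow2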

Stefan

Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

On 12.10.2012 14:59, Stefan Hajnoczi wrote:
> [earlier messages quoted in full; trimmed]
>
> Yes.  Only make sure not to rsync the raw LVM image onto the qcow2
> image - the backup client needs to have a raw image if you're syncing
> directly against the raw LVM volume.

So I should use a raw file for the syncing and convert it on the backup
node to qcow2 to get a convenient file format?


Best Regards

Re: [PATCH] vhost-blk: Add vhost-blk support v2

2012-10-12 Thread Asias He
Hello Michael,

Thanks for the review!

On 10/11/2012 08:41 PM, Michael S. Tsirkin wrote:
 On Tue, Oct 09, 2012 at 04:05:18PM +0800, Asias He wrote:
 vhost-blk is an in-kernel virtio-blk device accelerator.

 Due to lack of a proper in-kernel AIO interface, this version converts
 guest I/O requests to bios and uses submit_bio() to submit I/O directly.
 So this version only supports raw block devices as the guest's disk
 image, e.g. /dev/sda, /dev/ram0. We can add file-based image support to
 vhost-blk once we have an in-kernel AIO interface. There is some work in
 progress on an in-kernel AIO interface from Dave Kleikamp and Zach Brown:

http://marc.info/?l=linux-fsdevel&m=133312234313122

 Performance evaluation:
 -
 1) LKVM
 Fio with libaio ioengine on Fusion IO device using kvm tool
 IOPS   Before   After   Improvement
 seq-read   107  121 +13.0%
 seq-write  130  179 +37.6%
 rnd-read   102  122 +19.6%
 rnd-write  125  159 +27.0%

 2) QEMU
 Fio with libaio ioengine on Fusion IO device using QEMU
 IOPS   Before   After   Improvement
 seq-read   76   123 +61.8%
 seq-write  139  173 +24.4%
 rnd-read   73   120 +64.3%
 rnd-write  75   156 +108.0%

 Userspace bits:
 -
 1) LKVM
 The latest vhost-blk userspace bits for kvm tool can be found here:
 g...@github.com:asias/linux-kvm.git blk.vhost-blk

 2) QEMU
 The latest vhost-blk userspace prototype for QEMU can be found here:
 g...@github.com:asias/qemu.git blk.vhost-blk

 Signed-off-by: Asias He as...@redhat.com
 ---
  drivers/vhost/Kconfig |   1 +
  drivers/vhost/Kconfig.blk |  10 +
  drivers/vhost/Makefile|   2 +
  drivers/vhost/blk.c   | 641 ++++++++++++++++++++++++++++++++++++++++++++++
  drivers/vhost/blk.h   |   8 +
  5 files changed, 662 insertions(+)
  create mode 100644 drivers/vhost/Kconfig.blk
  create mode 100644 drivers/vhost/blk.c
  create mode 100644 drivers/vhost/blk.h

 diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
 index 202bba6..acd8038 100644
 --- a/drivers/vhost/Kconfig
 +++ b/drivers/vhost/Kconfig
 @@ -11,4 +11,5 @@ config VHOST_NET
  
  if STAGING
  source "drivers/vhost/Kconfig.tcm"
 +source "drivers/vhost/Kconfig.blk"
  endif
 diff --git a/drivers/vhost/Kconfig.blk b/drivers/vhost/Kconfig.blk
 new file mode 100644
 index 000..ff8ab76
 --- /dev/null
 +++ b/drivers/vhost/Kconfig.blk
 @@ -0,0 +1,10 @@
 +config VHOST_BLK
 +	tristate "Host kernel accelerator for virtio blk (EXPERIMENTAL)"
 +	depends on BLOCK && EXPERIMENTAL && m
 +---help---
 +  This kernel module can be loaded in host kernel to accelerate
 +  guest block with virtio_blk. Not to be confused with virtio_blk
 +  module itself which needs to be loaded in guest kernel.
 +
 +  To compile this driver as a module, choose M here: the module will
 +  be called vhost_blk.
 diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
 index a27b053..1a8a4a5 100644
 --- a/drivers/vhost/Makefile
 +++ b/drivers/vhost/Makefile
 @@ -2,3 +2,5 @@ obj-$(CONFIG_VHOST_NET) += vhost_net.o
  vhost_net-y := vhost.o net.o
  
  obj-$(CONFIG_TCM_VHOST) += tcm_vhost.o
 +obj-$(CONFIG_VHOST_BLK) += vhost_blk.o
 +vhost_blk-y := blk.o
 diff --git a/drivers/vhost/blk.c b/drivers/vhost/blk.c
 new file mode 100644
 index 000..6b2445a
 --- /dev/null
 +++ b/drivers/vhost/blk.c
 @@ -0,0 +1,641 @@
 +/*
 + * Copyright (C) 2011 Taobao, Inc.
 + * Author: Liu Yuan tailai...@taobao.com
 + *
 + * Copyright (C) 2012 Red Hat, Inc.
 + * Author: Asias He as...@redhat.com
 + *
 + * This work is licensed under the terms of the GNU GPL, version 2.
 + *
 + * virtio-blk server in host kernel.
 + */
 +
 +#include <linux/miscdevice.h>
 +#include <linux/module.h>
 +#include <linux/vhost.h>
 +#include <linux/virtio_blk.h>
 +#include <linux/mutex.h>
 +#include <linux/file.h>
 +#include <linux/kthread.h>
 +#include <linux/blkdev.h>
 +
 +#include "vhost.c"
 +#include "vhost.h"
 +#include "blk.h"
 +
 +#define BLK_HDR 0
 
 What's this for, exactly? Please add a comment.


The block header is in the first and separate buffer.

 +
 +static DEFINE_IDA(vhost_blk_index_ida);
 +
 +enum {
 +VHOST_BLK_VQ_REQ = 0,
 +VHOST_BLK_VQ_MAX = 1,
 +};
 +
 +struct req_page_list {
 +struct page **pages;
 +int pages_nr;
 +};
 +
 +struct vhost_blk_req {
 +        struct llist_node llnode;
 +        struct req_page_list *pl;
 +        struct vhost_blk *blk;
 +
 +        struct iovec *iov;
 +        int iov_nr;
 +
 +        struct bio **bio;
 +        atomic_t bio_nr;
 +
 +        sector_t sector;
 +        int write;
 +        u16 head;
 +        long len;
 +
 +        u8 *status;
 
 Is this a userspace pointer? If yes it must be tagged as such.

Will fix.

 Please run a code checker - it will catch other bugs for you too.

Could you name one that you use?

 +};
 +
 +struct vhost_blk {
 +        struct task_struct *host_kick;
 +        struct vhost_blk_req *reqs;
 +        struct vhost_virtqueue vq;
 +

Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 3:02 PM, Lukas Laukamp lu...@laukamp.me wrote:
 On 12.10.2012 14:59, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 1:51 PM, Lukas Laukamp lu...@laukamp.me wrote:

 On 12.10.2012 13:36, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 12:50 PM, Lukas Laukamp lu...@laukamp.me
 wrote:

 On 12.10.2012 12:47, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 12:16 PM, Lukas Laukamp lu...@laukamp.me
 wrote:

 On 12.10.2012 12:11, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 11:17 AM, Lukas Laukamp lu...@laukamp.me
 wrote:

 On 12.10.2012 10:58, Lukas Laukamp wrote:

 On 12.10.2012 10:42, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 08:52:32AM +0200, Lukas Laukamp wrote:

 I have a simple user question. I have a few LVM based KVM guests and
 want to back them up to files. The simple and nasty way would be to
 create a complete output file with dd, which wastes very much space.
 So I would like to create a backup of the LVM volume to a file which
 only takes up the space that is actually used on the LVM volume. It
 would be great if the output file were something like a qcow2 file,
 which could also simply be started with KVM.

 If the VM is not running you can use qemu-img convert:

    qemu-img convert -f raw -O qcow2 /dev/vg/vm001 vm001-backup.qcow2

 Note that cp(1) tries to make the destination file sparse (see
 the
 --sparse option in the man page).  So you don't need to use
 qcow2,
 you
 can use cp(1) to copy the LVM volume to a raw file.  It will not
 use
 disk space for zero regions.

 If the VM is running you need to use LVM snapshots or stop the VM
 temporarily so a crash-consistent backup can be taken.

 Stefan


 Hello Stefan,

 thanks for the fast reply. I will test this later. In my case now it
 would be an offline backup. For the online backup I am thinking about
 a separate system which makes incremental backups every day and a
 full backup once a week. The main problem is that the systems are in
 a WAN and I need encryption between them. Would it be possible to do
 something like this: create the LVM snapshot for the backup, read
 this LVM snapshot from the remote backup system via an ssh tunnel,
 and save the output to qcow2 files on the backup system? And in which
 format could the incremental backups be stored?

 Since there is a WAN link it's important to use a compact image
 representation before hitting the network. I would use qemu-img
 convert -O qcow2 on the host and only transfer the qcow2 output.
 The
 qcow2 file does not contain zero regions and will therefore save a
 lot
 of network bandwidth compared to accessing the LVM volume over the
 WAN.

 If you are using rsync or another tool it's a different story.  You
 could rsync the current LVM volume on the host over the last full
 backup, it should avoid transferring image data which is already
 present in the last full backup - the result is that you only
 transfer
 changed data plus the rsync metadata.

 Stefan


 Hello Stefan,

 I haven't fully understood the rsync part. Creating a qcow2 on the
 host and transferring it to the backup server will produce the weekly
 full backup. Do you mean I could use rsync to read the LVM volume on
 the host, compare its data with the data in the qcow2 on the backup
 server, and simply transfer the differences to that file? Or does it
 work another way?

 When using rsync you can skip qcow2.  Only two objects are needed:
 1. The LVM volume on the host.
 2. The last full backup on the backup client.

 rsync compares #1 and #2 efficiently over the network and only
 transfers data from #1 which has changed.

 After rsync completes your full backup image is identical to the LVM
 volume.  Next week you can use it as the last image to rsync
 against.

 Stefan


 So I simply update the full backup, which is simply a raw file that
 gets mounted during the backup?

 The image file does not need to be mounted.  Just rsync the raw image
 file.

 Stefan


 Ah, that's great. So, to have the complete task in mind:

 1. Create a qcow2 from a snapshot of a running VM
 2. Transfer the qcow2 to the backup node
 3. For incremental backups, sync a daily LVM snapshot with the image on the
 backup node via rsync

 Yes.  Only make sure not to rsync the raw LVM image onto the qcow2
 image - the backup client needs to have a raw image if you're syncing
 directly against the raw LVM volume.
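
A minimal sketch of the resulting weekly/daily cycle, assuming a volume
group "vg", a guest volume "vm001" and a backup host reachable as
"backup" over ssh (all names are examples, not taken from this thread):

    # Weekly full backup: snapshot, convert to qcow2, ship over the WAN.
    lvcreate --snapshot --size 1G --name vm001-snap /dev/vg/vm001
    qemu-img convert -f raw -O qcow2 /dev/vg/vm001-snap /tmp/vm001-full.qcow2
    scp /tmp/vm001-full.qcow2 backup:/srv/backups/vm001/
    lvremove -f /dev/vg/vm001-snap

For the daily rsync refresh, keep the copy on the backup host as a raw
file rather than qcow2, as noted above. One caveat: a stock rsync copies
a block device as a device node rather than reading its contents, so
rsyncing straight from /dev/vg/vm001-snap needs an rsync built with the
--copy-devices patch, or a similar workaround.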

 

Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Javier Guerra Giraldez
On Fri, Oct 12, 2012 at 9:25 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
 I would leave them raw as long as they are sparse (zero regions do not
 take up space).  If you need to copy them you can either convert to
 qcow2 or use tools that preserve sparseness (BTW compression tools are
 good at this).

note that free blocks previously used by deleted files won't be
sparse, won't be zero and won't be much reduced by compression.

i'd say the usual advice stays:

A: if you run any non-trivial application on that VM, then use a real
network backup tool 'from the inside' of the VM
B: if real point-in-time application-cache-storage consistency is not
important, then you can:

- make a read-write LVM snapshot
- mount that and fsck.  (it will appear as not-cleanly unmounted)
- backup the files. (i like rsync, especially if you have an
uncompressed previous backup)
- umount and destroy the snapshot
- optionally compress the backup


but seriously consider option A before.  especially important if you
run any DB on that VM

-- 
Javier
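
A sketch of option B, assuming the LV holds a single ext4 filesystem
with no partition table inside it (all names are examples):

    lvcreate --snapshot --size 1G --name vm001-snap /dev/vg/vm001
    e2fsck -p /dev/vg/vm001-snap       # appears not-cleanly unmounted
    mkdir -p /mnt/vm001-snap
    mount -o ro /dev/vg/vm001-snap /mnt/vm001-snap
    rsync -aH /mnt/vm001-snap/ backup:/srv/backups/vm001-files/
    umount /mnt/vm001-snap
    lvremove -f /dev/vg/vm001-snap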


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

On 12.10.2012 16:36, Javier Guerra Giraldez wrote:



I already thought about such situations. DB systems are a real
problem. I know that some data is better backed up directly from the
inside, but a VM for a web or mail server, for example, I want to back
up completely so that these services can get back to work very quickly
if there are problems. So it's a complex problem, I think, and not
every machine can be backed up the same way.

I think that I must start with the hardware and software configuration
of the backup node.


Best Regards


Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Lukas Laukamp

I tested the qemu-img convert command now, but it does not do what I
want. I have a VM with a 5GB disk, and only about 1GB of it is
allocated with data. When I run the convert command the output is a
5GB qcow2 disk. What do I have to do to get a qcow2 file with only the
allocated space/data from the LVM volume? I also tried the -c option
of qemu-img convert, but the result was nearly the same.


Best Regards


[PATCH v7 0/1] target-i386: Add missing kvm bits.

2012-10-12 Thread Don Slutz
This was part of [PATCH v6 00/16] Allow changing of Hypervisor CPUIDs.

Since it is no longer in use by any of the patches in v7, I have split it off.

Don Slutz (1):
  target-i386: Add missing kvm bits.

 target-i386/cpu.c |   12 ++++++++----
 1 files changed, 8 insertions(+), 4 deletions(-)



Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 8:14 PM, Lukas Laukamp lu...@laukamp.me wrote:

Please show the exact command-lines you are using and the
"qemu-img info <filename>" output afterwards.

Stefan
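
For reference: on a 5GB volume holding ~1GB of data, "qemu-img info" on
the converted file would normally report the full 5.0G only as the
virtual size, while the "disk size" line shows the space actually
consumed.  Illustrative output (names and numbers are examples, not
taken from this thread):

    qemu-img info vm001-backup.qcow2
    # image: vm001-backup.qcow2
    # file format: qcow2
    # virtual size: 5.0G (5368709120 bytes)
    # disk size: 1.1G
    ls -lhs vm001-backup.qcow2   # first column shows allocated blocks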

Re: [User Question] How to create a backup of an LVM based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 4:36 PM, Javier Guerra Giraldez
jav...@guerrag.com wrote:

Valid points.  People seem to like crash-consistent image file backups
because they are convenient.  They may not capture all application
state though :(.

Regarding option B, your idea is better than what I've been
suggesting.  By mounting the image on the host and rsyncing the
mounted files you avoid syncing dirty blocks in the image file (e.g.
deleted data in the guest file system).

Stefan


[PATCH v7 1/1] target-i386: Add missing kvm bits.

2012-10-12 Thread Don Slutz
Currently -cpu host,-kvmclock,-kvm_nopiodelay,-kvm_mmu does not
turn off all bits in CPUID 0x40000001 EAX.

The missing ones are KVM_FEATURE_STEAL_TIME and
KVM_FEATURE_CLOCKSOURCE_STABLE_BIT.

This adds the names kvm_steal_time and kvm_clock_stable for these
bits.
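
With the patch applied, an invocation along these lines should be able
to clear every named bit in that leaf (an untested sketch; the flag
list just strings together the names defined above):

    qemu-system-x86_64 -machine pc,accel=kvm -cpu host,-kvmclock,-kvm_nopiodelay,-kvm_mmu,-kvm_asyncpf,-kvm_steal_time,-kvm_pv_eoi,-kvm_clock_stable [...]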

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |   12 ++++++++----
 1 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index f3708e6..e9c760d 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -87,10 +87,14 @@ static const char *ext3_feature_name[] = {
 };
 
 static const char *kvm_feature_name[] = {
-    "kvmclock", "kvm_nopiodelay", "kvm_mmu", "kvmclock", "kvm_asyncpf", NULL, "kvm_pv_eoi", NULL,
-    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
-    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
-    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+    "kvmclock", "kvm_nopiodelay", "kvm_mmu", "kvmclock",
+    "kvm_asyncpf", "kvm_steal_time", "kvm_pv_eoi", NULL,
+    NULL, NULL, NULL, NULL,
+    NULL, NULL, NULL, NULL,
+    NULL, NULL, NULL, NULL,
+    NULL, NULL, NULL, NULL,
+    "kvm_clock_stable", NULL, NULL, NULL,
+    NULL, NULL, NULL, NULL,
 };
 
 static const char *svm_feature_name[] = {
-- 
1.7.1




[PATCH v7 00/17] target-i386: Add way to expose VMWare CPUID

2012-10-12 Thread Don Slutz
Also known as Paravirtualization CPUIDs.

This is primarily done so that the guest will think it is running
under vmware when hypervisor-vendor=vmware is specified as a
property of a cpu.

Patches 1 to 3 define new cpu properties.
Patches 4 to 6 Add QOM access to the new properties.
Patches 7 to 9 Add setting of these when cpu features hv_spinlocks,
  hv_relaxed, or hv_vapic are specified.
Patches 10 to 12 Change kvm to use these.
Patch 13 Add VMware timing info to kvm.
Patch 14 Makes it easier to use hypervisor-vendor=vmware.
Patches 15 to 17 Change tcg to use the new properties.
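
As a usage sketch (the command line is illustrative; only the
hypervisor-vendor property itself comes from this series):

    qemu-system-x86_64 -machine pc,accel=kvm -cpu qemu64,hypervisor-vendor=vmware [...]
    # A Linux guest should then report the detected hypervisor at boot,
    # e.g. "Hypervisor detected: VMware" in dmesg.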

This depends on:

http://lists.gnu.org/archive/html/qemu-devel/2012-09/msg01400.html

As far as I know it is #4. It depends on (1) and (2) and (3).

This change is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458

Changes from v6 to v7:
  Subject changed from "Allow changing of Hypervisor CPUIDs." to
  "target-i386: Add way to expose VMWare CPUID".
  Split out 01/16 "target-i386: Add missing kvm bits."
    It is no longer related to this patch set.  Will be posted as a
    separate patch.
Marcelo Tosatti:
  Better commit messages.
  Reorder patches.


Changes from v5 to v6:
  Split out 01/17: target-i386: Allow tsc-frequency to be larger then 2.147G
It has been accepted as a trivial patch:
http://lists.gnu.org/archive/html/qemu-devel/2012-09/msg03959.html
Blue Swirl:
  Fix 2 checkpatch.pl WARNING: line over 80 characters.

Changes from v4 to v5:
  Undo kvm_clock2 change.
  Add cpuid_hv_level_set; cpuid_hv_level == 0 is now valid.
  Add cpuid_hv_vendor_set; the null string is now valid.
  Handle kvm and cpuid_hv_level == 0.
  hypervisor-vendor=kvm,hypervisor-level=0 and 
hypervisor-level=0,hypervisor-vendor=kvm
now do the same thing.

Changes from v3 to v4:
  Added CPUID_HV_LEVEL_HYPERV, CPUID_HV_LEVEL_KVM.
  Added CPUID_HV_VENDOR_HYPERV.
  Added hyperv as known hypervisor-vendor.
  Allow hypervisor-level to be 0.

Changes from v2 to v3:
  Clean post to qemu-devel.

Changes from v1 to v2:

1) Added 1/4 from 
http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg05153.html

   Because Fred is changing jobs and so will not be pushing to get
   this in. It needed to be rebased, And I needed it to complete the
   testing of this change.

2) Added 2/4 because of the re-work I needed a way to clear all KVM bits,

3) The rework of v1.  Make it fit into the object model re-work of cpu.c for 
x86.

4) Added 3/4 -- The split out of the code that is not needed for accel=kvm.

Changes from v2 to v3:

Marcelo Tosatti:
  Its one big patch, better split in logically correlated patches
  (with better changelog). This would help reviewers.

So split 3 and 4 into 3 to 17.  More info in change log.
No code change.

Don Slutz (17):
  target-i386: Add Hypervisor level.
  target-i386: Add Hypervisor vendor.
  target-i386: Add Hypervisor features.
  target-i386: Add cpu object access routines for Hypervisor level.
  target-i386: Add cpu object access routines for Hypervisor vendor.
  target-i386: Add cpu object access routines for Hypervisor features.
  target-i386: Add x86_set_hyperv.
  target-i386: Use x86_set_hyperv to set hypervisor vendor.
  target-i386: Use x86_set_hyperv to set hypervisor features.
  target-i386: Use Hypervisor level in -machine pc,accel=kvm.
  target-i386: Use Hypervisor vendor in -machine pc,accel=kvm.
  target-i386: Use Hypervisor features in -machine pc,accel=kvm.
  target-i386: Add VMWare CPUID Timing information in -machine
pc,accel=kvm.
  target-i386: Add vmware as a known name to Hypervisor vendor.
  target-i386: Use Hypervisor level in -machine pc,accel=tcg.
  target-i386: Use Hypervisor vendor in -machine pc,accel=tcg.
  target-i386: target-i386: Add VMWare CPUID Timing information in
-machine pc,accel=tcg

 target-i386/cpu.c |  205 +
 target-i386/cpu.h |   29 
 target-i386/kvm.c |   69 +++
 3 files changed, 290 insertions(+), 13 deletions(-)



[PATCH v7 01/17] target-i386: Add Hypervisor level.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as Paravirtualization level or maximum cpuid function present in
this leaf.
This is the EAX value for 0x40000000.

QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).
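
For reference, this leaf can be inspected from inside a guest with the
cpuid(1) utility, where available.  Illustrative raw output on a KVM
guest (eax=0 is the current QEMU default; ebx/ecx/edx spell out
"KVMKVMKVM\0\0\0"):

    cpuid -1 -l 0x40000000 -r
    # 0x40000000 0x00: eax=0x00000000 ebx=0x4b4d564b ecx=0x564b4d56 edx=0x0000004d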

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.h |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 5265c5a..1899f69 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -777,11 +777,14 @@ typedef struct CPUX86State {
 uint32_t cpuid_ext3_features;
 uint32_t cpuid_apic_id;
 bool cpuid_vendor_override;
+bool cpuid_hv_level_set;
 /* Store the results of Centaur's CPUID instructions */
 uint32_t cpuid_xlevel2;
 uint32_t cpuid_ext4_features;
 /* Flags from CPUID[EAX=7,ECX=0].EBX */
 uint32_t cpuid_7_0_ebx;
+/* Hypervisor CPUIDs */
+uint32_t cpuid_hv_level;
 
 /* MTRRs */
 uint64_t mtrr_fixed[11];
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 02/17] target-i386: Add Hypervisor vendor.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as Paravirtualization vendor.
This is the EBX, ECX, and EDX data for 0x40000000.

QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.h |4 
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 1899f69..e76ddc0 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -778,6 +778,7 @@ typedef struct CPUX86State {
 uint32_t cpuid_apic_id;
 bool cpuid_vendor_override;
 bool cpuid_hv_level_set;
+bool cpuid_hv_vendor_set;
 /* Store the results of Centaur's CPUID instructions */
 uint32_t cpuid_xlevel2;
 uint32_t cpuid_ext4_features;
@@ -785,6 +786,9 @@ typedef struct CPUX86State {
 uint32_t cpuid_7_0_ebx;
 /* Hypervisor CPUIDs */
 uint32_t cpuid_hv_level;
+uint32_t cpuid_hv_vendor1;
+uint32_t cpuid_hv_vendor2;
+uint32_t cpuid_hv_vendor3;
 
 /* MTRRs */
 uint64_t mtrr_fixed[11];
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 03/17] target-i386: Add Hypervisor features.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as kvm features or Hypervisor vendor-neutral interface
identification.
This is the EAX value for 0x40000001.

QEMU knows this is KVM_CPUID_FEATURES (0x40000001) in some builds.

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.h |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index e76ddc0..fbc8f66 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -779,6 +779,7 @@ typedef struct CPUX86State {
 bool cpuid_vendor_override;
 bool cpuid_hv_level_set;
 bool cpuid_hv_vendor_set;
+bool cpuid_hv_features_set;
 /* Store the results of Centaur's CPUID instructions */
 uint32_t cpuid_xlevel2;
 uint32_t cpuid_ext4_features;
@@ -789,6 +790,8 @@ typedef struct CPUX86State {
 uint32_t cpuid_hv_vendor1;
 uint32_t cpuid_hv_vendor2;
 uint32_t cpuid_hv_vendor3;
+/* Hypervisor features */
+uint32_t cpuid_hv_features;
 
 /* MTRRs */
 uint64_t mtrr_fixed[11];
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 04/17] target-i386: Add cpu object access routines for Hypervisor level.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

These are modeled after x86_cpuid_get_xlevel and x86_cpuid_set_xlevel.

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |   29 +
 1 files changed, 29 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index b8f431a..c4bd6cf 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1162,6 +1162,32 @@ static void x86_cpuid_set_tsc_freq(Object *obj, Visitor *v, void *opaque,
     cpu->env.tsc_khz = value / 1000;
 }
 
+static void x86_cpuid_get_hv_level(Object *obj, Visitor *v, void *opaque,
+                                   const char *name, Error **errp)
+{
+    X86CPU *cpu = X86_CPU(obj);
+
+    visit_type_uint32(v, &cpu->env.cpuid_hv_level, name, errp);
+}
+
+static void x86_cpuid_set_hv_level(Object *obj, Visitor *v, void *opaque,
+                                   const char *name, Error **errp)
+{
+    X86CPU *cpu = X86_CPU(obj);
+    uint32_t value;
+
+    visit_type_uint32(v, &value, name, errp);
+    if (error_is_set(errp)) {
+        return;
+    }
+
+    if (value != 0 && value < 0x40000000) {
+        value += 0x40000000;
+    }
+    cpu->env.cpuid_hv_level = value;
+    cpu->env.cpuid_hv_level_set = true;
+}
+
 #if !defined(CONFIG_USER_ONLY)
 static void x86_get_hv_spinlocks(Object *obj, Visitor *v, void *opaque,
  const char *name, Error **errp)
@@ -2053,6 +2079,9 @@ static void x86_cpu_initfn(Object *obj)
     object_property_add(obj, "enforce", "bool",
                         x86_cpuid_get_enforce,
                         x86_cpuid_set_enforce, NULL, NULL, NULL);
+    object_property_add(obj, "hypervisor-level", "int",
+                        x86_cpuid_get_hv_level,
+                        x86_cpuid_set_hv_level, NULL, NULL, NULL);
 #if !defined(CONFIG_USER_ONLY)
     object_property_add(obj, "hv_spinlocks", "int",
                         x86_get_hv_spinlocks,
-- 
1.7.1



[PATCH v7 05/17] target-i386: Add cpu object access routines for Hypervisor vendor.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

These are modeled after x86_cpuid_set_vendor and x86_cpuid_get_vendor.

Since kvm's vendor is shorter, the test for correct size is removed and zero 
padding is added.

See http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html for 
definition of kvm's vendor.
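
As an aside, the three registers are just the 12-byte vendor string cut
into little-endian 32-bit words; on a little-endian host this can be
checked with od (illustrative):

    printf 'KVMKVMKVM\000\000\000' | od -An -tx4
    # 4b4d564b 564b4d56 0000004d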

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |   44 
 1 files changed, 44 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index c4bd6cf..a87527c 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1188,6 +1188,47 @@ static void x86_cpuid_set_hv_level(Object *obj, Visitor *v, void *opaque,
     cpu->env.cpuid_hv_level_set = true;
 }
 
+static char *x86_cpuid_get_hv_vendor(Object *obj, Error **errp)
+{
+    X86CPU *cpu = X86_CPU(obj);
+    CPUX86State *env = &cpu->env;
+    char *value;
+    int i;
+
+    value = (char *)g_malloc(CPUID_VENDOR_SZ + 1);
+    for (i = 0; i < 4; i++) {
+        value[i + 0] = env->cpuid_hv_vendor1 >> (8 * i);
+        value[i + 4] = env->cpuid_hv_vendor2 >> (8 * i);
+        value[i + 8] = env->cpuid_hv_vendor3 >> (8 * i);
+    }
+    value[CPUID_VENDOR_SZ] = '\0';
+
+    return value;
+}
+
+static void x86_cpuid_set_hv_vendor(Object *obj, const char *value,
+                                    Error **errp)
+{
+    X86CPU *cpu = X86_CPU(obj);
+    CPUX86State *env = &cpu->env;
+    int i;
+    char adj_value[CPUID_VENDOR_SZ + 1];
+
+    memset(adj_value, 0, sizeof(adj_value));
+
+    pstrcpy(adj_value, sizeof(adj_value), value);
+
+    env->cpuid_hv_vendor1 = 0;
+    env->cpuid_hv_vendor2 = 0;
+    env->cpuid_hv_vendor3 = 0;
+    for (i = 0; i < 4; i++) {
+        env->cpuid_hv_vendor1 |= ((uint8_t)adj_value[i + 0]) << (8 * i);
+        env->cpuid_hv_vendor2 |= ((uint8_t)adj_value[i + 4]) << (8 * i);
+        env->cpuid_hv_vendor3 |= ((uint8_t)adj_value[i + 8]) << (8 * i);
+    }
+    env->cpuid_hv_vendor_set = true;
+}
+
 #if !defined(CONFIG_USER_ONLY)
 static void x86_get_hv_spinlocks(Object *obj, Visitor *v, void *opaque,
  const char *name, Error **errp)
@@ -2082,6 +2123,9 @@ static void x86_cpu_initfn(Object *obj)
     object_property_add(obj, "hypervisor-level", "int",
                         x86_cpuid_get_hv_level,
                         x86_cpuid_set_hv_level, NULL, NULL, NULL);
+    object_property_add_str(obj, "hypervisor-vendor",
+                            x86_cpuid_get_hv_vendor,
+                            x86_cpuid_set_hv_vendor, NULL);
 #if !defined(CONFIG_USER_ONLY)
     object_property_add(obj, "hv_spinlocks", "int",
                         x86_get_hv_spinlocks,
-- 
1.7.1



[PATCH v7 06/17] target-i386: Add cpu object access routines for Hypervisor features.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as kvm features or Hypervisor vendor-neutral interface
identification.
This is just the EAX value for 0x40000001.

QEMU knows this is KVM_CPUID_FEATURES (0x40000001) in some builds.

When exposing VMWare CPUID this needs to be set to zero.

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |   26 ++
 1 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index a87527c..b335a1e 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1229,6 +1229,29 @@ static void x86_cpuid_set_hv_vendor(Object *obj, const char *value,
     env->cpuid_hv_vendor_set = true;
 }
 
+static void x86_cpuid_get_hv_features(Object *obj, Visitor *v, void *opaque,
+                                      const char *name, Error **errp)
+{
+    X86CPU *cpu = X86_CPU(obj);
+
+    visit_type_uint32(v, &cpu->env.cpuid_hv_features, name, errp);
+}
+
+static void x86_cpuid_set_hv_features(Object *obj, Visitor *v, void *opaque,
+                                      const char *name, Error **errp)
+{
+    X86CPU *cpu = X86_CPU(obj);
+    uint32_t value;
+
+    visit_type_uint32(v, &value, name, errp);
+    if (error_is_set(errp)) {
+        return;
+    }
+
+    cpu->env.cpuid_hv_features = value;
+    cpu->env.cpuid_hv_features_set = true;
+}
+
 #if !defined(CONFIG_USER_ONLY)
 static void x86_get_hv_spinlocks(Object *obj, Visitor *v, void *opaque,
  const char *name, Error **errp)
@@ -2126,6 +2149,9 @@ static void x86_cpu_initfn(Object *obj)
     object_property_add_str(obj, "hypervisor-vendor",
                             x86_cpuid_get_hv_vendor,
                             x86_cpuid_set_hv_vendor, NULL);
+    object_property_add(obj, "hypervisor-features", "int",
+                        x86_cpuid_get_hv_features,
+                        x86_cpuid_set_hv_features, NULL, NULL, NULL);
 #if !defined(CONFIG_USER_ONLY)
     object_property_add(obj, "hv_spinlocks", "int",
                         x86_get_hv_spinlocks,
-- 
1.7.1



[PATCH v7 07/17] target-i386: Add x86_set_hyperv.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

At this stage it is used to set the cpu object's hypervisor level to
the default for Microsoft's Hypervisor.

Also known as Paravirtualization level or maximum cpuid function
present in this leaf.  This is the EAX value for 0x40000000.

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

which says:
Leaf 0x40000000 (at very top of table):

EAX

The maximum input value for hypervisor CPUID information. For Microsoft
hypervisors, this value will be at least 0x40000005. The vendor ID
signature should be used only for reporting and diagnostic purposes.

QEMU already uses HYPERV_CPUID_MIN in accel=kvm mode.  However this
HYPERV_CPUID_MIN is not used and a copy
(CPUID_HV_LEVEL_HYPERV_CPUID_MIN) is added so that the resulting
CPUID bits exposed to the guest should be a function of the
machine-type and command-line/config parameters, and nothing else
(otherwise the CPUID bits would change under the guest's feet when
live-migrating).

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |9 +
 target-i386/cpu.h |4 
 2 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index b335a1e..283ac01 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1253,6 +1253,12 @@ static void x86_cpuid_set_hv_features(Object *obj, Visitor *v, void *opaque,
 }
 
 #if !defined(CONFIG_USER_ONLY)
+static void x86_set_hyperv(Object *obj, Error **errp)
+{
+    object_property_set_int(obj, CPUID_HV_LEVEL_HYPERV_CPUID_MIN,
+                            "hypervisor-level", errp);
+}
+
 static void x86_get_hv_spinlocks(Object *obj, Visitor *v, void *opaque,
  const char *name, Error **errp)
 {
@@ -1275,6 +1281,7 @@ static void x86_set_hv_spinlocks(Object *obj, Visitor *v, void *opaque,
         return;
     }
     hyperv_set_spinlock_retries(value);
+    x86_set_hyperv(obj, errp);
 }
 
 static void x86_get_hv_relaxed(Object *obj, Visitor *v, void *opaque,
@@ -1295,6 +1302,7 @@ static void x86_set_hv_relaxed(Object *obj, Visitor *v, void *opaque,
         return;
     }
     hyperv_enable_relaxed_timing(value);
+    x86_set_hyperv(obj, errp);
 }
 
 static void x86_get_hv_vapic(Object *obj, Visitor *v, void *opaque,
@@ -1315,6 +1323,7 @@ static void x86_set_hv_vapic(Object *obj, Visitor *v, void *opaque,
         return;
     }
     hyperv_enable_vapic_recommended(value);
+    x86_set_hyperv(obj, errp);
 }
 #endif
 
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index fbc8f66..cd4e83c 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -488,6 +488,10 @@
 
 #define CPUID_VENDOR_VIA   "CentaurHauls"
 
+/* The maximum input value for hypervisor CPUID information for
+ * Microsoft hypervisors.  Is related to HYPERV_CPUID_MIN. */
+#define CPUID_HV_LEVEL_HYPERV_CPUID_MIN  0x40000005
+
 #define CPUID_MWAIT_IBE (1 << 1) /* Interrupts can exit capability */
 #define CPUID_MWAIT_EMX (1 << 0) /* enumeration supported */
 
-- 
1.7.1



[PATCH v7 08/17] target-i386: Use x86_set_hyperv to set hypervisor vendor.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

At this stage it is used to set the cpu object's hypervisor vendor
to the default for Microsoft's Hypervisor (Microsoft Hv).

Also known as Paravirtualization vendor.
This is the EBX, ECX, and EDX data for 0x40000000.

QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |2 ++
 target-i386/cpu.h |1 +
 2 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 283ac01..958be81 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1257,6 +1257,8 @@ static void x86_set_hyperv(Object *obj, Error **errp)
 {
     object_property_set_int(obj, CPUID_HV_LEVEL_HYPERV_CPUID_MIN,
                             "hypervisor-level", errp);
+    object_property_set_str(obj, CPUID_HV_VENDOR_HYPERV,
+                            "hypervisor-vendor", errp);
 }
 
 static void x86_get_hv_spinlocks(Object *obj, Visitor *v, void *opaque,
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index cd4e83c..f2045d6 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -491,6 +491,7 @@
 /* The maximum input value for hypervisor CPUID information for
  * Microsoft hypervisors.  Is related to HYPERV_CPUID_MIN. */
 #define CPUID_HV_LEVEL_HYPERV_CPUID_MIN  0x40000005
+#define CPUID_HV_VENDOR_HYPERV "Microsoft Hv"
 
 #define CPUID_MWAIT_IBE (1 << 1) /* Interrupts can exit capability */
 #define CPUID_MWAIT_EMX (1 << 0) /* enumeration supported */
-- 
1.7.1



[PATCH v7 09/17] target-i386: Use x86_set_hyperv to set hypervisor features.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

At this stage it is used to set the cpu object's hypervisor features
to the default for Microsoft's Hypervisor (Hv#1).

Also known as kvm features or Hypervisor vendor-neutral interface
identification.
This is the EAX value for 0x40000001.

QEMU knows this is KVM_CPUID_FEATURES (0x40000001) in some builds.

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |2 ++
 target-i386/cpu.h |1 +
 2 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 958be81..f058add 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1259,6 +1259,8 @@ static void x86_set_hyperv(Object *obj, Error **errp)
                             "hypervisor-level", errp);
     object_property_set_str(obj, CPUID_HV_VENDOR_HYPERV,
                             "hypervisor-vendor", errp);
+    object_property_set_int(obj, CPUID_HV_FEATURES_HYPERV,
+                            "hypervisor-features", errp);
 }
 
 static void x86_get_hv_spinlocks(Object *obj, Visitor *v, void *opaque,
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index f2045d6..9a34c7b 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -492,6 +492,7 @@
  * Microsoft hypervisors.  Is related to HYPERV_CPUID_MIN. */
 #define CPUID_HV_LEVEL_HYPERV_CPUID_MIN  0x40000005
 #define CPUID_HV_VENDOR_HYPERV "Microsoft Hv"
+#define CPUID_HV_FEATURES_HYPERV 0x31237648 /* Hv#1 */
 
 #define CPUID_MWAIT_IBE (1 << 1) /* Interrupts can exit capability */
 #define CPUID_MWAIT_EMX (1 << 0) /* enumeration supported */
-- 
1.7.1



[PATCH v7 10/17] target-i386: Use Hypervisor level in -machine pc,accel=kvm.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as Paravirtualization level.

QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/kvm.c |8 ++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 5b18383..30963e1 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -392,10 +392,14 @@ int kvm_arch_init_vcpu(CPUX86State *env)
     c->function = KVM_CPUID_SIGNATURE;
     if (!hyperv_enabled()) {
         memcpy(signature, "KVMKVMKVM\0\0\0", 12);
-        c->eax = 0;
+        if (!env->cpuid_hv_level_set) {
+            c->eax = 0;
+        } else {
+            c->eax = env->cpuid_hv_level;
+        }
     } else {
         memcpy(signature, "Microsoft Hv", 12);
-        c->eax = HYPERV_CPUID_MIN;
+        c->eax = env->cpuid_hv_level;
     }
     c->ebx = signature[0];
     c->ecx = signature[1];
-- 
1.7.1



[PATCH v7 11/17] target-i386: Use Hypervisor vendor in -machine pc,accel=kvm.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as Paravirtualization vendor.
This is the EBX, ECX, and EDX data for 0x40000000.

QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).

If a hypervisor vendor is set then add kvm's
signature at KVM_CPUID_SIGNATURE_NEXT (0x40000100).

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/kvm.c |   26 ++
 1 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 30963e1..513356d 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -390,20 +390,21 @@ int kvm_arch_init_vcpu(CPUX86State *env)
     c = &cpuid_data.entries[cpuid_i++];
     memset(c, 0, sizeof(*c));
     c->function = KVM_CPUID_SIGNATURE;
-    if (!hyperv_enabled()) {
-        memcpy(signature, "KVMKVMKVM\0\0\0", 12);
-        if (!env->cpuid_hv_level_set) {
-            c->eax = 0;
-        } else {
-            c->eax = env->cpuid_hv_level;
-        }
+    if (!env->cpuid_hv_level_set) {
+        c->eax = 0;
     } else {
-        memcpy(signature, "Microsoft Hv", 12);
         c->eax = env->cpuid_hv_level;
     }
-    c->ebx = signature[0];
-    c->ecx = signature[1];
-    c->edx = signature[2];
+    if (!env->cpuid_hv_vendor_set) {
+        memcpy(signature, "KVMKVMKVM\0\0\0", 12);
+        c->ebx = signature[0];
+        c->ecx = signature[1];
+        c->edx = signature[2];
+    } else {
+        c->ebx = env->cpuid_hv_vendor1;
+        c->ecx = env->cpuid_hv_vendor2;
+        c->edx = env->cpuid_hv_vendor3;
+    }
 
     c = &cpuid_data.entries[cpuid_i++];
     memset(c, 0, sizeof(*c));
@@ -448,7 +449,8 @@ int kvm_arch_init_vcpu(CPUX86State *env)
         c->function = HYPERV_CPUID_IMPLEMENT_LIMITS;
         c->eax = 0x40;
         c->ebx = 0x40;
-
+    }
+    if (env->cpuid_hv_vendor_set) {
         c = &cpuid_data.entries[cpuid_i++];
         memset(c, 0, sizeof(*c));
         c->function = KVM_CPUID_SIGNATURE_NEXT;
-- 
1.7.1



[PATCH v7 12/17] target-i386: Use Hypervisor features in -machine pc,accel=kvm.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

QEMU knows this as KVM_CPUID_FEATURES (0x40000001) in some builds.

If hypervisor features are set, then pass the adjusted
cpuid_kvm_features as EAX in leaf 0x40000101.

This is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/kvm.c |   27 +--
 1 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 513356d..b61027f 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -409,13 +409,14 @@ int kvm_arch_init_vcpu(CPUX86State *env)
     c = &cpuid_data.entries[cpuid_i++];
     memset(c, 0, sizeof(*c));
     c->function = KVM_CPUID_FEATURES;
-    c->eax = env->cpuid_kvm_features &
-        kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0, R_EAX);
+    if (!env->cpuid_hv_features_set) {
+        c->eax = env->cpuid_kvm_features &
+            kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0, R_EAX);
+    } else {
+        c->eax = env->cpuid_hv_features;
+    }
 
     if (hyperv_enabled()) {
-        memcpy(signature, "Hv#1\0\0\0\0\0\0\0\0", 12);
-        c->eax = signature[0];
-
         c = &cpuid_data.entries[cpuid_i++];
         memset(c, 0, sizeof(*c));
         c->function = HYPERV_CPUID_VERSION;
@@ -455,10 +456,24 @@ int kvm_arch_init_vcpu(CPUX86State *env)
         memset(c, 0, sizeof(*c));
         c->function = KVM_CPUID_SIGNATURE_NEXT;
         memcpy(signature, "KVMKVMKVM\0\0\0", 12);
-        c->eax = 0;
+        if (env->cpuid_hv_features_set) {
+            c->eax = KVM_CPUID_SIGNATURE_NEXT -
+                KVM_CPUID_SIGNATURE + KVM_CPUID_FEATURES;
+        } else {
+            c->eax = 0;
+        }
         c->ebx = signature[0];
         c->ecx = signature[1];
         c->edx = signature[2];
+
+        if (env->cpuid_hv_features_set) {
+            c = &cpuid_data.entries[cpuid_i++];
+            memset(c, 0, sizeof(*c));
+            c->function = KVM_CPUID_SIGNATURE_NEXT -
+                KVM_CPUID_SIGNATURE + KVM_CPUID_FEATURES;
+            c->eax = env->cpuid_kvm_features &
+                kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0, R_EAX);
+        }
     }
 
     has_msr_async_pf_en = c->eax & (1 << KVM_FEATURE_ASYNC_PF);
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 13/17] target-i386: Add VMWare CPUID Timing information in -machine pc,accel=kvm.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

This is EAX and EBX data for 0x40000010.

Add new #define CPUID_HV_TIMING_INFO for this.

The best documentation I have found is:
   http://article.gmane.org/gmane.comp.emulators.kvm.devel/22643

And a test under ESXi 4.0 shows that VMware is setting this data.
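
A guest-side sketch of consuming the leaf (reusing the illustrative
cpuid() helper from the note under patch 11; the field meanings follow
the description quoted in the diff below):

    uint32_t tsc_khz, bus_khz, ecx, edx;

    cpuid(0x40000010, &tsc_khz, &bus_khz, &ecx, &edx);
    /* EAX: (virtual) TSC frequency in kHz.
     * EBX: (virtual) bus (local APIC timer) frequency in kHz.
     * ECX, EDX: reserved, returned as zero. */
    printf("TSC %u kHz, APIC bus %u kHz\n", tsc_khz, bus_khz);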

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.h |    4 ++++
 target-i386/kvm.c |   22 ++++++++++++++++++++++
 2 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 9a34c7b..6ceef05 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -488,6 +488,10 @@
 
 #define CPUID_VENDOR_VIA   "CentaurHauls"
 
+/* VMware hardware version 7 defines timing information as
+ * 0x40000010. */
+#define CPUID_HV_TIMING_INFO 0x40000010
+
 /* The maximum input value for hypervisor CPUID information for
  * Microsoft hypervisors.  Is related to HYPERV_CPUID_MIN. */
 #define CPUID_HV_LEVEL_HYPERV_CPUID_MIN  0x40000005
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index b61027f..81b0014 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -451,6 +451,28 @@ int kvm_arch_init_vcpu(CPUX86State *env)
         c->eax = 0x40;
         c->ebx = 0x40;
     }
+    if (env->cpuid_hv_level >= CPUID_HV_TIMING_INFO) {
+        const uint32_t apic_khz = 1000000L;
+
+        /*
+         * From article.gmane.org/gmane.comp.emulators.kvm.devel/22643
+         *
+         *    Leaf 0x40000010, Timing Information.
+         *
+         *    VMware has defined the first generic leaf to provide timing
+         *    information.  This leaf returns the current TSC frequency and
+         *    current Bus frequency in kHz.
+         *
+         *    # EAX: (Virtual) TSC frequency in kHz.
+         *    # EBX: (Virtual) Bus (local apic timer) frequency in kHz.
+         *    # ECX, EDX: RESERVED (Per above, reserved fields are set to zero).
+         */
+        c = &cpuid_data.entries[cpuid_i++];
+        memset(c, 0, sizeof(*c));
+        c->function = CPUID_HV_TIMING_INFO;
+        c->eax = (uint32_t)env->tsc_khz;
+        c->ebx = apic_khz;
+    }
     if (env->cpuid_hv_vendor_set) {
         c = &cpuid_data.entries[cpuid_i++];
         memset(c, 0, sizeof(*c));
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 14/17] target-i386: Add vmware as a known name to Hypervisor vendor.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also adds some other known names (kvm, hyperv) to Hypervisor vendor.

This allows hypervisor-vendor=vmware3 instead of
hypervisor-vendor=VMwareVMware,hypervisor-level=0x40000002,hypervisor-features=0.

And hypervisor-vendor=vmware instead of
hypervisor-vendor=VMwareVMware,hypervisor-level=0x40000010,hypervisor-features=0.

This is based on:

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458
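
With the aliases in place, a command line along these lines (an
illustrative invocation, assuming the CPU properties added earlier in
this series) is enough to present a VMware-style hypervisor interface:

    qemu-system-x86_64 -machine pc,accel=kvm \
        -cpu qemu64,hypervisor-vendor=vmware disk.img

The setter expands the alias into the vendor string, level, and
features exactly as listed above.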

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |   58 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 target-i386/cpu.h |    9 +++++++++
 2 files changed, 66 insertions(+), 1 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index f058add..c8466ec 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1203,6 +1203,23 @@ static char *x86_cpuid_get_hv_vendor(Object *obj, Error **errp)
     }
     value[CPUID_VENDOR_SZ] = '\0';
 
+    /* Convert known names */
+    if (!strcmp(value, CPUID_HV_VENDOR_HYPERV) &&
+        env->cpuid_hv_level == CPUID_HV_LEVEL_HYPERV_CPUID_MIN &&
+        env->cpuid_hv_features == CPUID_HV_FEATURES_HYPERV) {
+        pstrcpy(value, sizeof(value), "hyperv");
+    } else if (!strcmp(value, CPUID_HV_VENDOR_VMWARE) &&
+        env->cpuid_hv_features == CPUID_HV_FEATURES_VMWARE) {
+        if (env->cpuid_hv_level == CPUID_HV_LEVEL_VMWARE_4) {
+            pstrcpy(value, sizeof(value), "vmware4");
+        } else if (env->cpuid_hv_level == CPUID_HV_LEVEL_VMWARE_3) {
+            pstrcpy(value, sizeof(value), "vmware3");
+        }
+    } else if (!strcmp(value, CPUID_HV_VENDOR_KVM) &&
+               (env->cpuid_hv_level == CPUID_HV_LEVEL_KVM_0 ||
+                env->cpuid_hv_level == CPUID_HV_LEVEL_KVM_1)) {
+        pstrcpy(value, sizeof(value), "kvm");
+    }
     return value;
 }
 
@@ -1216,7 +1233,46 @@ static void x86_cpuid_set_hv_vendor(Object *obj, const char *value,
 
     memset(adj_value, 0, sizeof(adj_value));
 
-    pstrcpy(adj_value, sizeof(adj_value), value);
+    /* Convert known names */
+    if (!strcmp(value, "hyperv")) {
+        if (!env->cpuid_hv_level_set) {
+            object_property_set_int(obj, CPUID_HV_LEVEL_HYPERV_CPUID_MIN,
+                                    "hypervisor-level", errp);
+        }
+        if (!env->cpuid_hv_features_set) {
+            object_property_set_int(obj, CPUID_HV_FEATURES_HYPERV,
+                                    "hypervisor-features", errp);
+        }
+        pstrcpy(adj_value, sizeof(adj_value), CPUID_HV_VENDOR_HYPERV);
+    } else if (!strcmp(value, "vmware") || !strcmp(value, "vmware4")) {
+        if (!env->cpuid_hv_level_set) {
+            object_property_set_int(obj, CPUID_HV_LEVEL_VMWARE_4,
+                                    "hypervisor-level", errp);
+        }
+        if (!env->cpuid_hv_features_set) {
+            object_property_set_int(obj, CPUID_HV_FEATURES_VMWARE,
+                                    "hypervisor-features", errp);
+        }
+        pstrcpy(adj_value, sizeof(adj_value), CPUID_HV_VENDOR_VMWARE);
+    } else if (!strcmp(value, "vmware3")) {
+        if (!env->cpuid_hv_level_set) {
+            object_property_set_int(obj, CPUID_HV_LEVEL_VMWARE_3,
+                                    "hypervisor-level", errp);
+        }
+        if (!env->cpuid_hv_features_set) {
+            object_property_set_int(obj, CPUID_HV_FEATURES_VMWARE,
+                                    "hypervisor-features", errp);
+        }
+        pstrcpy(adj_value, sizeof(adj_value), CPUID_HV_VENDOR_VMWARE);
+    } else if (!strcmp(value, "kvm")) {
+        if (!env->cpuid_hv_level_set) {
+            object_property_set_int(obj, CPUID_HV_LEVEL_KVM_1,
+                                    "hypervisor-level", errp);
+        }
+        pstrcpy(adj_value, sizeof(adj_value), CPUID_HV_VENDOR_KVM);
+    } else {
+        pstrcpy(adj_value, sizeof(adj_value), value);
+    }
 
     env->cpuid_hv_vendor1 = 0;
     env->cpuid_hv_vendor2 = 0;
 
 env-cpuid_hv_vendor1 = 0;
 env-cpuid_hv_vendor2 = 0;
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 6ceef05..a387d82 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -498,6 +498,15 @@
 #define CPUID_HV_VENDOR_HYPERV "Microsoft Hv"
 #define CPUID_HV_FEATURES_HYPERV 0x31237648 /* Hv#1 */
 
+#define CPUID_HV_LEVEL_VMWARE_3 0x40000002
+#define CPUID_HV_LEVEL_VMWARE_4 0x40000010
+#define CPUID_HV_VENDOR_VMWARE "VMwareVMware"
+#define CPUID_HV_FEATURES_VMWARE 0
+
+#define CPUID_HV_LEVEL_KVM_0  0
+#define CPUID_HV_LEVEL_KVM_1  0x40000001
+#define CPUID_HV_VENDOR_KVM "KVMKVMKVM"
+
 #define CPUID_MWAIT_IBE (1 << 1) /* Interrupts can exit capability */
 #define CPUID_MWAIT_EMX (1 << 0) /* enumeration supported */
 
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 15/17] target-i386: Use Hypervisor level in -machine pc,accel=tcg.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as Paravirtualization level.

QEMU knows this as KVM_CPUID_SIGNATURE (0x40000000) in kvm on linux.

This does not provide vendor support in tcg yet.

From http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html
kvm has this issue:

Note also that old hosts set eax value to 0x0. This should
be interpreted as if the value was 0x40000001.

This change is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |   30 ++++++++++++++++++++++++++++++
 1 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index c8466ec..5b33b95 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1767,6 +1767,24 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
                 index =  env->cpuid_xlevel;
             }
         }
+    } else if (index & 0x40000000) {
+        /* test if maximum index reached
+         * but only if Hypervisor level is set */
+        if (env->cpuid_hv_level_set) {
+            uint32_t real_level = env->cpuid_hv_level;
+
+            /* Handle Hypervisor CPUIDs.
+             * kvm defines 0 to be the same as 0x40000001 */
+            if (real_level < 0x40000000) {
+                real_level = 0x40000001;
+            }
+            if (index > real_level) {
+                index = real_level;
+            }
+        } else {
+            if (index > env->cpuid_level)
+                index = env->cpuid_level;
+        }
     } else {
         if (index > env->cpuid_level)
             index = env->cpuid_level;
@@ -1905,6 +1923,18 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
             *edx = 0;
         }
         break;
+    case 0x40000000:
+        *eax = env->cpuid_hv_level;
+        *ebx = 0;
+        *ecx = 0;
+        *edx = 0;
+        break;
+    case 0x40000001:
+        *eax = env->cpuid_kvm_features;
+        *ebx = 0;
+        *ecx = 0;
+        *edx = 0;
+        break;
     case 0x80000000:
         *eax = env->cpuid_xlevel;
         *ebx = env->cpuid_vendor1;
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 16/17] target-i386: Use Hypervisor vendor in -machine pc,accel=tcg.

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

Also known as Paravirtualization vendor.

This change is based on:

Microsoft Hypervisor CPUID Leaves:
  
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx

Linux kernel change starts with:
  http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also:
  http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html
This is where the rule that 0 is the same as 0x40000001 is defined.

VMware documentation on CPUIDs (Mechanisms to determine if software is
running in a VMware virtual machine):
  
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |   11 ++++++-----
 1 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 5b33b95..49e5db3 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1769,8 +1769,9 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         }
     } else if (index & 0x40000000) {
         /* test if maximum index reached
-         * but only if Hypervisor level is set */
-        if (env->cpuid_hv_level_set) {
+         * but only if Hypervisor level is set or
+         * if Hypervisor vendor is set */
+        if (env->cpuid_hv_level_set || env->cpuid_hv_vendor_set) {
             uint32_t real_level = env->cpuid_hv_level;
 
             /* Handle Hypervisor CPUIDs.
@@ -1925,9 +1926,9 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         break;
     case 0x40000000:
         *eax = env->cpuid_hv_level;
-        *ebx = 0;
-        *ecx = 0;
-        *edx = 0;
+        *ebx = env->cpuid_hv_vendor1;
+        *ecx = env->cpuid_hv_vendor2;
+        *edx = env->cpuid_hv_vendor3;
         break;
     case 0x40000001:
         *eax = env->cpuid_kvm_features;
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 17/17] target-i386: Add VMWare CPUID Timing information in -machine pc,accel=tcg

2012-10-12 Thread Don Slutz
Part of target-i386: Add way to expose VMWare CPUID

This is EAX and EBX data for 0x40000010.

Add new #define CPUID_HV_TIMING_INFO for this.

The best documentation I have found is:
   http://article.gmane.org/gmane.comp.emulators.kvm.devel/22643

And a test under ESXi 4.0 shows that VMware is setting this data.

Signed-off-by: Don Slutz d...@cloudswitch.com
---
 target-i386/cpu.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 49e5db3..924db0d 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1936,6 +1936,12 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         *ecx = 0;
         *edx = 0;
         break;
+    case 0x40000010:
+        *eax = env->tsc_khz;
+        *ebx = 1000000; /* apic_khz */
+        *ecx = 0;
+        *edx = 0;
+        break;
     case 0x80000000:
         *eax = env->cpuid_xlevel;
         *ebx = env->cpuid_vendor1;
 case 0x8000:
 *eax = env-cpuid_xlevel;
 *ebx = env-cpuid_vendor1;
-- 
1.7.1

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [User Question] How to create a backup of an LVM-based machine without wasting space

2012-10-12 Thread Stefan Hajnoczi
On Fri, Oct 12, 2012 at 9:26 PM, Lukas Laukamp lu...@laukamp.me wrote:
 On 12.10.2012 21:13, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 8:14 PM, Lukas Laukamp lu...@laukamp.me wrote:

 On 12.10.2012 13:36, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 12:50 PM, Lukas Laukamp lu...@laukamp.me
 wrote:

 On 12.10.2012 12:47, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 12:16 PM, Lukas Laukamp lu...@laukamp.me
 wrote:

 On 12.10.2012 12:11, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 11:17 AM, Lukas Laukamp lu...@laukamp.me
 wrote:

 On 12.10.2012 10:58, Lukas Laukamp wrote:

 On 12.10.2012 10:42, Stefan Hajnoczi wrote:

 On Fri, Oct 12, 2012 at 08:52:32AM +0200, Lukas Laukamp wrote:

 I have a simple user question. I have a few LVM-based KVM guests and
 want to back them up to files. The simple and nasty way would be to
 create a complete output file with dd, which wastes a lot of space.
 So I would like to back up the LVM volume to a file which only
 allocates the space that is actually used on the LVM volume. It would
 be great if the output file were something like a qcow2 file which
 could also simply be started with KVM.

 If the VM is not running you can use qemu-img convert:

   qemu-img convert -f raw -O qcow2 /dev/vg/vm001 vm001-backup.qcow2

 Note that cp(1) tries to make the destination file sparse (see the
 --sparse option in the man page).  So you don't need to use qcow2,
 you can use cp(1) to copy the LVM volume to a raw file.  It will not
 use disk space for zero regions.

 If the VM is running you need to use LVM snapshots or stop the VM
 temporarily so a crash-consistent backup can be taken.

 Stefan


 Hello Stefan,

 thanks for the fast reply. I will test this later. In my case now it
 would be an offline backup. For the online backup I am thinking about
 a separate system which makes incremental backups every day and a
 full backup once a week. The main problem is that the systems are in
 a WAN network and I need encryption between the systems. Would it be
 possible to do something like this: create the LVM snapshot for the
 backup, read this LVM snapshot from the remote backup system via an
 ssh tunnel, and save the output to qcow2 files on the backup system?
 And in which format could the incremental backups be stored?

 Since there is a WAN link it's important to use a compact image
 representation before hitting the network. I would use qemu-img
 convert -O qcow2 on the host and only transfer the qcow2 output.  The
 qcow2 file does not contain zero regions and will therefore save a
 lot of network bandwidth compared to accessing the LVM volume over
 the WAN.

 If you are using rsync or another tool it's a different story.  You
 could rsync the current LVM volume on the host over the last full
 backup, it should avoid transferring image data which is already
 present in the last full backup - the result is that you only
 transfer changed data plus the rsync metadata.

 Stefan


 Hello Stefan,

 I have not fully understood the rsync part. So creating a qcow2 on
 the host and transferring it to the backup server will result in the
 weekly full backup. Do you mean I could use rsync to read the LVM
 volume from the host, compare the LVM data with the data in the qcow2
 on the backup server, and simply transfer the differences to the
 file? Or does it work another way?

 When using rsync you can skip qcow2.  Only two objects are needed:
 1. The LVM volume on the host.
 2. The last full backup on the backup client.

 rsync compares #1 and #2 efficiently over the network and only
 transfers data from #1 which has changed.

 After rsync completes your full backup image is identical to the LVM
 volume.  Next week you can use it as the last image to rsync
 against.

 Stefan


 So I simply update the full backup, which is simply a raw file that
 gets mounted during the backup?

 The image file does not need to be mounted.  Just rsync the raw image
 file.

 Stefan


 I tested the qemu-img command now, but it does not do what I want. I
 have a VM with a 5GB disk, and this disk only has about 1GB of data
 allocated. When I do the convert command the output is a 5GB qcow2
 disk. What do I have to do to get a qcow2 file with only the
 allocated space/data from the LVM volume? I also tried the -c option
 of qemu-img convert but the result was nearly the same.

 Please show the exact command-lines you are using and the qemu-img
 info filename output afterwards.

 Stefan
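
(For illustration, with the VM shut down, the conversion plus the
check being asked for would look roughly like this; /dev/vg/vm001 is
the example path from earlier in the thread:

   qemu-img convert -f raw -O qcow2 /dev/vg/vm001 vm001-backup.qcow2
   qemu-img info vm001-backup.qcow2

qemu-img info reports both a virtual size, which stays at 5G, and a
disk size, which only counts the data actually allocated in the
image.)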

Re: [User Question] How to create a backup of an LVM-based machine without wasting space

2012-10-12 Thread Javier Guerra Giraldez
On Fri, Oct 12, 2012 at 3:56 PM, Lukas Laukamp lu...@laukamp.me wrote:
 I think that it must be possible to create an image with a size like the
 used space + a few hundred MB with metadata or something like that.

the 'best' way to do it is 'from within' the VM

the typical workaround is to mount/fsck an LVM snapshot (don't skip the
fsck, or at least a journal replay)
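
a rough sketch of that workaround, with illustrative volume names and
assuming the filesystem sits directly on the logical volume:

   lvcreate -s -L 1G -n vm001-snap /dev/vg/vm001
   fsck -a /dev/vg/vm001-snap    # replay the journal on the unclean snapshot
   mount -o ro /dev/vg/vm001-snap /mnt/snap
   # ... back up /mnt/snap, then ...
   umount /mnt/snap
   lvremove -f /dev/vg/vm001-snap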

beyond that, there are a few utilities for specific filesystems:
PartImage [1], dump/restore [2], and I'm sure some others.  I don't
know how these would behave with unclean images, which is what you get
if you pull the image out from under the VM's feet.


[1] http://www.partimage.org/Main_Page
[2] http://dump.sourceforge.net/


-- 
Javier
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html