Re: [Qemu-devel] [PATCH] Add definitions for current cpu models..

2010-01-26 Thread Gerd Hoffmann

On 01/25/10 23:35, Dor Laor wrote:

On 01/25/2010 04:21 PM, Anthony Liguori wrote:

Another way to look at this is that implementing a somewhat arbitrary
policy within QEMU's .c files is something we should try to avoid.
Implementing arbitrary policy in our default config file is a fine thing
to do. Default configs are suggested configurations that are modifiable
by a user. Something baked into QEMU is something that ought to work for



If we get the models right, users and mgmt stacks won't need to define
them. It seems like an almost impossible task for us; mgmt stacks/users
won't do a better job, rather the opposite, I'd guess. The configs are great, I
have no argument against them; my case is that if we can pin down some
definitions, they'd better live in the code, like the above models.
It might even help to get the same cpus across the various vendors;
otherwise we might end up with IBM's core2duo, RH's core2duo, Suse's, ...


I agree.  Looking at this thread and the config file idea, it feels a 
bit like we are having a hard time agreeing on some sensible default cpu 
types, so let's make this configurable so we don't have to.  Which is a 
bad thing IMHO.
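For illustration, a purely data-driven cpu model definition in a default config file might look something like the sketch below. This is a hypothetical format: the section name, keys, and values are assumptions made up for this example, not an agreed-on syntax.

```ini
# Hypothetical cpu model stanza in a qemu default config file.
# All names and values here are illustrative only.
[cpudef]
    name = "core2duo"
    level = "10"
    vendor = "GenuineIntel"
    family = "6"
    model = "15"
    stepping = "11"
    feature_edx = "fpu pae mmx fxsr sse sse2"
    feature_ecx = "sse3 ssse3"
    xlevel = "0x80000008"
    model_id = "Intel(R) Core(TM)2 Duo CPU"
```

A user (or a mgmt stack) could then override or extend such stanzas without touching any .c file, which is the data-driven property being argued over here.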


cheers,
  Gerd
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] QEMU - provide e820 reserve through qemu_cfg

2010-01-26 Thread Jes Sorensen

On 01/26/10 07:46, Gleb Natapov wrote:

On Mon, Jan 25, 2010 at 06:13:35PM +0100, Jes Sorensen wrote:

I am fine with having QEMU build the e820 tables completely if there is
a consensus to take that path.


QEMU can't build the e820 map completely. There are things it doesn't
know. Like how much memory ACPI tables take and where they are located.


Good point!

I think the conclusion is to do a load-extra-tables kind of interface
allowing QEMU to pass in a bunch of them, but leaving things like the
ACPI space for the BIOS to reserve.

Cheers,
Jes


git snapshot can open readonly vm image

2010-01-26 Thread John Wong
Hi, after upgrading to the current qemu-kvm (downloaded from
http://git.kernel.org/?p=virt/kvm/qemu-kvm.git;a=snapshot;h=34f6b13147766789fc2ef289f5b420f85e51d049;sf=tgz),
I can not open a read-only VM image; it worked before.

Please help, thank you.


Re: git snapshot can open readonly vm image

2010-01-26 Thread Avi Kivity
On 01/26/2010 10:49 AM, John Wong wrote:
 Hi, after upgrading to the current qemu-kvm (downloaded from
 http://git.kernel.org/?p=virt/kvm/qemu-kvm.git;a=snapshot;h=34f6b13147766789fc2ef289f5b420f85e51d049;sf=tgz),
 I can not open a read-only VM image; it worked before.

 Please help, thank you.

It's impossible to understand what's wrong without a lot more details.

-- 
error compiling committee.c: too many arguments to function



Re: 2.6.32-KVM-pit_ioport_read() integer buffer overflow hole

2010-01-26 Thread Avi Kivity

On 01/26/2010 10:59 AM, wzt wzt wrote:

Hi:
 In kernel 2.6.32, arch/x86/kvm/i8254.c, I found that
pit_ioport_read() may have an integer buffer overflow hole:

static int pit_ioport_read(struct kvm_io_device *this,
                           gpa_t addr, int len, void *data)
{
...
    if (len > sizeof(ret))
        len = sizeof(ret);

    memcpy(data, (char *)&ret, len);  /* if len is negative (< 0),
                                         the data memory will be overflowed */
...
}



Is there any caller that can send a negative length, user- or guest-controlled?


--
error compiling committee.c: too many arguments to function



Re: KVM call agenda for Jan 26

2010-01-26 Thread Alexander Graf

On 26.01.2010, at 07:49, Chris Wright wrote:

 Please send in any agenda items you are interested in covering.

KVM Hardware Inquiry Tool

One of the things I have on my todo list is a tool you can run on your machine 
that tells you which virtualization features it supports. Imaginary output of 
such a tool:

--

KVM Supported: yes
NPT/EPT: yes
Device Assignment: no

Expected Virtual CPU Speed: 95%

--

That way users can easily determine what to expect when they run KVM on a 
machine, without needing to know about CPUID flags that don't even get exposed 
in /proc/cpuinfo, or having to grep dmesg.

My main question on this one is how to best implement it.

Should this be part of qemu? We'll need some architecture specific backend 
code, so leveraging the structure might be helpful.
Should this be a separate script? That'd mean installing one more application 
that distros might name differently :(.
Does it even have chances to get accepted upstream?


Alex


[PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Sheng Yang
The default behaviour of coalesced MMIO is to cache writes in a buffer until:
1. The buffer is full.
2. Or there is an exit to QEmu for other reasons.

But this can result in very late writes when:
1. Each MMIO write is small.
2. The interval between writes is big.
3. There is no need for input or frequent access to other devices.

This issue was observed in an experimental embedded system. The test image
simply prints "test" every second. The output in QEmu meets expectations,
but the output in KVM is delayed for seconds.

Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
VGA update handler. This way, we don't need an explicit vcpu exit to QEmu to
handle this issue.

Signed-off-by: Sheng Yang sh...@linux.intel.com
---
 cpu-all.h |2 ++
 exec.c|6 ++
 kvm-all.c |   21 +
 kvm.h |1 +
 vl.c  |2 ++
 5 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index 57b69f8..1ccc9a8 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -915,6 +915,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
 void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
+void qemu_flush_coalesced_mmio_buffer(void);
+
 /***/
 /* host CPU ticks (if available) */
 
diff --git a/exec.c b/exec.c
index 1190591..6875370 100644
--- a/exec.c
+++ b/exec.c
@@ -2406,6 +2406,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
 kvm_uncoalesce_mmio_region(addr, size);
 }
 
+void qemu_flush_coalesced_mmio_buffer(void)
+{
+if (kvm_enabled())
+kvm_flush_coalesced_mmio_buffer();
+}
+
 ram_addr_t qemu_ram_alloc(ram_addr_t size)
 {
 RAMBlock *new_block;
diff --git a/kvm-all.c b/kvm-all.c
index 15ec38e..889fc42 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -59,6 +59,7 @@ struct KVMState
 int vmfd;
 int regs_modified;
 int coalesced_mmio;
+struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
 int broken_set_mem_region;
 int migration_log;
 int vcpu_events;
@@ -200,6 +201,12 @@ int kvm_init_vcpu(CPUState *env)
 goto err;
 }
 
+#ifdef KVM_CAP_COALESCED_MMIO
+    if (s->coalesced_mmio && !s->coalesced_mmio_ring)
+        s->coalesced_mmio_ring = (void *) env->kvm_run +
+               s->coalesced_mmio * PAGE_SIZE;
+#endif
+
 ret = kvm_arch_init_vcpu(env);
 if (ret == 0) {
 qemu_register_reset(kvm_reset_vcpu, env);
@@ -466,10 +473,10 @@ int kvm_init(int smp_cpus)
 goto err;
 }
 
+    s->coalesced_mmio = 0;
+    s->coalesced_mmio_ring = NULL;
 #ifdef KVM_CAP_COALESCED_MMIO
     s->coalesced_mmio = kvm_check_extension(s, KVM_CAP_COALESCED_MMIO);
-#else
-    s->coalesced_mmio = 0;
 #endif

     s->broken_set_mem_region = 1;
@@ -544,14 +551,12 @@ static int kvm_handle_io(uint16_t port, void *data, int direction, int size,
 return 1;
 }
 
-static void kvm_run_coalesced_mmio(CPUState *env, struct kvm_run *run)
+void kvm_flush_coalesced_mmio_buffer(void)
 {
 #ifdef KVM_CAP_COALESCED_MMIO
     KVMState *s = kvm_state;
-    if (s->coalesced_mmio) {
-        struct kvm_coalesced_mmio_ring *ring;
-
-        ring = (void *)run + (s->coalesced_mmio * TARGET_PAGE_SIZE);
+    if (s->coalesced_mmio_ring) {
+        struct kvm_coalesced_mmio_ring *ring = s->coalesced_mmio_ring;
         while (ring->first != ring->last) {
             struct kvm_coalesced_mmio *ent;
 
@@ -609,7 +614,7 @@ int kvm_cpu_exec(CPUState *env)
 abort();
 }
 
-kvm_run_coalesced_mmio(env, run);
+kvm_flush_coalesced_mmio_buffer();
 
 ret = 0; /* exit loop */
     switch (run->exit_reason) {
diff --git a/kvm.h b/kvm.h
index 1c93ac5..59cba18 100644
--- a/kvm.h
+++ b/kvm.h
@@ -53,6 +53,7 @@ void kvm_setup_guest_memory(void *start, size_t size);
 
 int kvm_coalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
+void kvm_flush_coalesced_mmio_buffer(void);
 
 int kvm_insert_breakpoint(CPUState *current_env, target_ulong addr,
   target_ulong len, int type);
diff --git a/vl.c b/vl.c
index 2b0b653..1f0c536 100644
--- a/vl.c
+++ b/vl.c
@@ -3193,6 +3193,7 @@ static void gui_update(void *opaque)
 DisplayState *ds = opaque;
     DisplayChangeListener *dcl = ds->listeners;
 
+qemu_flush_coalesced_mmio_buffer();
 dpy_refresh(ds);
 
 while (dcl != NULL) {
@@ -3208,6 +3209,7 @@ static void nographic_update(void *opaque)
 {
 uint64_t interval = GUI_REFRESH_INTERVAL;
 
+qemu_flush_coalesced_mmio_buffer();
 qemu_mod_timer(nographic_timer, interval + qemu_get_clock(rt_clock));
 }
 
-- 
1.5.4.5



Re: WinXP virtual crashes on 0.12.1.2 but not 0.12.1.1

2010-01-26 Thread Mark Cave-Ayland

Avi Kivity wrote:

Unfortunately, no such luck.  Apparently this is not msr/cpuid related - 
perhaps power management.  Can you enable the kvm_mmio and kvm_pio 
events?  Perhaps they will provide a clue.


Yeah, I did take a quick glance at the trace and figured that there 
would be some (#GP) strings in there if this were the cause. At least 
that probably explains why the ignore_msrs=1 parameter didn't have an 
effect.


I've just created the kvm_mmio and kvm_pio trace here for you to look 
at: http://www.siriusit.co.uk/tmp/kvm1.trace.



ATB,

Mark.

--
Mark Cave-Ayland - Senior Technical Architect
PostgreSQL - PostGIS
Sirius Corporation plc - control through freedom
http://www.siriusit.co.uk
t: +44 870 608 0063

Sirius Labs: http://www.siriusit.co.uk/labs


Re: [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Alexander Graf

On 26.01.2010, at 10:41, Sheng Yang wrote:

 The default behaviour of coalesced MMIO is to cache writes in a buffer until:
 1. The buffer is full.
 2. Or there is an exit to QEmu for other reasons.
 
 But this can result in very late writes when:
 1. Each MMIO write is small.
 2. The interval between writes is big.
 3. There is no need for input or frequent access to other devices.
 
 This issue was observed in an experimental embedded system. The test image
 simply prints "test" every second. The output in QEmu meets expectations,
 but the output in KVM is delayed for seconds.
 
 Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
 VGA update handler. This way, we don't need an explicit vcpu exit to QEmu to
 handle this issue.
 
 Signed-off-by: Sheng Yang sh...@linux.intel.com
 ---
 cpu-all.h |2 ++
 exec.c|6 ++
 kvm-all.c |   21 +
 kvm.h |1 +
 vl.c  |2 ++
 5 files changed, 24 insertions(+), 8 deletions(-)
 
 diff --git a/cpu-all.h b/cpu-all.h
 index 57b69f8..1ccc9a8 100644
 --- a/cpu-all.h
 +++ b/cpu-all.h
 @@ -915,6 +915,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t 
 addr, ram_addr_t size);
 
 void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
 +void qemu_flush_coalesced_mmio_buffer(void);
 +
 /***/
 /* host CPU ticks (if available) */
 
 diff --git a/exec.c b/exec.c
 index 1190591..6875370 100644
 --- a/exec.c
 +++ b/exec.c
 @@ -2406,6 +2406,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t 
 addr, ram_addr_t size)
 kvm_uncoalesce_mmio_region(addr, size);
 }
 
 +void qemu_flush_coalesced_mmio_buffer(void)
 +{
 +if (kvm_enabled())
 +kvm_flush_coalesced_mmio_buffer();
 +}
 +
 ram_addr_t qemu_ram_alloc(ram_addr_t size)
 {
 RAMBlock *new_block;
 diff --git a/kvm-all.c b/kvm-all.c
 index 15ec38e..889fc42 100644
 --- a/kvm-all.c
 +++ b/kvm-all.c
 @@ -59,6 +59,7 @@ struct KVMState
 int vmfd;
 int regs_modified;
 int coalesced_mmio;
 +struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;

I guess this needs to be guarded by an #ifdef?


Alex


Re: git snapshot can open readonly vm image

2010-01-26 Thread John Wong
Avi Kivity wrote:
 On 01/26/2010 10:49 AM, John Wong wrote:
 Hi, after upgrading to the current qemu-kvm (downloaded from
 http://git.kernel.org/?p=virt/kvm/qemu-kvm.git;a=snapshot;h=34f6b13147766789fc2ef289f5b420f85e51d049;sf=tgz),
 I can not open a read-only VM image; it worked before.

 Please help, thank you.

 It's impossible to understand what's wrong without a lot more details.

Sorry, let me give more details.

After upgrading to the current qemu-kvm
(http://git.kernel.org/?p=virt/kvm/qemu-kvm.git;a=snapshot;h=34f6b13147766789fc2ef289f5b420f85e51d049;sf=tgz),
I can not open a VM image which has only read-only (0644) permission
for my login ID.

And I can do that with qemu-kvm (downloaded from git.kernel.org on
10-Jan-2010);
both use the same kernel modules (2.6.32 from the Debian kernel package).

My system is Debian/sid/amd64.
uname -a: Linux retro 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC
2010 x86_64 GNU/Linux
ls -l image.qcow2: -rw-r--r-- 1 root root 3235643392 2010-01-11 16:24
image.qcow2
The qemu-kvm error message looks like this: qemu: could not open disk image
/home/root/kvm/winxp-hkjc-full.qcow2: Permission denied


And I noticed that, after upgrading to the current qemu-kvm, I see a lot
of warning messages like this:
QEMU 0.12.50 monitor - type 'help' for more information
(qemu) BUG: kvm_dirty_pages_log_enable_slot: invalid parameters
BUG: kvm_dirty_pages_log_disable_slot: invalid parameters
BUG: kvm_dirty_pages_log_enable_slot: invalid parameters
BUG: kvm_dirty_pages_log_disable_slot: invalid parameters
BUG: kvm_dirty_pages_log_enable_slot: invalid parameters

Please help, thank you.




Re: [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Sheng Yang
On Tue, Jan 26, 2010 at 10:59:17AM +0100, Alexander Graf wrote:
 
 On 26.01.2010, at 10:41, Sheng Yang wrote:
 
  --- a/kvm-all.c
  +++ b/kvm-all.c
  @@ -59,6 +59,7 @@ struct KVMState
  int vmfd;
  int regs_modified;
  int coalesced_mmio;
  +struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
 
 I guess this needs to be guarded by an #ifdef?

Oh, yes. Thanks for the reminder. :)

-- 
regards
Yang, Sheng


[PATCH v3][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Sheng Yang
The default behaviour of coalesced MMIO is to cache writes in a buffer until:
1. The buffer is full.
2. Or there is an exit to QEmu for other reasons.

But this can result in very late writes when:
1. Each MMIO write is small.
2. The interval between writes is big.
3. There is no need for input or frequent access to other devices.

This issue was observed in an experimental embedded system. The test image
simply prints "test" every second. The output in QEmu meets expectations,
but the output in KVM is delayed for seconds.

Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
VGA update handler. This way, we don't need an explicit vcpu exit to QEmu to
handle this issue.

Signed-off-by: Sheng Yang sh...@linux.intel.com
---
 cpu-all.h |2 ++
 exec.c|6 ++
 kvm-all.c |   23 +++
 kvm.h |1 +
 vl.c  |2 ++
 5 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index 57b69f8..1ccc9a8 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -915,6 +915,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
 void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
 
+void qemu_flush_coalesced_mmio_buffer(void);
+
 /***/
 /* host CPU ticks (if available) */
 
diff --git a/exec.c b/exec.c
index 1190591..6875370 100644
--- a/exec.c
+++ b/exec.c
@@ -2406,6 +2406,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
 kvm_uncoalesce_mmio_region(addr, size);
 }
 
+void qemu_flush_coalesced_mmio_buffer(void)
+{
+if (kvm_enabled())
+kvm_flush_coalesced_mmio_buffer();
+}
+
 ram_addr_t qemu_ram_alloc(ram_addr_t size)
 {
 RAMBlock *new_block;
diff --git a/kvm-all.c b/kvm-all.c
index 15ec38e..f8350c9 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -59,6 +59,9 @@ struct KVMState
 int vmfd;
 int regs_modified;
 int coalesced_mmio;
+#ifdef KVM_CAP_COALESCED_MMIO
+struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
+#endif
 int broken_set_mem_region;
 int migration_log;
 int vcpu_events;
@@ -200,6 +203,12 @@ int kvm_init_vcpu(CPUState *env)
 goto err;
 }
 
+#ifdef KVM_CAP_COALESCED_MMIO
+    if (s->coalesced_mmio && !s->coalesced_mmio_ring)
+        s->coalesced_mmio_ring = (void *) env->kvm_run +
+               s->coalesced_mmio * PAGE_SIZE;
+#endif
+
 ret = kvm_arch_init_vcpu(env);
 if (ret == 0) {
 qemu_register_reset(kvm_reset_vcpu, env);
@@ -466,10 +475,10 @@ int kvm_init(int smp_cpus)
 goto err;
 }
 
+    s->coalesced_mmio = 0;
 #ifdef KVM_CAP_COALESCED_MMIO
     s->coalesced_mmio = kvm_check_extension(s, KVM_CAP_COALESCED_MMIO);
-#else
-    s->coalesced_mmio = 0;
+    s->coalesced_mmio_ring = NULL;
 #endif

     s->broken_set_mem_region = 1;
@@ -544,14 +553,12 @@ static int kvm_handle_io(uint16_t port, void *data, int direction, int size,
 return 1;
 }
 
-static void kvm_run_coalesced_mmio(CPUState *env, struct kvm_run *run)
+void kvm_flush_coalesced_mmio_buffer(void)
 {
 #ifdef KVM_CAP_COALESCED_MMIO
     KVMState *s = kvm_state;
-    if (s->coalesced_mmio) {
-        struct kvm_coalesced_mmio_ring *ring;
-
-        ring = (void *)run + (s->coalesced_mmio * TARGET_PAGE_SIZE);
+    if (s->coalesced_mmio_ring) {
+        struct kvm_coalesced_mmio_ring *ring = s->coalesced_mmio_ring;
         while (ring->first != ring->last) {
             struct kvm_coalesced_mmio *ent;
 
@@ -609,7 +616,7 @@ int kvm_cpu_exec(CPUState *env)
 abort();
 }
 
-kvm_run_coalesced_mmio(env, run);
+kvm_flush_coalesced_mmio_buffer();
 
 ret = 0; /* exit loop */
     switch (run->exit_reason) {
diff --git a/kvm.h b/kvm.h
index 1c93ac5..59cba18 100644
--- a/kvm.h
+++ b/kvm.h
@@ -53,6 +53,7 @@ void kvm_setup_guest_memory(void *start, size_t size);
 
 int kvm_coalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
+void kvm_flush_coalesced_mmio_buffer(void);
 
 int kvm_insert_breakpoint(CPUState *current_env, target_ulong addr,
   target_ulong len, int type);
diff --git a/vl.c b/vl.c
index 2b0b653..1f0c536 100644
--- a/vl.c
+++ b/vl.c
@@ -3193,6 +3193,7 @@ static void gui_update(void *opaque)
 DisplayState *ds = opaque;
     DisplayChangeListener *dcl = ds->listeners;
 
+qemu_flush_coalesced_mmio_buffer();
 dpy_refresh(ds);
 
 while (dcl != NULL) {
@@ -3208,6 +3209,7 @@ static void nographic_update(void *opaque)
 {
 uint64_t interval = GUI_REFRESH_INTERVAL;
 
+qemu_flush_coalesced_mmio_buffer();
 qemu_mod_timer(nographic_timer, interval + qemu_get_clock(rt_clock));
 }
 
-- 
1.5.4.5



Re: [PATCH 0/2] Fix failed msr tracing

2010-01-26 Thread Marcelo Tosatti
On Mon, Jan 25, 2010 at 07:36:02PM +0200, Avi Kivity wrote:
 We don't trace failed msr access (wrmsr or rdmsr which end up generating a
 #GP), which loses important data.
 
 Avi Kivity (2):
   KVM: Fix msr trace
   KVM: Trace failed msr reads and writes
 
  arch/x86/kvm/svm.c   |   13 -
  arch/x86/kvm/trace.h |   27 ---
  arch/x86/kvm/vmx.c   |5 +++--
  3 files changed, 27 insertions(+), 18 deletions(-)

Applied, thanks.



Re: [PATCH v3][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Marcelo Tosatti
On Tue, Jan 26, 2010 at 07:21:16PM +0800, Sheng Yang wrote:
 The default behaviour of coalesced MMIO is to cache writes in a buffer until:
 1. The buffer is full.
 2. Or there is an exit to QEmu for other reasons.
 
 But this can result in very late writes when:
 1. Each MMIO write is small.
 2. The interval between writes is big.
 3. There is no need for input or frequent access to other devices.
 
 This issue was observed in an experimental embedded system. The test image
 simply prints "test" every second. The output in QEmu meets expectations,
 but the output in KVM is delayed for seconds.
 
 Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
 VGA update handler. This way, we don't need an explicit vcpu exit to QEmu to
 handle this issue.
 
 Signed-off-by: Sheng Yang sh...@linux.intel.com

Applied, thanks.



TPM Support in KVM

2010-01-26 Thread Martin Schneider
Dear list,

is there a document that describes the level of support of trusted
computing technology in KVM and how things work?

I read in various sources that KVM should support virtual Trusted
Platform Modules in virtual machines, but I couldn't find any evidence
and/or documentation about this on the official site.

Thanks a lot
Martin


Re: [Qemu-devel] [PATCH] Add definitions for current cpu models..

2010-01-26 Thread Anthony Liguori

On 01/26/2010 02:26 AM, Gerd Hoffmann wrote:

On 01/25/10 23:35, Dor Laor wrote:

On 01/25/2010 04:21 PM, Anthony Liguori wrote:

Another way to look at this is that implementing a somewhat arbitrary
policy within QEMU's .c files is something we should try to avoid.
Implementing arbitrary policy in our default config file is a fine 
thing

to do. Default configs are suggested configurations that are modifiable
by a user. Something baked into QEMU is something that ought to work 
for



If we get the models right, users and mgmt stacks won't need to define
them. It seems like an almost impossible task for us; mgmt stacks/users
won't do a better job, rather the opposite, I'd guess. The configs are great, I
have no argument against them; my case is that if we can pin down some
definitions, they'd better live in the code, like the above models.
It might even help to get the same cpus across the various vendors;
otherwise we might end up with IBM's core2duo, RH's core2duo, Suse's, ...


I agree.  Looking at this thread and the config file idea, it feels a 
bit like we are having a hard time agreeing on some sensible default cpu 
types, so let's make this configurable so we don't have to.  Which is 
a bad thing IMHO.


There's no sensible default.  If a user only has Nehalem-EX class 
processors and Westmeres, why would they want to limit themselves to 
just Nehalem?  For an organization that already uses and understands the 
VMware grouping, is it wrong for them to want to just use VMware-style 
grouping?


This feature is purely data driven.  There is no code involved.  Any 
time a feature is purely data driven and there isn't a clear right and 
wrong solution, a configuration file is a natural solution IMHO.


I think the only real question is whether it belongs in the default 
config or a dedicated configuration file but honestly that's just a 
statement of convention.


Regards,

Anthony Liguori


cheers,
  Gerd




Re: TPM Support in KVM

2010-01-26 Thread Anthony Liguori

On 01/26/2010 06:47 AM, Martin Schneider wrote:

Dear list,

is there a document that describes the level of support of trusted
computing technology in KVM and how things work?

I read in various sources that KVM should support virtual Trusted
Platform Modules in virtual machines, but I couldn't find any evidence
and/or documentation about this on the official site.


It is not (yet) supported in KVM.

Regards,

Anthony Liguori


Thanks a lot
Martin


Re: KVM call agenda for Jan 26

2010-01-26 Thread Anthony Liguori

On 01/26/2010 03:09 AM, Alexander Graf wrote:

On 26.01.2010, at 07:49, Chris Wright wrote:

   

Please send in any agenda items you are interested in covering.
 

KVM Hardware Inquiry Tool
   


Avi beat you to it ;-)  See vmxcap in the tree.


One of the things I have on my todo list is a tool you can run on your machine 
that tells you which virtualization features it supports. Imaginary output of 
such a tool:

--

KVM Supported: yes
NPT/EPT: yes
Device Assignment: no

Expected Virtual CPU Speed: 95%
   


I would suggest exercising caution in making such a broad performance 
statement.  It's never going to be that simple.


Regards,

Anthony Liguori


Re: KVM call agenda for Jan 26

2010-01-26 Thread Avi Kivity

On 01/26/2010 03:11 PM, Anthony Liguori wrote:

On 01/26/2010 03:09 AM, Alexander Graf wrote:

On 26.01.2010, at 07:49, Chris Wright wrote:


Please send in any agenda items you are interested in covering.

KVM Hardware Inquiry Tool


Avi beat you to it ;-)  See vmxcap in the tree.


I knew I should have put a disclaimer in there.  Maybe I should make the 
output vary randomly over time?


Anyway we really need a virtualization stack inquiry tool, since 
capabilities depend on the hardware, kernel, and qemu.


--
error compiling committee.c: too many arguments to function



Re: KVM call agenda for Jan 26

2010-01-26 Thread Alexander Graf

On 26.01.2010, at 14:11, Anthony Liguori wrote:

 On 01/26/2010 03:09 AM, Alexander Graf wrote:
 On 26.01.2010, at 07:49, Chris Wright wrote:
 
   
 Please send in any agenda items you are interested in covering.
 
 KVM Hardware Inquiry Tool
   
 
 Avi beat you to it ;-)  See vmxcap in the tree.

Interesting. Though as the name implies it's for VMX. No good for anybody but 
Intel users. I was more thinking of something generic that would also work just 
fine on PPC and S390.

 
 One of the things I have on my todo list is a tool you can run on your 
 machine that tells you which virtualization features it supports. Imaginary 
 output of such a tool:
 
 --
 
 KVM Supported: yes
 NPT/EPT: yes
 Device Assignment: no
 
 Expected Virtual CPU Speed: 95%
   
 
 I would suggest exercising caution in making such a broad performance 
 statement.  It's never going to be that simple.

Well, I think we should tell users something. We are telling them "According to 
performance measurements, using NPT with a non-IO-heavy workload gives you 
> 90% native performance in the VM today already". At least that's what I 
remembered ;-).

The message should be something really simple, so users know what to expect from 
KVM before they actually use it. With all the device assignment questions 
arising, that somehow seems to underline my statement.

I'd also like to see some simple help analysis built into this tool. Something 
like "VMX is disabled in the BIOS", "Machine is device passthrough capable, but 
it's disabled in the BIOS", or "Please pass parameter XXX to the kernel command 
line to activate feature Y".

The main question is where does it belong?

a) built into qemu
b) built as separate tool, but shipped with qemu
c) completely separate

I'm personally leaning towards a. That way we can reuse the detection code and 
give help when an option is used that doesn't work.

Alex


Re: KVM call agenda for Jan 26

2010-01-26 Thread Avi Kivity

On 01/26/2010 03:18 PM, Alexander Graf wrote:


The main question is where does it belong?

a) built into qemu
b) built as separate tool, but shipped with qemu
c) completely separate

I'm personally leaning towards a. That way we can reuse the detection code and 
give help when an option is used that doesn't work.

   


Me too, especially as the whole stack is involved, and qemu is the 
topmost part from our perspective (no doubt libvirt will want to 
integrate that functionality as well).


--
error compiling committee.c: too many arguments to function



Re: KVM call agenda for Jan 26

2010-01-26 Thread Daniel P. Berrange
On Tue, Jan 26, 2010 at 03:24:50PM +0200, Avi Kivity wrote:
 On 01/26/2010 03:18 PM, Alexander Graf wrote:
 
 The main question is where does it belong?
 
 a) built into qemu
 b) built as separate tool, but shipped with qemu
 c) completely separate
 
 I'm personally leaning towards a. That way we can reuse the detection code 
 and give help when an option is used that doesn't work.
 

 
 Me too, especially as the whole stack is involved, and qemu is the 
 topmost part from our perspective (no doubt libvirt will want to 
 integrate that functionality as well).

FYI, libvirt already exposes this kind of functionality. The API call
virConnectGetCapabilities() / the command line "virsh capabilities" command
tells you what the virtualization host is able to support. It can
tell you which architectures are supported, and by which binaries. What
machine types are available. Whether KVM or KQEMU acceleration is
present. What CPU model / flags are on the host. What NUMA topology is
available. Etc etc.

The data format it outputs, though, is not exactly targeted for direct
end user consumption; rather it's an XML doc aimed at applications.
The virt-manager app tries to use this to inform the user of problems
such as hardware virt being available but not enabled.
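As an illustration of consuming that XML from an application, here is a sketch; the sample document below is a trimmed, hypothetical fragment of what `virsh capabilities` emits, not real output:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily trimmed capabilities document; real output
# carries guest domains, NUMA topology, machine types, etc.
SAMPLE = """
<capabilities>
  <host>
    <cpu><arch>x86_64</arch><model>core2duo</model>
      <feature name='vmx'/><feature name='ssse3'/>
    </cpu>
  </host>
</capabilities>
"""

def host_cpu_features(caps_xml):
    """Return (model, [feature names]) from a capabilities document."""
    root = ET.fromstring(caps_xml)
    cpu = root.find('./host/cpu')
    model = cpu.findtext('model')
    feats = [f.get('name') for f in cpu.findall('feature')]
    return model, feats

model, feats = host_cpu_features(SAMPLE)
print(model, feats)  # core2duo ['vmx', 'ssse3']
```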

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|


Re: KVM call agenda for Jan 26

2010-01-26 Thread Alexander Graf

On 26.01.2010, at 14:33, Daniel P. Berrange wrote:

 On Tue, Jan 26, 2010 at 03:24:50PM +0200, Avi Kivity wrote:
 On 01/26/2010 03:18 PM, Alexander Graf wrote:
 
 The main question is where does it belong?
 
 a) built into qemu
 b) built as separate tool, but shipped with qemu
 c) completely separate
 
 I'm personally leaning towards a. That way we can reuse the detection code 
 and give help when an option is used that doesn't work.
 
 
 
 Me too, especially as the whole stack is involved, and qemu is the 
 topmost part from our perspective (no doubt libvirt will want to 
 integrate that functionality as well).
 
 FYI, libvirt already exposes this kind of functionality. The API call
 virConnectGetCapabilities() / command line virsh capabilities command
 tells you about what the virtualization host is able to support. It can
 tell you what architectures are supported, by which binaries. What
 machine types are available. Whether KVM or KQEMU acceleration are
 present. What CPU model / flags are on the host. What NUMA topology is
 available. etc etc 
 
 The data format it outputs, though, is not exactly targeted for direct
 end user consumption; rather it's an XML doc aimed at applications.
 The virt-manager app tries to use this to inform the user of problems
 such as hardware virt being available but not enabled.

Hrm, while I sympathize with the goals of libvirt and all the efforts in it, 
I'd like to see the stock qemu executable stay as user-friendly as possible. One 
of qemu's strong points has always been its really simple CLI.
So IMHO it rather belongs there, with libvirt querying qemu rather than the other 
way around.

Nevertheless, I suppose the code would be a pretty good starting point!

Alex


Re: KVM call agenda for Jan 26

2010-01-26 Thread Avi Kivity

On 01/26/2010 03:33 PM, Daniel P. Berrange wrote:



Me too, especially as the whole stack is involved, and qemu is the
topmost part from our perspective (no doubt libvirt will want to
integrate that functionality as well).
 

FYI, libvirt already exposes this kind of functionality. The API call
virConnectGetCapabilities() / command line virsh capabilities command
tells you about what the virtualization host is able to support. It can
tell you what architectures are supported, by which binaries. What
machine types are available. Whether KVM or KQEMU acceleration are
present. What CPU model / flags are on the host. What NUMA topology is
available. etc etc

   


Great.  Note that for a cpu flag to be usable in a guest, it needs to be 
supported by both kvm.ko and qemu, so reporting /proc/cpuinfo is 
insufficient.  There are also synthetic cpu flags (kvm paravirt 
features, x2apic) that aren't present in /proc/cpuinfo.



--
error compiling committee.c: too many arguments to function



Re: How to properly turn off guest VM on server shutdown?

2010-01-26 Thread Markus Breitländer
Hello Glennie,

On 26.01.2010 14:46, Glennie Vignarajah wrote:
 On 24/01/2010 around 18:16, in the message titled Re: How to properly turn 
 off guest VM on server shutdown?, Markus Breitländer (Markus Breitländer 
 breitlaen...@stud.fh-dortmund.de) wrote:
 
 Hi!
 
 Hello;
 
 Does anyone have sample scripts for this job?
 
 #!/bin/bash
 CONNECT_STRING=qemu:///system
 for MACHINE in $(virsh -c $CONNECT_STRING list | awk '/running$/ {print 
 $2}') ; do
   virsh -c $CONNECT_STRING shutdown $MACHINE
 done
 sleep 600
 
  This code will shut down all running guests with ACPI enabled.
  I haven't tested it under Windows 7, but under Win 2003, you have to modify:
 
   * Using regedit, go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows 
  NT\CurrentVersion\Windows and set the value ShutdownWarningDialogTimeout to 
  dword:0001 (this will force shutdown even if users are connected)
 
 
 AND
 
   * Go to Control Panel, admin tools and double-click Local security settings
   * Expand Local policies and click on Security Options (left window pane)
   * On the right side, locate Shutdown: Allow system to be shutdown... and 
  enable the option (this allows powering down from the ctrl-alt-del screen).
 
 HTH

I was thinking about a script that doesn't use virsh.

I would be interested in what commands virsh uses in its 'shutdown'
command...

I haven't been working on my own script yet. Up to now I have in mind
to use the qemu monitor command 'system_powerdown' and maybe ssh into Linux
boxes to get them down (but the latter is not really nice).
By the way, when testing manually, I experienced that you might want to
execute the 'system_powerdown' command twice in short succession
to get Windows machines down.
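A sketch of scripting that without virsh, assuming the guest was started with a monitor on a UNIX socket (e.g. `-monitor unix:/tmp/vm1.sock,server,nowait`; the path and the double-send usage are illustrative):

```python
import socket

def monitor_command(sock_path, cmd, timeout=1.0):
    """Send one human-monitor command to a QEMU -monitor UNIX socket
    and return whatever the monitor echoes back."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    try:
        s.sendall(cmd.encode() + b"\n")
        s.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = s.recv(4096)
                if not data:          # monitor closed the connection
                    break
                chunks.append(data)
        except socket.timeout:
            pass                      # no more output within the timeout
        return b"".join(chunks).decode(errors="replace")
    finally:
        s.close()

# Hypothetical usage -- sent twice, as Windows guests sometimes need:
# for _ in range(2):
#     monitor_command("/tmp/vm1.sock", "system_powerdown")
```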

Greets,
  Markus



Re: KVM call agenda for Jan 26

2010-01-26 Thread Anthony Liguori

On 01/26/2010 07:24 AM, Avi Kivity wrote:

On 01/26/2010 03:18 PM, Alexander Graf wrote:


The main question is where does it belong?

a) built into qemu
b) built as separate tool, but shipped with qemu
c) completely separate

I'm personally leaning towards a. That way we can reuse the detection 
code and give help when an option is used that doesn't work.




Me too, especially as the whole stack is involved, and qemu is the 
topmost part from our perspective (no doubt libvirt will want to 
integrate that functionality as well).


I'm not sure I agree.  It would use no code from qemu and really benefit 
in no way from being part of qemu.  I don't feel that strongly about it 
though.


Regards,

Anthony Liguori



Re: [PATCH] RFC: alias rework

2010-01-26 Thread Avi Kivity

On 01/25/2010 10:40 PM, Izik Eidus wrote:



Or is this a feature you need?
 


I don't need it (I asked Avi to do something), so he said he wants to nuke the
aliasing from kvm and keep supporting the old userspaces.

Do you have any other way to achieve this?

Btw, I do realize it might be better not to push this patch and just keep the old
way of treating aliasing as we have now; I really don't mind.
   


How about implementing an alias pointing at a deleted slot as an invalid 
slot?


If the slot comes back later, we can revalidate it.

--
error compiling committee.c: too many arguments to function



Re: KVM call agenda for Jan 26

2010-01-26 Thread Avi Kivity

On 01/26/2010 04:13 PM, Anthony Liguori wrote:
Me too, especially as the whole stack is involved, and qemu is the 
topmost part from our perspective (no doubt libvirt will want to 
integrate that functionality as well).



I'm not sure I agree.  It would use no code from qemu and really 
benefit in no way from being part of qemu.  I don't feel that strongly 
about it though.




It would need to know which cpuid bits qemu supports.  Only qemu knows that.

--
error compiling committee.c: too many arguments to function



Re: [PATCH] RFC: alias rework

2010-01-26 Thread Izik Eidus
On Tue, 26 Jan 2010 16:14:47 +0200
Avi Kivity a...@redhat.com wrote:

 On 01/25/2010 10:40 PM, Izik Eidus wrote:
 
  Or is this a feature you need?
   
 
  I don't need it (I asked Avi to do something), so he said he wants to nuke 
  the aliasing
  from kvm and keep supporting the old userspaces
 
  Do you have any other way to achieve this?
 
  Btw I do realize it might be better not to push this patch and just keep 
  the old
  way of treating aliasing as we have now; I really don't mind.
 
 
 How about implementing an alias pointing at a deleted slot as an invalid 
 slot?
 
 If the slot comes back later, we can revalidate it.
 

OK, I didn't notice this invalid memslot flag;
I will add this. I will still leave update_aliased_memslot()
in order to update the userspace virtual address...


Re: KVM call agenda for Jan 26

2010-01-26 Thread Anthony Liguori

On 01/26/2010 08:26 AM, Avi Kivity wrote:

On 01/26/2010 04:22 PM, Anthony Liguori wrote:

On 01/26/2010 08:15 AM, Avi Kivity wrote:

On 01/26/2010 04:13 PM, Anthony Liguori wrote:
Me too, especially as the whole stack is involved, and qemu is the 
topmost part from our perspective (no doubt libvirt will want to 
integrate that functionality as well).



I'm not sure I agree.  It would use no code from qemu and really 
benefit in no way from being part of qemu.  I don't feel that 
strongly about it though.




It would need to know which cpuid bits qemu supports.  Only qemu 
knows that.


I'm not sure I understand why.  Can you elaborate?



If qemu doesn't recognize -cpu qemu64,+nx, then no amount of hardware 
and kvm.ko support will allow the user to enable nx in a guest.


Does -cpu host filter out flags that we don't know about?  I'm pretty 
sure it doesn't.  Since we're planning on moving to -cpu host by default 
for KVM, does it really matter?


Oh, I was under the impression that the tool was meant to be software 
agnostic.  IOW, here are all the virt features your hardware supports.


Regards,

Anthony Liguori



[PATCH/RFC] KVM: Plan obsolescence of kernel allocated slots, paravirt mmu

2010-01-26 Thread Avi Kivity
These features are unused by modern userspace and can go away.  Paravirt
mmu needs to stay a little longer for live migration.

Signed-off-by: Avi Kivity a...@redhat.com
---
 Documentation/feature-removal-schedule.txt |   23 +++
 1 files changed, 23 insertions(+), 0 deletions(-)

diff --git a/Documentation/feature-removal-schedule.txt 
b/Documentation/feature-removal-schedule.txt
index 870d190..88ca110 100644
--- a/Documentation/feature-removal-schedule.txt
+++ b/Documentation/feature-removal-schedule.txt
@@ -493,3 +493,26 @@ Why:   These two features use non-standard interfaces. 
There are the
 Who:   Corentin Chary corentin.ch...@gmail.com
 
 
+What:  KVM memory aliases support
+When:  July 2010
+Why:   Memory aliasing support is used for speeding up guest vga access
+   through the vga windows.
+
+   Modern userspace no longer uses this feature, so it's just bitrotted
+   code and can be removed with no impact.
+Who:   Avi Kivity a...@redhat.com
+
+What:  KVM kernel-allocated memory slots
+When:  July 2010
+Why:   Since 2.6.25, kvm supports user-allocated memory slots, which are
+   much more flexible than kernel-allocated slots.  All current userspace
+   supports the newer interface and this code can be removed with no
+   impact.
+Who:   Avi Kivity a...@redhat.com
+
+What:  KVM paravirt mmu host support
+When:  January 2011
+Why:   The paravirt mmu host support is slower than non-paravirt mmu, both
+   on newer and older hardware.  It is already not exposed to the guest,
+   and kept only for live migration purposes.
+Who:   Avi Kivity a...@redhat.com
-- 
1.6.5.3



Re: KVM call agenda for Jan 26

2010-01-26 Thread Avi Kivity

On 01/26/2010 04:32 PM, Anthony Liguori wrote:
It would need to know which cpuid bits qemu supports.  Only qemu 
knows that.


I'm not sure I understand why.  Can you elaborate?



If qemu doesn't recognize -cpu qemu64,+nx, then no amount of hardware 
and kvm.ko support will allow the user to enable nx in a guest.



Does -cpu host filter out flags that we don't know about?  I'm pretty 
sure it doesn't.  Since we're planning on moving to -cpu host by 
default for KVM, does it really matter?


People who use discovery tools are probably setting up a migration 
cluster.  They aren't going to use -cpu host.




Oh, I was under the impression that the tool was meant to be software 
agnostic.  IOW, here are all the virt features your hardware supports.


That's /proc/cpuinfo, we should just extend it, maybe that's what Alex 
meant, but I'd like to see something more capable.


--
error compiling committee.c: too many arguments to function



Re: KVM call agenda for Jan 26

2010-01-26 Thread Alexander Graf

On 26.01.2010, at 15:37, Avi Kivity wrote:

 On 01/26/2010 04:32 PM, Anthony Liguori wrote:
 It would need to know which cpuid bits qemu supports.  Only qemu knows 
 that.
 
 I'm not sure I understand why.  Can you elaborate?
 
 
 If qemu doesn't recognize -cpu qemu64,+nx, then no amount of hardware and 
 kvm.ko support will allow the user to enable nx in a guest.
 
 
 Does -cpu host filter out flags that we don't know about?  I'm pretty sure 
 it doesn't.  Since we're planning on moving to -cpu host by default for KVM, 
 does it really matter?
 
 People who use discovery tools are probably setting up a migration cluster.  
 They aren't going to use -cpu host.
 
 
 Oh, I was under the impression that the tool was meant to be software 
 agnostic.  IOW, here are all the virt features your hardware supports.
 
 That's /proc/cpuinfo, we should just extend it, maybe that's what Alex meant, 
 but I'd like to see something more capable.

I think we're all looking at different use-cases.

First and foremost, the one type of user I'm concerned with in this case is a 
mortal end-user who doesn't know that much about virtualization details and 
doesn't care what NPT is. He just wants to have a VM running and wants to know 
how well it'll work.

For such a user an addition to /proc/cpuinfo would be enough, if it'd include 
IOMMU information. Or maybe /proc/iommu?
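For the CPU side, such a simple check already reduces to reading /proc/cpuinfo for the vmx (Intel VT-x) / svm (AMD-V) bits; a sketch (the IOMMU part, as noted above, has no such counterpart today):

```python
def hw_virt_flags(cpuinfo_text):
    """Return which hardware-virt feature bits ('vmx' for Intel VT-x,
    'svm' for AMD-V) appear in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return {f for f in ("vmx", "svm") if f in flags}
    return set()

# On a real host:
#     with open("/proc/cpuinfo") as f:
#         print(hw_virt_flags(f.read()))
SAMPLE = "processor\t: 0\nflags\t\t: fpu vme svm lahf_lm\n"
print(hw_virt_flags(SAMPLE))  # {'svm'}
```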

I think users should be able to run some simple command to evaluate if what 
they're trying to do works out. And if not, the command should give assistance 
on how to make things work (buy a new mainboard, set this kernel option, ...)

Of course one could fit in stuff for management tools too, but that's not my 
main goal for this feature.

Alex


Re: KVM call agenda for Jan 26

2010-01-26 Thread Anthony Liguori

On 01/26/2010 08:37 AM, Avi Kivity wrote:
People who use discovery tools are probably setting up a migration 
cluster.  They aren't going to use -cpu host.


BTW, it might be neat to introduce a qemu command line option that runs a 
monitor command and exits without creating a VM.  We could then 
introduce an 'info cpucap' command that dumped all of the supported CPU 
features.


Someone setting up a migration cluster would then run qemu -monitor 
command=info cpucap, collect the results, compute an intersection, and 
then use that to generate a -cpu flag.  In fact, providing a tool that 
parsed a bunch of those outputs and generated a -cpu flag would be a 
pretty nice addition.
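The intersection step described above might look like this; 'info cpucap' and its output format are hypothetical, so the sketch just assumes each host's supported feature names were already collected into a set:

```python
def common_cpu_arg(host_flag_sets, base="qemu64"):
    """Build a -cpu argument enabling only the features that every
    host in a migration cluster reports as supported."""
    common = set.intersection(*host_flag_sets)
    return base + "".join(",+%s" % f for f in sorted(common))

hosts = [
    {"nx", "sse3", "cx16"},    # host A
    {"nx", "sse3"},            # host B
    {"sse3", "nx", "popcnt"},  # host C
]
print(common_cpu_arg(hosts))  # qemu64,+nx,+sse3
```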




Oh, I was under the impression that the tool was meant to be software 
agnostic.  IOW, here are all the virt features your hardware supports.


That's /proc/cpuinfo, we should just extend it, maybe that's what Alex 
meant, but I'd like to see something more capable.


I definitely think extending /proc/cpuinfo or introducing a 
/proc/virtinfo would be a good idea regardless of any tool we introduce.


Regards,

Anthony Liguori



Re: KVM call agenda for Jan 26

2010-01-26 Thread Avi Kivity

On 01/26/2010 04:42 PM, Anthony Liguori wrote:

On 01/26/2010 08:37 AM, Avi Kivity wrote:
People who use discovery tools are probably setting up a migration 
cluster.  They aren't going to use -cpu host.


BTW, it might be neat to introduce a qemu command line that runs a 
monitor command and exits without creating a VM.  We could then 
introduce a info cpucap command that dumped all of the supported CPU 
features.


Someone setting up a migration cluster would then run qemu -monitor 
command=info cpucap, collect the results, compute an intersection, 
and then use that to generate a -cpu flag.  In fact, providing a tool 
that parsed a bunch of those outputs and generated a -cpu flag would 
be a pretty nice addition.


Definitely.  And query about supported machine models virtio NIC 
features, etc.




--
error compiling committee.c: too many arguments to function



Re: KVM call agenda for Jan 26

2010-01-26 Thread Avi Kivity

On 01/26/2010 04:42 PM, Alexander Graf wrote:



That's /proc/cpuinfo, we should just extend it, maybe that's what Alex meant, 
but I'd like to see something more capable.
 

I think we're all looking at different use-cases.

First and foremost, the one type of user I'm concerned with in this case is a 
mortal end-user who doesn't know that much about virtualization details and 
doesn't care what NPT is. He just wants to have a VM running and wants to know 
how well it'll work.
   


It really depends on what he does with it.  3D gaming? might have a 
different experience from the always exciting kernel builds.


--
error compiling committee.c: too many arguments to function



Re: KVM call agenda for Jan 26

2010-01-26 Thread Alexander Graf

On 26.01.2010, at 15:47, Avi Kivity wrote:

 On 01/26/2010 04:42 PM, Alexander Graf wrote:
 
 That's /proc/cpuinfo, we should just extend it, maybe that's what Alex 
 meant, but I'd like to see something more capable.
 
 I think we're all looking at different use-cases.
 
 First and foremost, the one type of user I'm concerned with in this case is 
 a mortal end-user who doesn't know that much about virtualization details 
 and doesn't care what NPT is. He just wants to have a VM running and wants 
 to know how well it'll work.
   
 
 It really depends on what he does with it.  3D gaming? might have a different 
 experience from the always exciting kernel builds.

Well, we can give an estimation (based on previous measurements) for certain 
subsystems. Like I proposed in the original mail, we can actually give users 
information about virtual CPU speed.

With SPICE hopefully merged one day we also could give some estimates on 3D 
performance.


Alex


Re: KVM call agenda for Jan 26

2010-01-26 Thread Anthony Liguori

On 01/26/2010 08:50 AM, Alexander Graf wrote:

On 26.01.2010, at 15:47, Avi Kivity wrote:

   

On 01/26/2010 04:42 PM, Alexander Graf wrote:
 
   

That's /proc/cpuinfo, we should just extend it, maybe that's what Alex meant, 
but I'd like to see something more capable.

 

I think we're all looking at different use-cases.

First and foremost, the one type of user I'm concerned with in this case is a 
mortal end-user who doesn't know that much about virtualization details and 
doesn't care what NPT is. He just wants to have a VM running and wants to know 
how well it'll work.

   

It really depends on what he does with it.  3D gaming? might have a different 
experience from the always exciting kernel builds.
 

Well, we can give an estimation (based on previous measurements) for certain 
subsystems. Like I proposed in the original mail, we can actually give users 
information about virtual CPU speed.
   


The problem with making an unqualified statement about something like 
virtual CPU speed is that if a user runs a random benchmark, and gets 
less than XX%, they'll consider it a bug and be unhappy.


I'm very reluctant to take anything in QEMU that makes promises about 
virtualization performance.  It's a bad idea IMHO.



With SPICE hopefully merged one day we also could give some estimates on 3D 
performance.
   


Spice doesn't support 3D today.

Regards,

Anthony Liguori


Alex




Re: How to properly turn off guest VM on server shutdown?

2010-01-26 Thread Kenni Lund
2010/1/26 Markus Breitländer breitlaen...@stud.fh-dortmund.de:
 Hello Glennie,

 On 26.01.2010 14:46, Glennie Vignarajah wrote:
 On 24/01/2010 around 18:16, in the message titled Re: How to properly turn
 off guest VM on server shutdown?, Markus Breitländer (Markus Breitländer
 breitlaen...@stud.fh-dortmund.de) wrote:

 Hi!

 Hello;

 Does anyone have sample scripts for this job?

 #!/bin/bash
 CONNECT_STRING=qemu:///system
 for MACHINE in $(virsh -c $CONNECT_STRING list | awk '/running$/ {print
 $2}') ; do
   virsh -c $CONNECT_STRING shutdown $MACHINE
 done
 sleep 600

  This code will shut down all running guests with ACPI enabled.
  I haven't tested it under Windows 7, but under Win 2003, you have to modify:

   * Using regedit, go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
  NT\CurrentVersion\Windows and set the value ShutdownWarningDialogTimeout to
  dword:0001 (this will force shutdown even if users are connected)


 AND

   * Go to Control Panel, admin tools and double-click Local security 
  settings
   * Expand Local policies and click on Security Options (left window pane)
   * On the right side, locate Shutdown: Allow system to be shutdown... and
  enable the option (this allows powering down from the ctrl-alt-del screen).

 HTH

 I was thinking about a script that doesn't use virsh.

I suppose you can start your guest with
-monitor unix:/${SOCKETFILE},server,nowait

and then do something like:
socat - unix-connect:${SOCKETFILE} <<EOF
system_powerdown
EOF

Best Regards
Kenni Lund


Re: TPM Support in KVM

2010-01-26 Thread Chris Wright
* Martin Schneider (martincschnei...@googlemail.com) wrote:
 Dear list,
 
 is there a document that describes the level of support of trusted
 computing technology in KVM and how things work?

There's host level trusted boot, which simply needs tboot and a new
enough kernel to support CONFIG_TXT.

 I read in various sources that KVM should support virtual Trusted
 Platform Modules in virtual machines but I couldn't find any evidence
 and/or documentation about this on the official site.

No vTPM is currently supported.

thanks,
-chris


Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest image_copy

2010-01-26 Thread Lucas Meneghel Rodrigues
Yolkfull, I am copying Michael and Lawrence on the e-mail so they can
comment on the points I am going to present.

I've been meaning to comment on this test for a while, but refrained from
doing so because I wanted to think a little about using image copy as an
alternate install method. Here are my thoughts:

1) Provided that one group has the ready images, indeed an explicit
image_copy test is a useful test, because it's virtually free of
failures.

2) However, operating system install is a fairly complex activity, and
it might reveal KVM bugs, so I believe the actual install step is *very*
valuable for KVM testing.

3) I do understand that the whole point of having this image_copy test
is to make sure we always execute the subsequent functional tests. But
that could be better done, considering 2), if we execute image_copy only
when unattended_install fails. Then we'd have to figure out a way to
express reverse dependencies in the framework.

4) Instead of having an explicit image_copy test we could do image copy
as a postprocessing step, in case of failure of unattended install, and
remove the unattended install dependency from the subsequent tests.

My conclusion is that having this as an extra test won't be a problem
(after all we can always choose to not run it on our test sets), but
your team could look at the possibility of first trying to do the actual
OS install, then resorting to image copy only if the test fails.

That said, the actual review of the test follows:

On Wed, 2010-01-06 at 11:32 +0800, Yolkfull Chow wrote:
 Add image_copy subtest for convenient KVM functional testing.
 
 The target image will be copied into the linked directory if link 'images'
 is created, and copied to the directory specified in config file otherwise.
 
 Signed-off-by: Yolkfull Chow yz...@redhat.com
 ---
  client/tests/kvm/kvm_utils.py  |   64 
 
  client/tests/kvm/tests/image_copy.py   |   42 +
  client/tests/kvm/tests_base.cfg.sample |6 +++
  3 files changed, 112 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/kvm/tests/image_copy.py
 
 diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
 index 2bbbe22..3944b2b 100644
 --- a/client/tests/kvm/kvm_utils.py
 +++ b/client/tests/kvm/kvm_utils.py
 @@ -924,3 +924,67 @@ def create_report(report_dir, results_dir):
  reporter = os.path.join(report_dir, 'html_report.py')
  html_file = os.path.join(results_dir, 'results.html')
  os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
 +
 +
 +def is_dir_mounted(source, dest, type, perm):
 +    """
 +    Check whether `source' is mounted on `dest' with right permission.
 +
 +    @source: mount source
 +    @dest:   mount point
 +    @type:   file system type
 +    """
 +    match_string = "%s %s %s %s" % (source, dest, type, perm)
 +    try:
 +        f = open("/etc/mtab", "r")
 +        mounted = f.read()
 +        f.close()
 +    except IOError:
 +        mounted = commands.getoutput("mount")
 +    if match_string in mounted:
 +        return True
 +    return False

Except for checking permissions, the above could be replaced by just
using os.path.ismount(). Considering that we are only going to use the
nfs share for copying images from there, it really doesn't matter
whether it is read-only or read-write, so perhaps os.path.ismount()
is enough.
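A minimal sketch of that suggestion; note os.path.ismount() only answers whether *something* is mounted at the path, not what or with which permissions:

```python
import os
import tempfile

def is_dir_mounted(dest):
    # Ask the kernel whether `dest` is a mount point instead of
    # string-matching /etc/mtab; source/type/perm checks are dropped.
    return os.path.ismount(dest)

print(is_dir_mounted("/"))                 # True: / is always a mount point
print(is_dir_mounted(tempfile.mkdtemp()))  # False: a fresh temp dir is not
```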

 +
 +
 +def umount(mount_point):
 +    """
 +    Umount `mount_point'.
 +
 +    @mount_point: mount point
 +    """
 +    cmd = "umount %s" % mount_point
 +    s, o = commands.getstatusoutput(cmd)
 +    if s != 0:
 +        logging.error("Fail to umount: %s" % o)
 +        return False
 +    return True

Let's avoid using the commands API when we have utils.run or
utils.system instead. Wrap it in a try/except block like:

try:
    utils.run("umount %s" % mount_point)
    return True
except CmdError, e:
    logging.error("Fail to unmount %s: %s", mount_point, str(e))
    return False

 +
 +def mount(src, mount_point, type, perm="rw"):
 +    """
 +    Mount the src into mount_point of the host.
 +
 +    @src: mount source
 +    @mount_point: mount point
 +    @type: file system type
 +    @perm: mount permission
 +    """
 +    if is_dir_mounted(src, mount_point, type, perm):
 +        return True

Please see comment on os.path.ismount().

 +    umount(mount_point)
 +
 +    cmd = "mount -t %s %s %s -o %s" % (type, src, mount_point, perm)
 +    logging.debug("Issue mount command: %s" % cmd)
 +    s, o = commands.getstatusoutput(cmd)
 +    if s != 0:
 +        logging.error("Fail to mount: %s" % o)
 +        return False

Please use utils.system() and a try/except block.

 +    if is_dir_mounted(src, mount_point, type, perm):
 +        logging.info("Successfully mounted %s" % src)
 +        return True
 +    else:
 +        logging.error("Mount verification failed; currently mounted: %s" %
 +                      file('/etc/mtab').read())
 +        return False
 diff 

IOMMU and VFs

2010-01-26 Thread Yevgeny Petrilin
Hello,

I have the following issue:
Our HW device is SR-IOV capable. I have tried passing through one of its Virtual 
Functions to a guest OS, running RH5.4 on both the guest and the host.
The function is seen by lspci on the guest; however, when the HW tries to DMA 
to the guest with the BDF of the VF, it sees all 0's.
The same operation succeeds when the virtual device runs on the host.
I have VT-d enabled in BIOS.
Is there is anything else I need to configure?

Thanks,
Yevgeny




Re: IOMMU and VFs

2010-01-26 Thread Chris Wright
* Yevgeny Petrilin (yevge...@mellanox.co.il) wrote:
 Hello,
 
 I have the following issue:
 Our HW device is SRIOV capable, I have tried passing through one of its 
 Virtual Functions to a guest OS Running RH5.4 on the guest and the host.
 The function is seen by lspci on the guest, however, when the HW tries to 
 DMA the guest with the BDF of the VF, it sees all 0's.
 Same operation succeeds when the virtual device runs on the host.
 I have VT-d enabled in BIOS.

Which kvm/qemu-kvm version?  Newer ones will verify that you actually
have VT-d fully enabled.

Can you do 'dmesg | grep -e DMAR -e IOMMU' in the host and then we can
verify that the IOMMU is functioning.

 Is there is anything else I need to configure?

You may need to configure VT-d support in your kernel, or manually
enable it on the kernel commandline (intel_iommu=on).  Oh...I see,
RHEL5.4 host, yeah, you need intel_iommu=on.

thanks,
-chris


Re: regression between 0.12.1.2 and 0.12.2

2010-01-26 Thread Jan Kiszka
Jan Kiszka wrote:
 Toralf Förster wrote:
 Hi,

 under a mostly stable Gentoo I observed this new msg :

 tfoer...@n22 ~/virtual/kvm $ qemu -hda gentoo_kdevm.img -hdb 
 portage_kdeprefix.img -hdd swap.img -smp 2 -m 768 -vga std -soundhw es1370   
  

 BUG: kvm_dirty_pages_log_enable_slot: invalid parameters 
 
 BUG: kvm_dirty_pages_log_disable_slot: invalid parameters
 
 ..

 The kvm image can be derived from 
 http://dev.gentooexperimental.org/~wired/kvm/ .

 My system is a :
 tfoer...@n22 ~/virtual/kvm $ uname -a
 Linux n22 2.6.32.4 #1 SMP Mon Jan 18 20:20:38 CET 2010 i686 Intel(R) 
 Core(TM)2 Duo CPU P8600 @ 2.40GHz GenuineIntel GNU/Linux


 
 That's a pre-0.12.1.2 qemu-kvm issue, upstream is not affected - or is
 at least not reporting it. It's already in my todo queue, just waiting
 to be dequeued.

I've looked into this a bit, and the bug message that pops up is in fact
new for your scenario (0.12.1.2->0.12.2, -vga std); it just happens to
trigger for me as well in a slightly different setup (CONFIG_FB_CIRRUS).

This is mostly harmless (the bug is gracefully handled), indicating
that qemu-kvm tries to enable/disable dirty logging for a VGA memory
area that was just unregistered. And that is because qemu-kvm tries to
keep support for old host kernels that had bugs and required workaround
approaches, but that code is bit-rotting a bit.

Avi, we should get rid of these messages, either by suppressing them in
qemu-kvm for now (stick your head into the sand...) or by finally
dropping all those dirty-logging diffs to upstream (in theory, there is
a third option: fixing the workaround code, but I don't think it's worth
the effort). What could be a road map for dropping? What distro kernels
are you aware of that may become unusable then?

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


RE: IOMMU and VFs

2010-01-26 Thread Yevgeny Petrilin

 Which kvm/qemu-kvm version?  Newer ones will verify that you actually have 
 VT-d fully enabled.
kvm module version is: kvm-83-105.el5_4.13 (from modinfo)

 Can you do 'dmesg | grep -e DMAR -e IOMMU' in the host and then we can verify 
 that the IOMMU is functioning.
dmesg output:
ACPI: DMAR (v001AMI  OEMDMAR 0x0001 MSFT 0x0097) @ 
0xbf79e0c0
DMAR:Host address width 40
DMAR:DRHD (flags: 0x0001)base: 0xfbffe000
DMAR:RMRR base: 0x000ec000 end: 0x000e
DMAR:RMRR base: 0xbf7ec000 end: 0xbf7f
DMAR:Unknown DMAR structure type

 You may need to configure VT-d support in your kernel, or manually enable it 
 on the kernel commandline (intel_iommu=on).  Oh...I see,
 RHEL5.4 host, yeah, you need intel_iommu=on.

That did not help.

Thanks,
Yevgeny


Re: IOMMU and VFs

2010-01-26 Thread Chris Wright
* Yevgeny Petrilin (yevge...@mellanox.co.il) wrote:
 
  Which kvm/qemu-kvm version?  Newer ones will verify that you actually have 
  VT-d fully enabled.
 kvm module version is :kvm-83-105.el5_4.13 (from modinfo)
 
  Can you do 'dmesg | grep -e DMAR -e IOMMU' in the host and then we can 
  verify that the IOMMU is functioning.
 dmesg output:
 ACPI: DMAR (v001AMI  OEMDMAR 0x0001 MSFT 0x0097) @ 
 0xbf79e0c0
 DMAR:Host address width 40
 DMAR:DRHD (flags: 0x0001)base: 0xfbffe000
 DMAR:RMRR base: 0x000ec000 end: 0x000e
 DMAR:RMRR base: 0xbf7ec000 end: 0xbf7f
 DMAR:Unknown DMAR structure type
 
  You may need to configure VT-d support in your kernel, or manually enable 
  it on the kernel commandline (intel_iommu=on).  Oh...I see,
  RHEL5.4 host, yeah, you need intel_iommu=on.
 
 That did not help.

Please enter a BZ at bugzilla.redhat.com

thanks,
-chris


Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-26 Thread Lucas Meneghel Rodrigues
On Tue, 2010-01-26 at 11:25 +0800, Yolkfull Chow wrote:
 This is designed to test all subcommands of 'qemu-img'; however, so far
 'commit' is not implemented.

Hi Yolkfull, this is very good! Seeing this test made me think about that
stand-alone autotest module we committed a while ago, which runs the
qemu_iotests testsuite on the host.

Perhaps we could 'port' this module to the kvm test, since it is more
convenient to execute it inside a kvm test job (in a job where we test
more than 2 build types, for example, we need to execute qemu_img and
qemu_io_tests for every qemu-img binary built).

Could you look at implementing this?

 * For the 'check' subcommand test, it will use 'dd' to create a file with a
 specified size and see whether it can be checked. Then it converts the file
 to the supported formats (qcow2 and raw so far) to see whether there are
 errors after conversion.
 
 * For the 'convert' subcommand test, it will convert to both 'qcow2' and
 'raw' from the format specified in the config file, and only check 'qcow2'
 after conversion.
 
 * For the 'snapshot' subcommand test, it will create two snapshots and list
 them, finally deleting them if no errors are found.
 
 * For the 'info' subcommand test, it simply gets output from the specified
 image file.
 
 Signed-off-by: Yolkfull Chow yz...@redhat.com
 ---
  client/tests/kvm/tests/qemu_img.py |  155 
 
  client/tests/kvm/tests_base.cfg.sample |   36 
  2 files changed, 191 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/kvm/tests/qemu_img.py
 
 diff --git a/client/tests/kvm/tests/qemu_img.py 
 b/client/tests/kvm/tests/qemu_img.py
 new file mode 100644
 index 000..1ae04f0
 --- /dev/null
 +++ b/client/tests/kvm/tests/qemu_img.py
@@ -0,0 +1,155 @@
+import os, logging, commands
+from autotest_lib.client.common_lib import error
+import kvm_vm
+
+
+def run_qemu_img(test, params, env):
+    """
+    `qemu-img' functions test:
+    1) Judge what subcommand is going to be tested
+    2) Run subcommand test
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    cmd = params.get("qemu_img_binary")

It is a good idea to verify whether cmd above resolves to an absolute path,
to avoid problems. If it doesn't, check whether there's a symbolic link
under the kvm test dir pointing to qemu-img, and if it exists, make sure it
points to a valid file (i.e., the symlink is not broken).
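
The binary-resolution check suggested above could look roughly like this; `resolve_qemu_img` and its error handling are hypothetical names for illustration, not code from the patch:

```python
import os, tempfile

def resolve_qemu_img(cmd, bindir):
    """Resolve the qemu-img binary the way the review suggests: make the
    path absolute, then reject missing files and broken symlinks.
    (Hypothetical helper; names are not from the patch under review.)"""
    if not os.path.isabs(cmd):
        cmd = os.path.join(bindir, cmd)
    # lexists() is true for a symlink whose target is gone, exists() is not
    if os.path.lexists(cmd) and not os.path.exists(cmd):
        raise ValueError("broken symlink: %s" % cmd)
    if not os.path.isfile(cmd):
        raise ValueError("qemu-img binary not found: %s" % cmd)
    return os.path.realpath(cmd)

# demo with a stand-in file in a temporary "kvm test dir"
bindir = tempfile.mkdtemp()
fake = os.path.join(bindir, "qemu-img")
open(fake, "w").close()
print(resolve_qemu_img("qemu-img", bindir))
```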

+    subcommand = params.get("subcommand")
+    image_format = params.get("image_format")
+    image_name = kvm_vm.get_image_filename(params, test.bindir)
+
+    def check(img):
+        global cmd
+        cmd += " check %s" % img
+        logging.info("Checking image '%s'..." % img)
+        s, o = commands.getstatusoutput(cmd)
+        if not (s == 0 or "does not support checks" in o):
+            return (False, o)
+        return (True, "")

Please use utils.system_output here instead of the equivalent commands
API in the above code. This comment applies to all further uses of
commands.[function].
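
For illustration, the shape of the suggested replacement (a command runner that raises on failure rather than returning a status tuple) can be sketched as follows; `subprocess` stands in here for autotest's `utils.system_output`, whose exact signature should be checked against the autotest tree:

```python
# In the actual test you would instead do something like:
#   from autotest_lib.client.common_lib import utils
#   output = utils.system_output(cmd)
import subprocess

def system_output(cmd):
    """Minimal stand-in: run cmd via the shell, raise on nonzero
    exit status, and return stdout with trailing whitespace stripped."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError("command failed: %s\n%s" % (cmd, proc.stderr))
    return proc.stdout.strip()

print(system_output("echo qcow2"))
```

The win over `commands.getstatusoutput` is that failures surface as exceptions the test harness can report, instead of status codes that each call site must remember to check.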

+
+    # Subcommand 'qemu-img check' test
+    # This tests will 'dd' to create a specified size file, and check it.
+    # Then convert it to supported image_format in each loop and check again.
+    def check_test():
+        size = params.get("dd_image_size")
+        test_image = params.get("dd_image_name")
+        create_image_cmd = params.get("create_image_cmd")
+        create_image_cmd = create_image_cmd % (test_image, size)
+        s, o = commands.getstatusoutput(create_image_cmd)
+        if s != 0:
+            raise error.TestError("Failed command: %s; Output is: %s" %
+                                  (create_image_cmd, o))
+        s, o = check(test_image)
+        if not s:
+            raise error.TestFail("Failed to check image '%s' with error: %s" %
+                                 (test_image, o))
+        for fmt in params.get("supported_image_formats").split():
+            output_image = test_image + ".%s" % fmt
+            convert(fmt, test_image, output_image)
+            s, o = check(output_image)
+            if not s:
+                raise error.TestFail("Check image '%s' got error: %s" %
+                                     (output_image, o))
+            commands.getoutput("rm -f %s" % output_image)
+        commands.getoutput("rm -f %s" % test_image)
+    # Subcommand 'qemu-img create' test
+    def create_test():
+        global cmd

I don't like this idea of using a global variable very much; instead it
would be preferable to use a class and keep 'cmd' as a class attribute.
This way it would be safer, since the usage of cmd is encapsulated. This
comment applies to all further usage of 'global cmd'.
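
The class-based refactoring suggested here might be sketched as below; `QemuImgTest` and its methods are hypothetical, showing only how the base command can live in an instance attribute rather than a global:

```python
import shlex

class QemuImgTest(object):
    """Sketch of the suggested refactoring: the qemu-img binary path is
    an instance attribute, and each subcommand builds a fresh command
    line instead of appending to a shared global 'cmd'. (Hypothetical
    class; only the idea comes from the review comment.)"""

    def __init__(self, binary):
        self.binary = binary

    def _cmdline(self, subcommand, *args):
        # a fresh string every call -- no state leaks between subtests
        return " ".join([self.binary, subcommand] +
                        [shlex.quote(a) for a in args])

    def check(self, img):
        return self._cmdline("check", img)

    def create(self, img, fmt="qcow2", encrypted=False):
        opts = ["-f", fmt] + (["-e"] if encrypted else [])
        return self._cmdline("create", *opts, img)

img_test = QemuImgTest("qemu-img")
print(img_test.check("disk.img"))
```

Because every call returns a new string, running `check` after `create` cannot accidentally reuse options left over from the previous subcommand, which is exactly the hazard of the `global cmd` pattern.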

+        cmd += " create"
+        if params.get("encrypted") == "yes":
+            cmd += " -e"
+        if params.get("base_image"):
 +cmd +=  

[PATCH] KVM test: Fix setup stage of windows hosts

2010-01-26 Thread Lucas Meneghel Rodrigues
Most of the unattended windows install files added a
static ip network connection before reporting the end of
the install. That is fine when you are using user mode
networking only, but it is certainly restrictive,
and will give problems on subsequent tests if you are
using tap mode. Let's do all network configuration
through DHCP instead.

Tested and it works just fine for all versions of windows.

Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
 client/tests/kvm/unattended/win2003-32.sif |2 +-
 client/tests/kvm/unattended/win2003-64.sif |2 +-
 .../kvm/unattended/win2008-32-autounattend.xml |2 +-
 .../kvm/unattended/win2008-64-autounattend.xml |2 +-
 .../kvm/unattended/win2008-r2-autounattend.xml |2 +-
 .../tests/kvm/unattended/win7-32-autounattend.xml  |2 +-
 .../tests/kvm/unattended/win7-64-autounattend.xml  |2 +-
 .../kvm/unattended/winvista-32-autounattend.xml|2 +-
 .../kvm/unattended/winvista-64-autounattend.xml|2 +-
 client/tests/kvm/unattended/winxp32.sif|2 +-
 10 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/client/tests/kvm/unattended/win2003-32.sif 
b/client/tests/kvm/unattended/win2003-32.sif
index 5b9bf0e..901f435 100644
--- a/client/tests/kvm/unattended/win2003-32.sif
+++ b/client/tests/kvm/unattended/win2003-32.sif
@@ -60,4 +60,4 @@ Command0=cmd /c sc config TlntSvr start= auto
 Command1=cmd /c netsh firewall set opmode disable
 Command2=cmd /c net start telnet
 Command3=cmd /c E:\setuprss.bat
-Command4=cmd /c netsh interface ip set address local static 10.0.2.15 255.255.255.0 10.0.2.2 1 && ping 10.0.2.2 -n 20 && A:\finish.exe 10.0.2.2
+Command4=cmd /c netsh interface ip set address local dhcp && ping 10.0.2.2 -n 20 && A:\finish.exe 10.0.2.2
diff --git a/client/tests/kvm/unattended/win2003-64.sif 
b/client/tests/kvm/unattended/win2003-64.sif
index aca24fe..9f09033 100644
--- a/client/tests/kvm/unattended/win2003-64.sif
+++ b/client/tests/kvm/unattended/win2003-64.sif
@@ -59,4 +59,4 @@ Command0=cmd /c sc config TlntSvr start= auto
 Command1=cmd /c netsh firewall set opmode disable
 Command2=cmd /c net start telnet
 Command3=cmd /c E:\setuprss.bat
-Command4=cmd /c netsh interface ip set address local static 10.0.2.15 255.255.255.0 10.0.2.2 1 && ping 10.0.2.2 -n 20 && A:\finish.exe 10.0.2.2
+Command4=cmd /c netsh interface ip set address local dhcp && ping 10.0.2.2 -n 20 && A:\finish.exe 10.0.2.2
diff --git a/client/tests/kvm/unattended/win2008-32-autounattend.xml 
b/client/tests/kvm/unattended/win2008-32-autounattend.xml
index 0498e99..d8f7654 100644
--- a/client/tests/kvm/unattended/win2008-32-autounattend.xml
+++ b/client/tests/kvm/unattended/win2008-32-autounattend.xml
@@ -117,7 +117,7 @@
                 </SynchronousCommand>
                 <SynchronousCommand wcm:action="add">
                     <Order>6</Order>
-                    <CommandLine>%WINDIR%\System32\cmd /c netsh interface ip set address "Local Area Connection" static 10.0.2.15 255.255.255.0 10.0.2.2 1 &#38;&#38; ping 10.0.2.2 -n 20 &#38;&#38; A:\finish.exe 10.0.2.2</CommandLine>
+                    <CommandLine>%WINDIR%\System32\cmd /c netsh interface ip set address "Local Area Connection" dhcp &#38;&#38; ping 10.0.2.2 -n 20 &#38;&#38; A:\finish.exe 10.0.2.2</CommandLine>
                 </SynchronousCommand>
             </FirstLogonCommands>
             <OOBE>
diff --git a/client/tests/kvm/unattended/win2008-64-autounattend.xml 
b/client/tests/kvm/unattended/win2008-64-autounattend.xml
index 77c4999..4202b93 100644
--- a/client/tests/kvm/unattended/win2008-64-autounattend.xml
+++ b/client/tests/kvm/unattended/win2008-64-autounattend.xml
@@ -124,7 +124,7 @@
                 </SynchronousCommand>
                 <SynchronousCommand wcm:action="add">
                     <Order>6</Order>
-                    <CommandLine>%WINDIR%\System32\cmd /c netsh interface ip set address "Local Area Connection" static 10.0.2.15 255.255.255.0 10.0.2.2 1 &#38;&#38; ping 10.0.2.2 -n 20 &#38;&#38; A:\finish.exe 10.0.2.2</CommandLine>
+                    <CommandLine>%WINDIR%\System32\cmd /c netsh interface ip set address "Local Area Connection" dhcp &#38;&#38; ping 10.0.2.2 -n 20 &#38;&#38; A:\finish.exe 10.0.2.2</CommandLine>
                 </SynchronousCommand>
             </FirstLogonCommands>
             <OOBE>
diff --git a/client/tests/kvm/unattended/win2008-r2-autounattend.xml 
b/client/tests/kvm/unattended/win2008-r2-autounattend.xml
index 77c4999..4202b93 100644
--- a/client/tests/kvm/unattended/win2008-r2-autounattend.xml
+++ b/client/tests/kvm/unattended/win2008-r2-autounattend.xml
@@ -124,7 +124,7 @@
                 </SynchronousCommand>
                 <SynchronousCommand wcm:action="add">
                     <Order>6</Order>
-                    <CommandLine>%WINDIR%\System32\cmd /c netsh interface ip set address "Local Area Connection" static 10.0.2.15 255.255.255.0 10.0.2.2 1 &#38;&#38; ping 10.0.2.2 -n 20 &#38;&#38; A:\finish.exe 10.0.2.2</CommandLine>
+

Re: Can KVM PassThrough specifically my PCI cards to fully-virt'd KVM Guests with my CPU? Yet?

2010-01-26 Thread Ben DJ
On Mon, Jan 25, 2010 at 10:51 PM, Brian Jackson i...@theiggy.com wrote:
 Nope. When support was being developed, there was, but it was never merged,
 and I highly doubt the patches could still be applied at this point with
 all the code churn qemu has had.

Then, I'm stuck on xen.  :-(

Thanks for the explanation.

BenDJ


Re: TPM Support in KVM

2010-01-26 Thread Markus Breitländer
On 26.01.2010 16:56, Chris Wright wrote:
 * Martin Schneider (martincschnei...@googlemail.com) wrote:
 Dear list,

 is there a document that describes the level of support of trusted
 computing technology in KVM and how things work?
 
 There's host level trusted boot, which simply needs tboot and a new
 enough kernel to support CONFIG_TXT.
 
 I read in various sources that KVM should support virtual Trusted
 Platform Modules in virtual machines, but I couldn't find any evidence
 and/or documentation about this on the official site.
 
 No vTPM is currently supported.

Any resources on this topic (vTPM)?

I would be interested in virtual TNC solutions (802.1x on wired networks)!

Can you virtualize a TNC Authenticator like an 802.1x switch?


[PATCH] QMP: Emit Basic events

2010-01-26 Thread Luiz Capitulino

While testing QMP on qemu-kvm I found that it's not emitting basic
events like RESET or POWERDOWN.

The reason is that in QEMU upstream those events are triggered
in QEMU's main loop (i.e. vl.c:main_loop()), but control doesn't
reach there in qemu-kvm, as it has its own main loop in
qemu-kvm.c:kvm_main_loop().

This commit adds the same set of events there too.

NOTE: The STOP event is not being added because it should be
triggered in vm_stop() and not in the main loop, this will be
fixed upstream.

Signed-off-by: Luiz Capitulino lcapitul...@redhat.com
---
 qemu-kvm.c |   10 +++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/qemu-kvm.c b/qemu-kvm.c
index 1c34846..06706c9 100644
--- a/qemu-kvm.c
+++ b/qemu-kvm.c
@@ -17,6 +17,7 @@
 #include "block.h"
 #include "compatfd.h"
 #include "gdbstub.h"
+#include "monitor.h"
 
 #include "qemu-kvm.h"
 #include "libkvm.h"
@@ -2124,11 +2125,14 @@ int kvm_main_loop(void)
                 vm_stop(0);
             } else
                 break;
-        } else if (qemu_powerdown_requested())
+        } else if (qemu_powerdown_requested()) {
+            monitor_protocol_event(QEVENT_POWERDOWN, NULL);
             qemu_irq_raise(qemu_system_powerdown);
-        else if (qemu_reset_requested())
+        } else if (qemu_reset_requested()) {
+            monitor_protocol_event(QEVENT_RESET, NULL);
             qemu_kvm_system_reset();
-        else if (kvm_debug_cpu_requested) {
+        } else if (kvm_debug_cpu_requested) {
+            monitor_protocol_event(QEVENT_DEBUG, NULL);
             gdb_set_stop_cpu(kvm_debug_cpu_requested);
             vm_stop(EXCP_DEBUG);
             kvm_debug_cpu_requested = NULL;
-- 
1.6.6
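
For context, a QMP client would observe the events this patch emits as one JSON object per line on the monitor socket; the sketch below parses such a stream (the general `event`/`timestamp` shape follows the QMP wire format, but the concrete field values here are made up):

```python
import json

# Two event lines as a QMP client might read them from the monitor
# socket; the timestamps are illustrative, not real captures.
stream = """\
{"event": "POWERDOWN", "timestamp": {"seconds": 1264536387, "microseconds": 508076}}
{"event": "RESET", "timestamp": {"seconds": 1264536390, "microseconds": 417466}}
"""

def parse_events(text):
    """Return the event names from a newline-delimited JSON stream."""
    return [json.loads(line)["event"]
            for line in text.splitlines() if line.strip()]

print(parse_events(stream))
```

Without the patch, a client attached to qemu-kvm would simply never see the POWERDOWN/RESET lines, since the upstream main loop that emits them is bypassed.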



[PATCH qemu-kvm] Add raw(af_packet) network backend to qemu

2010-01-26 Thread Sridhar Samudrala
This patch adds a raw socket backend to qemu; it is based on Or Gerlitz's
patch, re-factored and ported to the latest qemu-kvm git tree.
It also includes support for a vnet_hdr option that enables gso/checksum
offload with raw backend. You can find the linux kernel patch to support
this feature here.
   http://thread.gmane.org/gmane.linux.network/150308

Signed-off-by: Sridhar Samudrala s...@us.ibm.com 

diff --git a/Makefile.objs b/Makefile.objs
index 357d305..4468124 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -34,6 +34,8 @@ net-nested-$(CONFIG_SOLARIS) += tap-solaris.o
 net-nested-$(CONFIG_AIX) += tap-aix.o
 net-nested-$(CONFIG_SLIRP) += slirp.o
 net-nested-$(CONFIG_VDE) += vde.o
+net-nested-$(CONFIG_POSIX) += raw.o
+net-nested-$(CONFIG_LINUX) += raw-linux.o
 net-obj-y += $(addprefix net/, $(net-nested-y))
 
 ##
diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index eba578a..4aa40f2 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -15,6 +15,7 @@
 #include "net.h"
 #include "net/checksum.h"
 #include "net/tap.h"
+#include "net/raw.h"
 #include "qemu-timer.h"
 #include "virtio-net.h"
 
@@ -133,6 +134,9 @@ static int peer_has_vnet_hdr(VirtIONet *n)
     case NET_CLIENT_TYPE_TAP:
         n->has_vnet_hdr = tap_has_vnet_hdr(n->nic->nc.peer);
         break;
+    case NET_CLIENT_TYPE_RAW:
+        n->has_vnet_hdr = raw_has_vnet_hdr(n->nic->nc.peer);
+        break;
     default:
         return 0;
     }
@@ -149,6 +153,9 @@ static int peer_has_ufo(VirtIONet *n)
     case NET_CLIENT_TYPE_TAP:
         n->has_ufo = tap_has_ufo(n->nic->nc.peer);
         break;
+    case NET_CLIENT_TYPE_RAW:
+        n->has_ufo = raw_has_ufo(n->nic->nc.peer);
+        break;
     default:
         return 0;
     }
@@ -165,6 +172,9 @@ static void peer_using_vnet_hdr(VirtIONet *n, int using_vnet_hdr)
     case NET_CLIENT_TYPE_TAP:
         tap_using_vnet_hdr(n->nic->nc.peer, using_vnet_hdr);
         break;
+    case NET_CLIENT_TYPE_RAW:
+        raw_using_vnet_hdr(n->nic->nc.peer, using_vnet_hdr);
+        break;
     default:
         break;
     }
@@ -180,6 +190,9 @@ static void peer_set_offload(VirtIONet *n, int csum, int tso4, int tso6,
     case NET_CLIENT_TYPE_TAP:
         tap_set_offload(n->nic->nc.peer, csum, tso4, tso6, ecn, ufo);
         break;
+    case NET_CLIENT_TYPE_RAW:
+        raw_set_offload(n->nic->nc.peer, csum, tso4, tso6, ecn, ufo);
+        break;
     default:
         break;
     }
diff --git a/net.c b/net.c
index 6ef93e6..1ca2415 100644
--- a/net.c
+++ b/net.c
@@ -26,6 +26,7 @@
 #include "config-host.h"
 
 #include "net/tap.h"
+#include "net/raw.h"
 #include "net/socket.h"
 #include "net/dump.h"
 #include "net/slirp.h"
@@ -1004,6 +1005,27 @@ static struct {
         },
         { /* end of list */ }
     },
+}, {
+    .type = "raw",
+    .init = net_init_raw,
+    .desc = {
+        NET_COMMON_PARAMS_DESC,
+        {
+            .name = "fd",
+            .type = QEMU_OPT_STRING,
+            .help = "file descriptor of an already opened raw socket",
+        }, {
+            .name = "ifname",
+            .type = QEMU_OPT_STRING,
+            .help = "interface name",
+        }, {
+            .name = "vnet_hdr",
+            .type = QEMU_OPT_BOOL,
+            .help = "enable PACKET_VNET_HDR option on the raw interface",
+        },
+        { /* end of list */ }
+    },
+
 #ifdef CONFIG_VDE
 }, {
     .type = "vde",
@@ -1076,6 +1098,7 @@ int net_client_init(Monitor *mon, QemuOpts *opts, int is_netdev)
 #ifdef CONFIG_VDE
         strcmp(type, "vde") != 0 &&
 #endif
+        strcmp(type, "raw") != 0 &&
         strcmp(type, "socket") != 0) {
         qemu_error("The '%s' network backend type is not valid with -netdev\n",
                    type);
diff --git a/net.h b/net.h
index 116bb80..4722185 100644
--- a/net.h
+++ b/net.h
@@ -34,7 +34,8 @@ typedef enum {
     NET_CLIENT_TYPE_TAP,
     NET_CLIENT_TYPE_SOCKET,
     NET_CLIENT_TYPE_VDE,
-    NET_CLIENT_TYPE_DUMP
+    NET_CLIENT_TYPE_DUMP,
+    NET_CLIENT_TYPE_RAW,
 } net_client_type;
 
 typedef void (NetPoll)(VLANClientState *, bool enable);
diff --git a/net/raw-linux.c b/net/raw-linux.c
new file mode 100644
index 000..9ed2e6a
--- /dev/null
+++ b/net/raw-linux.c
@@ -0,0 +1,97 @@
+/*
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies 

[PATCH qemu-kvm] Add generic peer_* routines for the remaining tap specific routines

2010-01-26 Thread Sridhar Samudrala
This patch adds generic peer routines for the remaining tap-specific
routines (using_vnet_hdr and set_offload). This makes it easier to add
new backends like raw (packet sockets) that support gso/checksum offload.

Signed-off-by: Sridhar Samudrala s...@us.ibm.com

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 6e48997..eba578a 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -129,10 +129,13 @@ static int peer_has_vnet_hdr(VirtIONet *n)
     if (!n->nic->nc.peer)
         return 0;
 
-    if (n->nic->nc.peer->info->type != NET_CLIENT_TYPE_TAP)
-        return 0;
-
-    n->has_vnet_hdr = tap_has_vnet_hdr(n->nic->nc.peer);
+    switch (n->nic->nc.peer->info->type) {
+    case NET_CLIENT_TYPE_TAP:
+        n->has_vnet_hdr = tap_has_vnet_hdr(n->nic->nc.peer);
+        break;
+    default:
+        return 0;
+    }
 
     return n->has_vnet_hdr;
 }
@@ -142,11 +145,46 @@ static int peer_has_ufo(VirtIONet *n)
     if (!peer_has_vnet_hdr(n))
         return 0;
 
-    n->has_ufo = tap_has_ufo(n->nic->nc.peer);
+    switch (n->nic->nc.peer->info->type) {
+    case NET_CLIENT_TYPE_TAP:
+        n->has_ufo = tap_has_ufo(n->nic->nc.peer);
+        break;
+    default:
+        return 0;
+    }
 
     return n->has_ufo;
 }
 
+static void peer_using_vnet_hdr(VirtIONet *n, int using_vnet_hdr)
+{
+    if (!n->nic->nc.peer)
+        return;
+
+    switch (n->nic->nc.peer->info->type) {
+    case NET_CLIENT_TYPE_TAP:
+        tap_using_vnet_hdr(n->nic->nc.peer, using_vnet_hdr);
+        break;
+    default:
+        break;
+    }
+}
+
+static void peer_set_offload(VirtIONet *n, int csum, int tso4, int tso6,
+                             int ecn, int ufo)
+{
+    if (!n->nic->nc.peer)
+        return;
+
+    switch (n->nic->nc.peer->info->type) {
+    case NET_CLIENT_TYPE_TAP:
+        tap_set_offload(n->nic->nc.peer, csum, tso4, tso6, ecn, ufo);
+        break;
+    default:
+        break;
+    }
+}
+
 static uint32_t virtio_net_get_features(VirtIODevice *vdev, uint32_t features)
 {
     VirtIONet *n = to_virtio_net(vdev);
@@ -154,7 +192,7 @@ static uint32_t virtio_net_get_features(VirtIODevice *vdev, uint32_t features)
     features |= (1 << VIRTIO_NET_F_MAC);
 
     if (peer_has_vnet_hdr(n)) {
-        tap_using_vnet_hdr(n->nic->nc.peer, 1);
+        peer_using_vnet_hdr(n, 1);
     } else {
         features &= ~(0x1 << VIRTIO_NET_F_CSUM);
         features &= ~(0x1 << VIRTIO_NET_F_HOST_TSO4);
@@ -197,7 +235,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint32_t features)
     n->mergeable_rx_bufs = !!(features & (1 << VIRTIO_NET_F_MRG_RXBUF));
 
     if (n->has_vnet_hdr) {
-        tap_set_offload(n->nic->nc.peer,
+        peer_set_offload(n,
                     (features >> VIRTIO_NET_F_GUEST_CSUM) & 1,
                     (features >> VIRTIO_NET_F_GUEST_TSO4) & 1,
                     (features >> VIRTIO_NET_F_GUEST_TSO6) & 1,
@@ -761,8 +799,8 @@ static int virtio_net_load(QEMUFile *f, void *opaque, int version_id)
     }
 
     if (n->has_vnet_hdr) {
-        tap_using_vnet_hdr(n->nic->nc.peer, 1);
-        tap_set_offload(n->nic->nc.peer,
+        peer_using_vnet_hdr(n, 1);
+        peer_set_offload(n,
                     (n->vdev.guest_features >> VIRTIO_NET_F_GUEST_CSUM) & 1,
                     (n->vdev.guest_features >> VIRTIO_NET_F_GUEST_TSO4) & 1,
                     (n->vdev.guest_features >> VIRTIO_NET_F_GUEST_TSO6) & 1,




Re: [PATCH qemu-kvm] Add raw(af_packet) network backend to qemu

2010-01-26 Thread Anthony Liguori

On 01/26/2010 02:40 PM, Sridhar Samudrala wrote:

This patch adds raw socket backend to qemu and is based on Or Gerlitz's
patch re-factored and ported to the latest qemu-kvm git tree.
It also includes support for vnet_hdr option that enables gso/checksum
offload with raw backend. You can find the linux kernel patch to support
this feature here.
http://thread.gmane.org/gmane.linux.network/150308

Signed-off-by: Sridhar Samudrala s...@us.ibm.com


See the previous discussion about the raw backend from Or's original 
patch.  There's no obvious reason why we should have this in addition to 
a tun/tap backend.


The only use-case I know of is macvlan, but macvtap addresses this 
functionality while not introducing the rather nasty security problems 
associated with a raw backend.


Regards,

Anthony Liguori


Re: [PATCH qemu-kvm] Add raw(af_packet) network backend to qemu

2010-01-26 Thread Anthony Liguori

On 01/26/2010 02:47 PM, Anthony Liguori wrote:

On 01/26/2010 02:40 PM, Sridhar Samudrala wrote:

This patch adds raw socket backend to qemu and is based on Or Gerlitz's
patch re-factored and ported to the latest qemu-kvm git tree.
It also includes support for vnet_hdr option that enables gso/checksum
offload with raw backend. You can find the linux kernel patch to support
this feature here.
http://thread.gmane.org/gmane.linux.network/150308

Signed-off-by: Sridhar Samudrala s...@us.ibm.com


See the previous discussion about the raw backend from Or's original 
patch.  There's no obvious reason why we should have this in addition 
to a tun/tap backend.


The only use-case I know of is macvlan, but macvtap addresses this 
functionality while not introducing the rather nasty security problems 
associated with a raw backend.


Not to mention that from a user perspective, raw makes almost no sense 
as it's an obscure socket protocol family.


A user wants to do useful things like bridged networking or direct VF 
assignment.  We should have -net backends that reflect things that make 
sense to a user.


Regards,

Anthony Liguori



Re: PCIe device pass-through - No IOMMU, Failed to deassign device error

2010-01-26 Thread Yigal Korman
Hi,
Thank you for the responses.
I've managed to get further in the process while running KVM with
Fedora rather than Ubuntu.
I've gotten Windows 7 to recognize the graphics card, but it marks it
with an error about hardware resources (I assume memory/interrupts).
I've found that the same error was encountered while trying to achieve
this goal using Xen; there is a blog which claims to have succeeded in
overcoming this issue and achieving GPU pass-through with an NVIDIA
GeForce card with Xen.
Maybe it can help your progress on the matter.

Thanks again,
Yigal.

On Tue, Jan 26, 2010 at 08:23, Avi Kivity a...@redhat.com wrote:
 On 01/26/2010 03:11 AM, Kenni Lund wrote:

 Can someone with write permissions to the wiki please add this?


 Everyone has write permissions, you just need an account.

 --
 Do not meddle in the internals of kernels, for they are subtle and quick to
 panic.





-- 
Due to the recession, to save on energy costs, the light at the end of
the tunnel will be turned off.
—God


[PATCH] Seabios - read e820 table from qemu_cfg

2010-01-26 Thread Jes Sorensen

Hi,

Based on the feedback I received over the e820 reserve patch, I have
changed it to have QEMU pass in a list of entries that can cover more
than just the TSS/EPT range. This should provide the flexibility that
people were asking for.

The Seabios portion should allow for unlimited sized tables in theory,
whereas for QEMU I have set a fixed limit for now, but it can easily
be extended.

Please let me know what you think of this version!

Cheers,
Jes

Read optional table of e820 entries from qemu_cfg

Read an optional table of e820 entries through qemu_cfg, allowing QEMU to
provide the location of KVM's switch area etc. rather than relying on
hard-coded values.

For now, fall back to the old hard-coded values for the TSS and EPT
switch page for compatibility reasons. The compatibility code could
possibly be removed in the future.

Signed-off-by: Jes Sorensen jes.soren...@redhat.com

---
 src/paravirt.c |   17 +
 src/paravirt.h |9 +
 src/post.c |   23 +++
 3 files changed, 45 insertions(+), 4 deletions(-)

Index: seabios/src/paravirt.c
===
--- seabios.orig/src/paravirt.c
+++ seabios/src/paravirt.c
@@ -132,6 +132,23 @@ u16 qemu_cfg_smbios_entries(void)
 return cnt;
 }
 
+u32 qemu_cfg_e820_entries(void)
+{
+    u32 cnt;
+
+    if (!qemu_cfg_present)
+        return 0;
+
+    qemu_cfg_read_entry(&cnt, QEMU_CFG_E820_TABLE, sizeof(cnt));
+    return cnt;
+}
+
+void* qemu_cfg_e820_load_next(void *addr)
+{
+    qemu_cfg_read(addr, sizeof(struct e820_entry));
+    return addr;
+}
+
 struct smbios_header {
 u16 length;
 u8 type;
Index: seabios/src/paravirt.h
===
--- seabios.orig/src/paravirt.h
+++ seabios/src/paravirt.h
@@ -36,6 +36,7 @@ static inline int kvm_para_available(voi
 #define QEMU_CFG_ACPI_TABLES   (QEMU_CFG_ARCH_LOCAL + 0)
 #define QEMU_CFG_SMBIOS_ENTRIES(QEMU_CFG_ARCH_LOCAL + 1)
 #define QEMU_CFG_IRQ0_OVERRIDE (QEMU_CFG_ARCH_LOCAL + 2)
+#define QEMU_CFG_E820_TABLE(QEMU_CFG_ARCH_LOCAL + 3)
 
 extern int qemu_cfg_present;
 
@@ -61,8 +62,16 @@ typedef struct QemuCfgFile {
 char name[56];
 } QemuCfgFile;
 
+struct e820_entry {
+    u64 address;
+    u64 length;
+    u32 type;
+};
+
 u16 qemu_cfg_first_file(QemuCfgFile *entry);
 u16 qemu_cfg_next_file(QemuCfgFile *entry);
 u32 qemu_cfg_read_file(QemuCfgFile *entry, void *dst, u32 maxlen);
+u32 qemu_cfg_e820_entries(void);
+void* qemu_cfg_e820_load_next(void *addr);
 
 #endif
Index: seabios/src/post.c
===
--- seabios.orig/src/post.c
+++ seabios/src/post.c
@@ -135,10 +135,25 @@ ram_probe(void)
  , E820_RESERVED);
 add_e820(BUILD_BIOS_ADDR, BUILD_BIOS_SIZE, E820_RESERVED);
 
-    if (kvm_para_available())
-        // 4 pages before the bios, 3 pages for vmx tss pages, the
-        // other page for EPT real mode pagetable
-        add_e820(0xfffbc000, 4*4096, E820_RESERVED);
+    if (kvm_para_available()) {
+        u32 count;
+
+        count = qemu_cfg_e820_entries();
+        if (count) {
+            struct e820_entry entry;
+            int i;
+
+            for (i = 0; i < count; i++) {
+                qemu_cfg_e820_load_next(&entry);
+                add_e820(entry.address, entry.length, entry.type);
+            }
+        } else {
+            // Backwards compatibility - provide hard coded range.
+            // 4 pages before the bios, 3 pages for vmx tss pages, the
+            // other page for EPT real mode pagetable
+            add_e820(0xfffbc000, 4*4096, E820_RESERVED);
+        }
+    }
 
     dprintf(1, "Ram Size=0x%08x (0x%08x%08x high)\n"
             , RamSize, (u32)(RamSizeOver4G >> 32), (u32)RamSizeOver4G);
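
The fw_cfg payload these patches exchange -- a u32 count followed by `{u64 address, u64 length, u32 type}` entries -- can be modeled as follows; this assumes a packed little-endian 20-byte entry, whereas the actual C structs' sizes depend on compiler padding (a detail worth checking on 64-bit builds):

```python
import struct

E820_RESERVED = 2          # e820 type, from the patch's defines
ENTRY_FMT = "<QQI"         # u64 address, u64 length, u32 type (packed, LE)

def pack_e820_table(entries):
    """Serialize (address, length, type) tuples as a u32 count followed
    by packed entries, mirroring the fw_cfg payload these patches pass
    from QEMU to SeaBIOS. Assumes a packed 20-byte entry; the real C
    structs may carry compiler padding, especially on 64-bit builds."""
    blob = struct.pack("<I", len(entries))
    for address, length, etype in entries:
        blob += struct.pack(ENTRY_FMT, address, length, etype)
    return blob

# the TSS/EPT reservation the patches use as their example
table = pack_e820_table([(0xfffbc000, 4 * 4096, E820_RESERVED)])
count, = struct.unpack_from("<I", table)
print(count, len(table))
```

If QEMU's struct carries trailing padding that SeaBIOS's 32-bit build does not, the two sides would disagree on entry size, which is one argument for defining the wire format explicitly rather than via `sizeof` on both ends.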


[PATCH] QEMU-KVM - provide e820 table via fw_cfg

2010-01-26 Thread Jes Sorensen

Hi,

This is the QEMU-KVM part of the patch. If we can agree on this
approach, I will do a version for upstream QEMU as well.

Cheers,
Jes

Use qemu-cfg to provide the BIOS with an optional table of e820 entries.

Notify the BIOS of the location of the TSS+EPT range by reserving
it via the e820 table.

Signed-off-by: Jes Sorensen jes.soren...@redhat.com

---
 hw/pc.c   |   35 +++
 hw/pc.h   |9 +
 qemu-kvm-x86.c|7 +++
 target-i386/kvm.c |7 +++
 4 files changed, 58 insertions(+)

Index: qemu-kvm/hw/pc.c
===
--- qemu-kvm.orig/hw/pc.c
+++ qemu-kvm/hw/pc.c
@@ -66,6 +66,7 @@
 #define FW_CFG_ACPI_TABLES (FW_CFG_ARCH_LOCAL + 0)
 #define FW_CFG_SMBIOS_ENTRIES (FW_CFG_ARCH_LOCAL + 1)
 #define FW_CFG_IRQ0_OVERRIDE (FW_CFG_ARCH_LOCAL + 2)
+#define FW_CFG_E820_TABLE (FW_CFG_ARCH_LOCAL + 3)
 
 #define MAX_IDE_BUS 2
 
@@ -74,6 +75,21 @@ static RTCState *rtc_state;
 static PITState *pit;
 static PCII440FXState *i440fx_state;
 
+#define E820_NR_ENTRIES 16
+
+struct e820_entry {
+uint64_t address;
+uint64_t length;
+uint32_t type;
+};
+
+struct e820_table {
+uint32_t count;
+struct e820_entry entry[E820_NR_ENTRIES];
+};
+
+static struct e820_table e820_table;
+
 qemu_irq *ioapic_irq_hack;
 
 typedef struct isa_irq_state {
@@ -444,6 +460,23 @@ static void bochs_bios_write(void *opaqu
 }
 }
 
+int e820_add_entry(uint64_t address, uint64_t length, uint32_t type)
+{
+int index = e820_table.count;
+struct e820_entry *entry;
+
+if (index >= E820_NR_ENTRIES)
+return -EBUSY;
+entry = &e820_table.entry[index];
+
+entry->address = address;
+entry->length = length;
+entry->type = type;
+
+e820_table.count++;
+return e820_table.count;
+}
+
 static void *bochs_bios_init(void)
 {
 void *fw_cfg;
@@ -475,6 +508,8 @@ static void *bochs_bios_init(void)
 if (smbios_table)
 fw_cfg_add_bytes(fw_cfg, FW_CFG_SMBIOS_ENTRIES,
  smbios_table, smbios_len);
+fw_cfg_add_bytes(fw_cfg, FW_CFG_E820_TABLE, (uint8_t *)&e820_table,
+ sizeof(struct e820_table));
 
 /* allocate memory for the NUMA channel: one (64bit) word for the number
  * of nodes, one word for each VCPU-node and one word for each node to
Index: qemu-kvm/hw/pc.h
===
--- qemu-kvm.orig/hw/pc.h
+++ qemu-kvm/hw/pc.h
@@ -169,4 +169,13 @@ void extboot_init(BlockDriverState *bs, 
 
 int cpu_is_bsp(CPUState *env);
 
+/* e820 types */
+#define E820_RAM        1
+#define E820_RESERVED   2
+#define E820_ACPI       3
+#define E820_NVS        4
+#define E820_UNUSABLE   5
+
+int e820_add_entry(uint64_t, uint64_t, uint32_t);
+
 #endif
Index: qemu-kvm/qemu-kvm-x86.c
===
--- qemu-kvm.orig/qemu-kvm-x86.c
+++ qemu-kvm/qemu-kvm-x86.c
@@ -37,6 +37,13 @@ int kvm_set_tss_addr(kvm_context_t kvm, 
 {
 #ifdef KVM_CAP_SET_TSS_ADDR
int r;
+/*
+ * Tell fw_cfg to notify the BIOS to reserve the range.
+ */
+if (e820_add_entry(addr, 0x4000, E820_RESERVED) < 0) {
+perror("e820_add_entry() table is full");
+exit(1);
+}
 
r = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, KVM_CAP_SET_TSS_ADDR);
if (r < 0) {
Index: qemu-kvm/target-i386/kvm.c
===
--- qemu-kvm.orig/target-i386/kvm.c
+++ qemu-kvm/target-i386/kvm.c
@@ -298,6 +298,13 @@ int kvm_arch_init(KVMState *s, int smp_c
  * as unavaible memory.  FIXME, need to ensure the e820 map deals with
  * this?
  */
+/*
+ * Tell fw_cfg to notify the BIOS to reserve the range.
+ */
+if (e820_add_entry(0xfffbc000, 0x4000, E820_RESERVED) < 0) {
+perror("e820_add_entry() table is full");
+exit(1);
+}
 return kvm_vm_ioctl(s, KVM_SET_TSS_ADDR, 0xfffbd000);
 }
 


Re: [PATCH] QEMU-KVM - provide e820 table via fw_cfg

2010-01-26 Thread Alexander Graf

On 26.01.2010, at 22:53, Jes Sorensen wrote:

 Hi,
 
 This is the QEMU-KVM part of the patch. If we can agree on this
 approach, I will do a version for upstream QEMU as well.

It shows as attachment again :(.


Alex

 
 Cheers,
 Jes
 
 0011-qemu-kvm-e820-table.patch

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: network shutdown under heavy load

2010-01-26 Thread Tom Lendacky
On Wednesday 20 January 2010 09:48:04 am Tom Lendacky wrote:
 On Tuesday 19 January 2010 05:57:53 pm Chris Wright wrote:
  * Tom Lendacky (t...@linux.vnet.ibm.com) wrote:
   On Wednesday 13 January 2010 03:52:28 pm Chris Wright wrote:
(Mark cc'd, sound familiar?)
   
* Tom Lendacky (t...@linux.vnet.ibm.com) wrote:
 On Sunday 10 January 2010 06:38:54 am Avi Kivity wrote:
  On 01/10/2010 02:35 PM, Herbert Xu wrote:
   On Sun, Jan 10, 2010 at 02:30:12PM +0200, Avi Kivity wrote:
   This isn't in 2.6.27.y.  Herbert, can you send it there?
  
   It appears that now that TX is fixed we have a similar problem
   with RX.  Once I figure that one out I'll send them together.

 I've been experiencing the network shutdown issue also.  I've been
 running netperf tests across 10GbE adapters with Qemu 0.12.1.2,
 RHEL5.4 guests and 2.6.32 kernel (from kvm.git) guests.  I
 instrumented Qemu to print out some network statistics.  It appears
 that at some point in the netperf test the receiving guest ends up
 having the 10GbE device receive_disabled variable in its
 VLANClientState structure stuck at 1. From looking at the code it
 appears that the virtio-net driver in the guest should cause
 qemu_flush_queued_packets in net.c to eventually run and clear the
 receive_disabled variable but it's not happening.  I don't seem
 to have these issues when I have a lot of debug settings active in
 the guest kernel which results in very low/poor network performance
 - maybe some kind of race condition?
  
   Ok, here's an update. After realizing that none of the ethtool offload
   options were enabled in my guest, I found that I needed to be using the
   -netdev option on the qemu command line.  Once I did that, some ethtool
   offload options were enabled and the deadlock did not appear when I did
   networking between guests on different machines.  However, the deadlock
   did appear when I did networking between guests on the same machine.
 
  What does your full command line look like?  And when the networking
  stops does your same receive_disabled hack make things work?
 
 The command line when using the -net option for the tap device is:
 
 /usr/local/bin/qemu-system-x86_64 -name cape-vm001 -m 1024 -drive
 file=/autobench/var/tmp/cape-vm001-
 raw.img,if=virtio,index=0,media=disk,boot=on -net
 nic,model=virtio,vlan=0,macaddr=00:16:3E:00:62:51 -net
 tap,vlan=0,script=/autobench/var/tmp/ifup-kvm-
 br0,downscript=/autobench/var/tmp/ifdown-kvm-br0 -net
 nic,model=virtio,vlan=1,macaddr=00:16:3E:00:62:D1 -net
 tap,vlan=1,script=/autobench/var/tmp/ifup-kvm-
 br1,downscript=/autobench/var/tmp/ifdown-kvm-br1 -vnc :1 -monitor
 telnet::5701,server,nowait -snapshot -daemonize
 
 when using the -netdev option for the tap device:
 
 /usr/local/bin/qemu-system-x86_64 -name cape-vm001 -m 1024 -drive
 file=/autobench/var/tmp/cape-vm001-
 raw.img,if=virtio,index=0,media=disk,boot=on -net
 nic,model=virtio,vlan=0,macaddr=00:16:3E:00:62:51,netdev=cape-vm001-eth0 -
 netdev tap,id=cape-vm001-eth0,script=/autobench/var/tmp/ifup-kvm-
 br0,downscript=/autobench/var/tmp/ifdown-kvm-br0 -net
 nic,model=virtio,vlan=1,macaddr=00:16:3E:00:62:D1,netdev=cape-vm001-eth1 -
 netdev tap,id=cape-vm001-eth1,script=/autobench/var/tmp/ifup-kvm-
 br1,downscript=/autobench/var/tmp/ifdown-kvm-br1 -vnc :1 -monitor
 telnet::5701,server,nowait -snapshot -daemonize
 
 
 The first ethernet device is a 1GbE device for communicating with the
 automation infrastructure we have.  The second ethernet device is the 10GbE
 device that the netperf tests run on.
 
 I can get the networking to work again by bringing down the interfaces and
 reloading the virtio_net module (modprobe -r virtio_net / modprobe
 virtio_net).
 
 I haven't had a chance yet to run the tests against a modified version of
  qemu that does not set the receive_disabled variable.

I got a chance to run with the setting of the receive_disabled variable 
commented out and I still run into the problem.  It's easier to reproduce when 
running netperf between two guests on the same machine.  I instrumented qemu 
and virtio a little bit to try and track this down.  What I'm seeing is that, 
with two guests on the same machine, the receiving (netserver) guest 
eventually gets into a condition where the tap read poll callback is disabled 
and never re-enabled.  So packets are never delivered from tap to qemu and to 
the guest.  On the sending (netperf) side the transmit queue eventually runs 
out of capacity and it can no longer send packets (I believe this is unique to 
having the guests on the same machine).  And as before, bringing down the 
interfaces, reloading the virtio_net module, and restarting the interfaces 
clears things up.

Tom

 
 Tom
 
  thanks,
  -chris

Re: [PATCH qemu-kvm] Add raw(af_packet) network backend to qemu

2010-01-26 Thread Sridhar Samudrala
On Tue, 2010-01-26 at 14:47 -0600, Anthony Liguori wrote:
 On 01/26/2010 02:40 PM, Sridhar Samudrala wrote:
  This patch adds raw socket backend to qemu and is based on Or Gerlitz's
  patch re-factored and ported to the latest qemu-kvm git tree.
  It also includes support for vnet_hdr option that enables gso/checksum
  offload with raw backend. You can find the linux kernel patch to support
  this feature here.
  http://thread.gmane.org/gmane.linux.network/150308
 
  Signed-off-by: Sridhar Samudrala s...@us.ibm.com
 
 
 See the previous discussion about the raw backend from Or's original 
 patch.  There's no obvious reason why we should have this in addition to 
 a tun/tap backend.
 
 The only use-case I know of is macvlan but macvtap addresses this 
 functionality while not introduce the rather nasty security problems 
 associated with a raw backend.

The raw backend can be attached to a physical device, macvlan or SR-IOV VF.
I don't think AF_PACKET socket itself introduces any security problems. The
raw socket can be created only by a user with CAP_RAW capability. The only
issue is if we need to assume that qemu itself is an untrusted process and a
raw fd cannot be passed to it.
But, i think it is a useful backend to support in qemu that provides guest to
remote host connectivity without the need for a bridge/tap.

macvtap could be an alternative if it supports binding to SR-IOV VFs too.

Thanks
Sridhar




Re: [PATCH qemu-kvm] Add raw(af_packet) network backend to qemu

2010-01-26 Thread Sridhar Samudrala
On Tue, 2010-01-26 at 14:50 -0600, Anthony Liguori wrote:
 On 01/26/2010 02:47 PM, Anthony Liguori wrote:
  On 01/26/2010 02:40 PM, Sridhar Samudrala wrote:
  This patch adds raw socket backend to qemu and is based on Or Gerlitz's
  patch re-factored and ported to the latest qemu-kvm git tree.
  It also includes support for vnet_hdr option that enables gso/checksum
  offload with raw backend. You can find the linux kernel patch to support
  this feature here.
  http://thread.gmane.org/gmane.linux.network/150308
 
  Signed-off-by: Sridhar Samudrala s...@us.ibm.com
 
  See the previous discussion about the raw backend from Or's original 
  patch.  There's no obvious reason why we should have this in addition 
  to a tun/tap backend.
 
  The only use-case I know of is macvlan but macvtap addresses this 
  functionality while not introduce the rather nasty security problems 
  associated with a raw backend.
 
 Not to mention that from a user perspective, raw makes almost no sense 
 as it's an obscure socket protocol family.
Not clear what you mean here. AF_PACKET socket is just a transport
mechanism for the host kernel to put the packets from the guest directly
to an attached interface and vice-versa.

 A user wants to do useful things like bridged networking or direct VF 
 assignment.  We should have -net backends that reflect things that make 
 sense to a user.

Binding to a SR-IOV VF is one of the use-case that is supported by raw
backend.

Thanks
Sridhar



Re: [Qemu-devel] Re: [PATCH qemu-kvm] Add raw(af_packet) network backend to qemu

2010-01-26 Thread Anthony Liguori

On 01/26/2010 05:15 PM, Sridhar Samudrala wrote:

On Tue, 2010-01-26 at 14:47 -0600, Anthony Liguori wrote:

On 01/26/2010 02:40 PM, Sridhar Samudrala wrote:

This patch adds raw socket backend to qemu and is based on Or Gerlitz's
patch re-factored and ported to the latest qemu-kvm git tree.
It also includes support for vnet_hdr option that enables gso/checksum
offload with raw backend. You can find the linux kernel patch to support
this feature here.
 http://thread.gmane.org/gmane.linux.network/150308

Signed-off-by: Sridhar Samudrala s...@us.ibm.com


See the previous discussion about the raw backend from Or's original
patch.  There's no obvious reason why we should have this in addition to
a tun/tap backend.

The only use-case I know of is macvlan but macvtap addresses this
functionality while not introduce the rather nasty security problems
associated with a raw backend.

The raw backend can be attached to a physical device


This is equivalent to bridging with tun/tap except that it has the 
unexpected behaviour of unreliable host/guest networking (which is not 
universally consistent across platforms either).  This is not a mode we 
want to encourage users to use.



, macvlan


macvtap is a superior way to achieve this use case because a macvtap fd 
can safely be given to a lesser privilege process without allowing 
escalation of privileges.



 or SR-IOV VF.


This depends on vhost-net.  In general, what I would like to see for 
this is something more user friendly that dealt specifically with this 
use-case.  Although honestly, given the recent security concerns around 
raw sockets, I'm very concerned about supporting raw sockets in qemu at all.


Essentially, you get worse security doing vhost-net + raw + VF than with 
PCI passthrough + VF because at least in the latter case you can run qemu 
without privileges.  CAP_NET_RAW is a very big privilege.


Regards,

Anthony Liguori


qemu-kvm: enable get/set vcpu events on reset and migration

2010-01-26 Thread Marcelo Tosatti

qemu-kvm should reset and save/restore vcpu events.

Signed-off-by: Marcelo Tosatti mtosa...@redhat.com

diff --git a/kvm.h b/kvm.h
index e2a945b..9fa4e25 100644
--- a/kvm.h
+++ b/kvm.h
@@ -52,6 +52,9 @@ int kvm_set_migration_log(int enable);
 int kvm_has_sync_mmu(void);
 #endif /* KVM_UPSTREAM */
 int kvm_has_vcpu_events(void);
+int kvm_put_vcpu_events(CPUState *env);
+int kvm_get_vcpu_events(CPUState *env);
+
 #ifdef KVM_UPSTREAM
 
 void kvm_setup_guest_memory(void *start, size_t size);
@@ -96,7 +99,9 @@ int kvm_arch_init(KVMState *s, int smp_cpus);
 
 int kvm_arch_init_vcpu(CPUState *env);
 
+#endif
 void kvm_arch_reset_vcpu(CPUState *env);
+#ifdef KVM_UPSTREAM
 
 struct kvm_guest_debug;
 struct kvm_debug_exit_arch;
diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
index 82e362c..7f820a4 100644
--- a/qemu-kvm-x86.c
+++ b/qemu-kvm-x86.c
@@ -1457,8 +1457,9 @@ void kvm_arch_push_nmi(void *opaque)
 
 void kvm_arch_cpu_reset(CPUState *env)
 {
-env->interrupt_injected = -1;
+kvm_arch_reset_vcpu(env);
 kvm_arch_load_regs(env);
+kvm_put_vcpu_events(env);
 if (!cpu_is_bsp(env)) {
if (kvm_irqchip_in_kernel()) {
 #ifdef KVM_CAP_MP_STATE
diff --git a/qemu-kvm.c b/qemu-kvm.c
index 1c34846..f891a3e 100644
--- a/qemu-kvm.c
+++ b/qemu-kvm.c
@@ -2187,6 +2187,11 @@ static int kvm_create_context(void)
 return r;
 }
 
+kvm_state->vcpu_events = 0;
+#ifdef KVM_CAP_VCPU_EVENTS
+kvm_state->vcpu_events = kvm_check_extension(kvm_state, KVM_CAP_VCPU_EVENTS);
+#endif
+
 kvm_init_ap();
 if (kvm_irqchip) {
 if (!qemu_kvm_has_gsi_routing()) {
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 9af1e48..79be2d5 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -285,6 +285,7 @@ int kvm_arch_init_vcpu(CPUState *env)
 return kvm_vcpu_ioctl(env, KVM_SET_CPUID2, cpuid_data);
 }
 
+#endif
 void kvm_arch_reset_vcpu(CPUState *env)
 {
env->exception_injected = -1;
@@ -292,6 +293,7 @@ void kvm_arch_reset_vcpu(CPUState *env)
env->nmi_injected = 0;
env->nmi_pending = 0;
 }
+#ifdef KVM_UPSTREAM
 
 static int kvm_has_msr_star(CPUState *env)
 {
@@ -776,8 +778,9 @@ static int kvm_get_mp_state(CPUState *env)
env->mp_state = mp_state.mp_state;
 return 0;
 }
+#endif
 
-static int kvm_put_vcpu_events(CPUState *env)
+int kvm_put_vcpu_events(CPUState *env)
 {
 #ifdef KVM_CAP_VCPU_EVENTS
 struct kvm_vcpu_events events;
@@ -807,7 +810,7 @@ static int kvm_put_vcpu_events(CPUState *env)
 #endif
 }
 
-static int kvm_get_vcpu_events(CPUState *env)
+int kvm_get_vcpu_events(CPUState *env)
 {
 #ifdef KVM_CAP_VCPU_EVENTS
 struct kvm_vcpu_events events;
@@ -844,6 +847,7 @@ static int kvm_get_vcpu_events(CPUState *env)
 return 0;
 }
 
+#ifdef KVM_UPSTREAM
 int kvm_arch_put_registers(CPUState *env)
 {
 int ret;
diff --git a/target-i386/machine.c b/target-i386/machine.c
index 47ca6e8..219224d 100644
--- a/target-i386/machine.c
+++ b/target-i386/machine.c
@@ -324,6 +324,7 @@ static void cpu_pre_save(void *opaque)
 
 cpu_synchronize_state(env);
 kvm_save_mpstate(env);
+kvm_get_vcpu_events(env);
 
 /* FPU */
env->fpus_vmstate = (env->fpus & ~0x3800) | (env->fpstt & 0x7) << 11;
@@ -374,6 +375,7 @@ static int cpu_post_load(void *opaque, int version_id)
 
 kvm_load_tsc(env);
 kvm_load_mpstate(env);
+kvm_put_vcpu_events(env);
 }
 
 return 0;


Windows Vista/7 repeatedly prompt to Set Network Location

2010-01-26 Thread Jamin W. Collins
Every time I start a Windows Vista or Windows 7 virtual machine, it 
requests that a location be set for the network, even though the network 
location has already been set the same way every time the system is 
started.  Near as I can tell the VM's NIC MAC, IP address, DNS servers, 
default gateway, and all other network related items are the same every 
single time.  While this does not stop the VM from working, it is 
annoying.


I'm starting the VM the same way through libvirt every single time.  The 
resultant kvm command line is:


/usr/bin/kvm -S -M pc-0.11 -cpu qemu32 -enable-kvm -m 1536 -smp 1 -name 
win7 -uuid 6671f42a-b974-6bb9-bd7e-fd1da95cabe5 -monitor 
unix:/var/lib/libvirt/qemu/win7.monitor,server,nowait -localtime -boot c 
-drive file=/media/devel/testing/win7/testing.img,if=ide,index=0,boot=on 
-drive 
file=/media/devel/testing/isos/en_windows_7_professional_x86_dvd_x15-65804.iso,if=ide,media=cdrom,index=2 
-net nic,macaddr=54:52:00:37:4a:ce,vlan=0,model=rtl8139,name=rtl8139.0 
-net tap,fd=38,vlan=0,name=tap.0 -serial pty -parallel none -usb 
-usbdevice tablet -vga cirrus


While the uuid isn't the same from one execution to the next, to my 
knowledge it's not something the VM ever sees and is only an identifier 
within KVM.  Has anyone else seen anything like this?


Please CC me, as I'm not subscribed to the list.


Re: [Qemu-devel] Re: [PATCH qemu-kvm] Add raw(af_packet) network backend to qemu

2010-01-26 Thread Arnd Bergmann
On Wednesday 27 January 2010, Anthony Liguori wrote:
  The raw backend can be attached to a physical device
 
 This is equivalent to bridging with tun/tap except that it has the 
 unexpected behaviour of unreliable host/guest networking (which is not 
 universally consistent across platforms either).  This is not a mode we 
 want to encourage users to use.

It's not the most common scenario, but I've seen systems (I remember
one on s/390 with z/VM) where you really want to isolate the guest
network as much as possible from the host network. Besides PCI
passthrough, giving the host device to a guest using a raw socket
is the next best approximation of that.

Then again, macvtap will do that too, if the device driver supports
multiple unicast MAC addresses without forcing promiscous mode.

  , macvlan
 
 macvtap is a superior way to achieve this use case because a macvtap fd 
 can safely be given to a lesser privilege process without allowing 
 escalation of privileges.

Yes.

or SR-IOV VF.
 
 
 This depends on vhost-net.

Why? I don't see anything in this scenario that is vhost-net specific.
I also plan to cover this aspect in macvtap in the future, but the current
code does not do it yet. It also requires device driver changes.

   In general, what I would like to see for 
 this is something more user friendly that dealt specifically with this 
 use-case.  Although honestly, given the recent security concerns around 
 raw sockets, I'm very concerned about supporting raw sockets in qemu at all.
 
 Essentially, you get worse security doing vhost-net + raw + VF than with 
 PCI passthrough + VF because at least in the latter case you can run qemu 
 without privileges.  CAP_NET_RAW is a very big privilege.

It can be contained to a large degree with network namespaces. When you
run qemu in its own namespace and add the VF to that, CAP_NET_RAW
should ideally have no effect on other parts of the system (except
bugs in the namespace implementation).

Arnd


Re: [PATCH] kvmppc/booke: Set ESR and DEAR when inject interrupt to guest

2010-01-26 Thread Hollis Blanchard
On Mon, Jan 25, 2010 at 3:32 AM, Liu Yu-B13201 b13...@freescale.com wrote:

 -Original Message-
 From: Alexander Graf [mailto:ag...@suse.de]
 Sent: Friday, January 22, 2010 7:33 PM
 To: Liu Yu-B13201
 Cc: hol...@penguinppc.org; kvm-ppc@vger.kernel.org;
 k...@vger.kernel.org
 Subject: Re: [PATCH] kvmppc/booke: Set ESR and DEAR when
 inject interrupt to guest


 On 22.01.2010, at 12:27, Liu Yu-B13201 wrote:

 
 
  -Original Message-
  From: kvm-ppc-ow...@vger.kernel.org
  [mailto:kvm-ppc-ow...@vger.kernel.org] On Behalf Of Alexander Graf
  Sent: Friday, January 22, 2010 7:13 PM
  To: Liu Yu-B13201
  Cc: hol...@penguinppc.org; kvm-ppc@vger.kernel.org;
  k...@vger.kernel.org
  Subject: Re: [PATCH] kvmppc/booke: Set ESR and DEAR when
  inject interrupt to guest
 
 
  On 22.01.2010, at 11:54, Liu Yu wrote:
 
  Old method prematurely sets ESR and DEAR.
  Move this part after we decide to inject interrupt,
and make it behave more like hardware.
 
  Signed-off-by: Liu Yu yu@freescale.com
  ---
  arch/powerpc/kvm/booke.c   |   24 ++--
  arch/powerpc/kvm/emulate.c |    2 --
  2 files changed, 14 insertions(+), 12 deletions(-)
 
@@ -286,15 +295,12 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
            break;
 
    case BOOKE_INTERRUPT_DATA_STORAGE:
  -         vcpu->arch.dear = vcpu->arch.fault_dear;
  -         vcpu->arch.esr = vcpu->arch.fault_esr;
            kvmppc_booke_queue_irqprio(vcpu,
  BOOKE_IRQPRIO_DATA_STORAGE);
 
  kvmppc_booke_queue_data_storage(vcpu, vcpu->arch.fault_esr,
  vcpu->arch.fault_dear);
 
            kvmppc_account_exit(vcpu, DSI_EXITS);
            r = RESUME_GUEST;
            break;
 
    case BOOKE_INTERRUPT_INST_STORAGE:
  -         vcpu->arch.esr = vcpu->arch.fault_esr;
            kvmppc_booke_queue_irqprio(vcpu,
  BOOKE_IRQPRIO_INST_STORAGE);
 
  kvmppc_booke_queue_inst_storage(vcpu, vcpu->arch.fault_esr);
 
 
  Not sure if this is redundant, as we already have fault_esr.
   Or should we ignore what the hardware reports and create a new esr for the guest?

 On Book3S I take the SRR1 we get from the host as
 inspiration of what to pass to the guest as SRR1. I think
 we should definitely be able to inject a fault that we didn't
 get in that exact form from the exit path.

 I'm also not sure if something could clobber fault_esr if
 another interrupt takes precedence. Say a #MC.

 No as far as I know.
 And if yes, the clobber could as well happen before we copy it.
 Hollis, what do you think we should do here?

I'm torn, and in some ways it's not that important right now. However,
I think it makes sense to add something like vcpu->queued_esr as a
separate field from vcpu->fault_esr. The use case I'm thinking of is a
debugger wanting to invoke guest kernel to provide a translation for
an address not currently present in the TLB (i.e. not currently
visible to the debugger).

-Hollis