Ping.
Have I committed a bug-reporting sin in the mail below or is everyone
simply too busy to look at this kvm-related crash?
On 07/09/12 11:57, Chris Clayton wrote:
Hi,
When I run WinXP SP3 through qemu-kvm-1.1.0 on linux kernel 3.5.0-rc6, I
get a segmentation fault within 3 or 4 minutes
On Wed, Jul 11, 2012 at 08:09:42AM +0100, Chris Clayton wrote:
Ping.
Have I committed a bug-reporting sin in the mail below or is
everyone simply too busy to look at this kvm-related crash?
Since you have good and bad points, can you bisect the problem?
On 07/09/12 11:57, Chris Clayton
On 07/11/12 08:12, Gleb Natapov wrote:
On Wed, Jul 11, 2012 at 08:09:42AM +0100, Chris Clayton wrote:
Ping.
Have I committed a bug-reporting sin in the mail below or is
everyone simply too busy to look at this kvm-related crash?
Since you have good and bad points, can you bisect the problem?
On Wed, Jul 11, 2012 at 08:18:17AM +0100, Chris Clayton wrote:
On 07/11/12 08:12, Gleb Natapov wrote:
On Wed, Jul 11, 2012 at 08:09:42AM +0100, Chris Clayton wrote:
Ping.
Have I committed a bug-reporting sin in the mail below or is
everyone simply too busy to look at this kvm-related crash?
DMY guys.
I've sorted it.
We're happy now.
From: verucasal...@hotmail.co.uk
To: kvm@vger.kernel.org
Subject: Issue with mouse-capture
Date: Mon, 9 Jul 2012 08:16:10 +
I realise you guys are very busy, but I'm about to go into the Qemu-kvm code
On 07/11/2012 03:56 AM, Alexander Graf wrote:
Hi Avi,
This is my current patch queue for ppc. Please pull.
It contains the following changes:
* VERY IMPORTANT (please forward to -stable):
Fix H_CEDE with PR KVM and newer guest kernels
If it's important please separate it and put
On 07/11/2012 03:56 AM, Alexander Graf wrote:
Hi Avi,
This is my current patch queue for ppc. Please pull.
* Book3S HV: Fix locks (should be in your tree already?)
Indeed it's in 3.5 already. The way to check is to look for it in
auto-next, which includes master, upstream, and next.
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
Noting pause loop exited vcpu helps in filtering right candidate to yield.
Yielding to same vcpu may result in more wastage of cpu.
struct kvm_lpage_info {
diff --git
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
For the cpu threads in the host that are actually active (in this case
1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
is for a no-IO workload, so that's just incredible to see so much cpu
wasted. I feel that 2
On 07/09/2012 10:55 AM, Christian Borntraeger wrote:
On 09/07/12 08:20, Raghavendra K T wrote:
Currently Pause Loop Exit (PLE) handler is doing directed yield to a
random VCPU on PL exit. Though we already have filtering while choosing
the candidate to yield_to, we can do better.
Problem
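The filtering idea quoted above can be sketched in plain C. All names below are illustrative (the real change lives in the directed-yield path of virt/kvm/kvm_main.c); this is a sketch of the heuristic, not the patch itself:

```c
#include <stddef.h>

/* Hypothetical, simplified per-vcpu state: the patch's idea is to note
 * which vcpus took a pause-loop exit and consult that when choosing a
 * directed-yield target. */
struct vcpu {
    int id;
    int ple_exited;   /* set when this vcpu recently took a PL exit */
    int running;
};

/* Pick a yield_to candidate: skip ourselves, skip running vcpus, and
 * skip vcpus that themselves PLE-exited -- those are likely spinners,
 * and yielding to a spinner just wastes more cpu. */
struct vcpu *pick_yield_candidate(struct vcpu *vcpus, size_t n, int self_id)
{
    for (size_t i = 0; i < n; i++) {
        struct vcpu *v = &vcpus[i];
        if (v->id == self_id || v->running)
            continue;
        if (v->ple_exited)   /* filtered out: likely a lock waiter */
            continue;
        return v;            /* plausible lock holder: yield to it */
    }
    return NULL;
}
```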
On 07/03/2012 10:21 PM, Alex Williamson wrote:
Here's the latest iteration of adding an interface to assert and
de-assert level interrupts from external drivers like vfio. These
apply on top of the previous argument cleanup, documentation, and
sanitization patches for irqfd. It would be
On 07/09/2012 07:53 PM, Alex Williamson wrote:
The kernel no longer allows us to pass NULL for the hard handler
without also specifying IRQF_ONESHOT. IRQF_ONESHOT imposes latency
in the exit path that we don't need for MSI interrupts. Long term
we'd like to inject these interrupts from the
On 06/19/2012 06:42 PM, Chegu Vinod wrote:
Hello,
Wanted to share some preliminary data from live migration experiments on a setup
that is perhaps one of the larger ones.
We used Juan's huge_memory patches (without the separate migration thread) and
measured the total migration time and the
On 06/19/2012 08:22 PM, Michael Roth wrote:
On Tue, Jun 19, 2012 at 11:34:42PM +0900, Takuya Yoshikawa wrote:
On Tue, 19 Jun 2012 09:01:36 -0500
Anthony Liguori anth...@codemonkey.ws wrote:
I'm not at all convinced that postcopy is a good idea. There needs to be a clear
expression of what the
On 07/06/2012 07:22 PM, Jan Kiszka wrote:
Replace the home-brewed qdev property for PCI host addresses with the
new upstream version.
Thanks, applied.
--
error compiling committee.c: too many arguments to function
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several heuristics:
- loop for a given amount of loops
- check if the
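The heuristic Christian describes (spin a bounded number of loops, then directed-yield to the holder, as diag 0x9c does on s390) might look roughly like this. The retry bound, the names, and the yield hook are all illustrative, not the actual arch/s390/lib/spinlock.c code:

```c
#define SPIN_RETRIES 1000   /* illustrative bound on busy-waiting */

/* Abstracted yield hook so the heuristic is testable; on s390 this
 * would be a diag 0x9c hypercall targeting the owning cpu. */
typedef void (*yield_fn)(int owner_cpu);

/* Spin for a given amount of loops; if the lock is still held,
 * directed-yield to the owner, then spin again. Returns 1 on
 * acquisition and records how many yields were needed. */
int spin_lock_with_yield(volatile int *lock_owner, int self,
                         yield_fn yield, int *yields)
{
    *yields = 0;
    for (;;) {
        for (int i = 0; i < SPIN_RETRIES; i++) {
            if (*lock_owner < 0) {        /* lock is free */
                *lock_owner = self;
                return 1;
            }
        }
        yield(*lock_owner);               /* give the cpu to the holder */
        (*yields)++;
        if (*yields > 3)                  /* arbitrary give-up for the sketch */
            return 0;
    }
}
```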
On 2012-07-11 11:53, Avi Kivity wrote:
On 07/03/2012 10:21 PM, Alex Williamson wrote:
Here's the latest iteration of adding an interface to assert and
de-assert level interrupts from external drivers like vfio. These
apply on top of the previous argument cleanup, documentation, and
This is v2 of the ACPI memory hotplug prototype for x86_64 target.
Changes v1 -> v2
- memory map is automatically calculated for hotplug dimms. Dimms are added from
top-of-memory skipping the pci hole at [PCI_HOLE_START, 4G).
- Renamed from -memslot to -dimm. Commands changed to dimm_add,
Extend the DSDT to include methods for handling memory hot-add and hot-remove
notifications and memory device status requests. These functions are called
from the memory device SSDT methods.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
src/acpi-dsdt.dsl | 70
A 32-byte register is used to present up to 256 hotplug-able memory devices
to BIOS and OSPM. Hot-add and hot-remove functions trigger an ACPI hotplug
event through these. Only reads are allowed from these registers.
An ACPI hot-remove event, however, needs to wait for OSPM to eject the device.
We use
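A 32-byte register covering up to 256 devices is just a bitmap (32 × 8 bits, one bit per dimm). A sketch of how such a status register could be indexed; helper names are made up, the actual layout is specified in the patchset's docs/specs/acpi_hotplug.txt:

```c
#include <stdint.h>

#define MEM_HOTPLUG_REG_LEN 32          /* 32 bytes = 256 dimm bits */

/* One bit per hotplug-able memory device. */
static inline void dimm_set_bit(uint8_t *reg, unsigned dimm)
{
    reg[dimm / 8] |= (uint8_t)(1u << (dimm % 8));
}

static inline int dimm_test_bit(const uint8_t *reg, unsigned dimm)
{
    return (reg[dimm / 8] >> (dimm % 8)) & 1;
}
```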
Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
This patch implements a tail queue to store guest notifications for memory
hot-add and hot-remove requests.
Guest responses for memory hotplug command on a per-dimm basis can be detected
with the new hmp command info memhp or
Add support for _OST method. _OST method will write into the correct I/O byte to
signal success / failure of hot-add or hot-remove to qemu.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
src/acpi-dsdt.dsl | 46 ++
This allows failed hot operations to be retried at any time. This only
works for guests that use _OST notification. Other guests cannot retry failed
hot operations on same devices until after reboot.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
hw/acpi_piix4.c |
Implement batch dimm creation command line options. These could be useful for
not bloating the command line with a large number of dimms.
syntax: -dimms pfx=poolid,size=sz,num=n
Will create n dimms with ids poolid0, ..., poolidn-1. Each dimm has a
size of sz.
Implement -dimmpop option
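The naming scheme implied by the syntax (pfx=poolid, num=n gives ids poolid0 ... poolidn-1) can be illustrated with a trivial id generator; this is illustrative code, not the patch itself:

```c
#include <stdio.h>

/* Build the id of the i-th dimm in a batch: "<pfx><i>", producing
 * poolid0, poolid1, ..., poolidn-1 for pfx=poolid. */
static int dimm_batch_id(char *buf, size_t len, const char *pfx, unsigned i)
{
    return snprintf(buf, len, "%s%u", pfx, i);
}
```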
Each hotplug-able memory slot is a SysBusDevice. A hot-add operation for a
particular dimm creates a new MemoryRegion of the given physical address
offset, size and node proximity, and attaches it to main system memory as a
sub_region. A hot-remove operation detaches and frees the MemoryRegion
Live migration works after memory hot-add events, as long as the
qemu command line -dimm arguments are changed on the destination host
to specify populated=on for the dimms that have been hot-added.
If a command-line change has not occurred, the destination host does not yet
have the corresponding
in case of hot-remove or hot-add failure, the dimm bitmaps in qemu and Seabios
are inconsistent with the true state of the DIMM devices. The populated field
of the DimmState reflects the true state of the device. This inconsistency means
that a failed operation cannot be retried.
This patch
This reverts bitmap state in the case of a failed hot operation, in order to
allow retry of failed hot operations
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
src/acpi-dsdt.dsl | 4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git
Returns total memory of guest in bytes, including hotplugged memory.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
hmp-commands.hx |  2 ++
hmp.c           |  7 +++
hmp.h           |  1 +
hw/dimm.c       | 15 +++
monitor.c       |
This allows qemu to receive notifications from the guest OS on success or
failure of a memory hotplug request. The guest OS needs to implement the _OST
functionality for this to work (linux-next: http://lkml.org/lkml/2012/6/25/321)
Also add new _OST registers in docs/specs/acpi_hotplug.txt
Hot-add hmp syntax: dimm_add dimmid
Hot-remove hmp syntax: dimm_del dimmid
Respective qmp commands are dimm-add, dimm-del.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
hmp-commands.hx | 32
monitor.c | 11 +++
Syntax: -dimm id=name,size=sz,node=pxm,populated=on|off
The starting physical address for all dimms is calculated automatically from top
of memory, skipping the pci hole at [PCI_HOLE_START, 4G).
populated=on means the dimm is populated at machine startup. Default is off.
node defines the numa
The numa_fw_cfg paravirt interface is extended to include SRAT information for
all hotplug-able dimms. There are 3 words for each hotplug-able memory slot,
denoting start address, size and node proximity. The new info is appended after
existing numa info, so that the fw_cfg layout does not break.
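The layout described, three words (start address, size, node proximity) per hotplug-able slot appended after the existing numa words, could be sketched as follows; the struct and function names are assumptions, not the actual QEMU code:

```c
#include <stddef.h>
#include <stdint.h>

/* Per-dimm record appended to the numa fw_cfg blob: start address,
 * size, and node proximity, one 64-bit word each. */
struct dimm_fw_cfg {
    uint64_t start;
    uint64_t size;
    uint64_t node;
};

/* Append dimm records after the existing numa words, so guests that
 * read only the original prefix see an unchanged layout. */
static size_t numa_fw_cfg_append_dimms(uint64_t *blob, size_t numa_words,
                                       const struct dimm_fw_cfg *dimms,
                                       size_t ndimms)
{
    uint64_t *p = blob + numa_words;
    for (size_t i = 0; i < ndimms; i++) {
        *p++ = dimms[i].start;
        *p++ = dimms[i].size;
        *p++ = dimms[i].node;
    }
    return numa_words + 3 * ndimms;   /* new total word count */
}
```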
Dimm physical address offsets are calculated automatically and memory map is
adjusted accordingly. If a DIMM can fit before the PCI_HOLE_START (currently
0xe000), it will be added normally, otherwise its physical address will be
above 4GB.
Signed-off-by: Vasilis Liaskovitis
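The placement rule described (fit a dimm below PCI_HOLE_START when possible, otherwise place it above 4GB) reduces to a small address calculation. Constants and names here are illustrative, following the cover letter's stated hole start:

```c
#include <stdint.h>

#define PCI_HOLE_START 0xe0000000ULL   /* per the cover letter */
#define ABOVE_4G       0x100000000ULL

/* Return the base address for the next dimm, given the current
 * top-of-memory below the pci hole and above 4GB. If the dimm still
 * fits under PCI_HOLE_START it goes there; otherwise above 4GB. */
static uint64_t dimm_base(uint64_t low_top, uint64_t high_top, uint64_t size)
{
    if (low_top + size <= PCI_HOLE_START)
        return low_top;
    return high_top < ABOVE_4G ? ABOVE_4G : high_top;
}
```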
The memory device generation is guided by qemu paravirt info. Seabios
first uses the info to setup SRAT entries for the hotplug-able memory slots.
Afterwards, build_memssdt uses the created SRAT entries to generate
appropriate memory device objects. One memory device (and corresponding SRAT
entry)
Define SSDT hotplug-able memory devices in _SB namespace. The dynamically
generated SSDT includes per memory device hotplug methods. These methods
just call methods defined in the DSDT. Also dynamically generate a MTFY
method and a MEON array of the online/available memory devices. ACPI
At 07/11/2012 06:31 PM, Vasilis Liaskovitis Wrote:
The memory device generation is guided by qemu paravirt info. Seabios
first uses the info to setup SRAT entries for the hotplug-able memory slots.
Afterwards, build_memssdt uses the created SRAT entries to generate
appropriate memory device
On 07/11/2012 01:18 PM, Jan Kiszka wrote:
On 2012-07-11 11:53, Avi Kivity wrote:
On 07/03/2012 10:21 PM, Alex Williamson wrote:
Here's the latest iteration of adding an interface to assert and
de-assert level interrupts from external drivers like vfio. These
apply on top of the previous
On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several
On 11.07.2012, at 13:04, Avi Kivity wrote:
On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
Perhaps x86 should copy this.
See
On 07/11/2012 01:52 PM, Raghavendra K T wrote:
On 07/11/2012 02:23 PM, Avi Kivity wrote:
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
Noting pause loop exited vcpu helps in filtering right candidate to
yield.
Yielding to same
On 11/07/12 13:04, Avi Kivity wrote:
On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
On 2012-07-11 12:49, Avi Kivity wrote:
On 07/11/2012 01:18 PM, Jan Kiszka wrote:
On 2012-07-11 11:53, Avi Kivity wrote:
On 07/03/2012 10:21 PM, Alex Williamson wrote:
Here's the latest iteration of adding an interface to assert and
de-assert level interrupts from external drivers like vfio.
On 07/11/2012 02:16 PM, Alexander Graf wrote:
yes the data structure itself seems based on the algorithm
and not on arch specific things. That should work. If we move that to
common
code then s390 will use that scheme automatically for the cases where we
call
kvm_vcpu_on_spin(). All
On 07/11/2012 02:18 PM, Christian Borntraeger wrote:
On 11/07/12 13:04, Avi Kivity wrote:
On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
Perhaps
VHOST_SET_MEM_TABLE failed: Operation not supported
In vhost_set_memory(), we have
	if (mem.padding)
		return -EOPNOTSUPP;
So, we need to zero struct vhost_memory.
Signed-off-by: Asias He asias.he...@gmail.com
---
tools/kvm/virtio/net.c | 2 +-
1 file changed, 1
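The fix is simply to zero the structure before issuing the ioctl. A minimal sketch of the check the kernel performs and the required userspace pattern; the kernel side mirrors the vhost_set_memory() check quoted above, the userspace helper name is made up:

```c
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Simplified mirror of the uapi header: the kernel rejects any
 * vhost_memory whose padding field is nonzero. */
struct vhost_memory_hdr {
    uint32_t nregions;
    uint32_t padding;
};

/* What vhost_set_memory() does with the padding field. */
static int vhost_check_memory(const struct vhost_memory_hdr *mem)
{
    if (mem->padding)
        return -EOPNOTSUPP;
    return 0;
}

/* The userspace side of the fix: never pass stack garbage to
 * VHOST_SET_MEM_TABLE -- memset() zeroes padding too. */
static void prepare_mem_table(struct vhost_memory_hdr *mem, uint32_t nregions)
{
    memset(mem, 0, sizeof(*mem));
    mem->nregions = nregions;
}
```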
Current qemu-kvm master merged with latest upstream fails on startup:
(gdb) bt
#0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
/home/tlv/akivity/qemu/kvm-all.c:1602
#1 0x7fdcd49c9fda in kvm_apic_enable_tpr_reporting
(s=0x7fdcd75af6c0, enable=false) at
On 07/11/2012 02:23 PM, Jan Kiszka wrote:
I'd appreciate a couple of examples for formality's sake.
From the top of my head: NVIDIA FX3700 (granted, legacy by now), Atheros
AR9287. For others, I need to check.
Thanks.
And then there is not easily replaceable legacy hardware like old
On 11.07.2012, at 13:23, Avi Kivity wrote:
On 07/11/2012 02:16 PM, Alexander Graf wrote:
yes the data structure itself seems based on the algorithm
and not on arch specific things. That should work. If we move that to
common
code then s390 will use that scheme automatically for the
On 07/11/2012 03:47 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several
On 2012-07-11 13:46, Avi Kivity wrote:
Current qemu-kvm master merged with latest upstream fails on startup:
(gdb) bt
#0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
/home/tlv/akivity/qemu/kvm-all.c:1602
#1 0x7fdcd49c9fda in kvm_apic_enable_tpr_reporting
On 11/07/12 13:51, Raghavendra K T wrote:
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several heuristics:
- loop for a given amount of loops
- check if
On 07/11/12 12:31, Vasilis Liaskovitis wrote:
In order to hotplug memory between RamSize and BUILD_PCIMEM_START, the pci
window needs to start at BUILD_PCIMEM_START (0xe000).
Otherwise, the guest cannot online new dimms at those ranges due to pci_root
window conflicts. (workaround for
On 07/11/2012 02:55 PM, Jan Kiszka wrote:
On 2012-07-11 13:46, Avi Kivity wrote:
Current qemu-kvm master merged with latest upstream fails on startup:
(gdb) bt
#0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
/home/tlv/akivity/qemu/kvm-all.c:1602
#1
On 07/11/2012 04:48 PM, Avi Kivity wrote:
On 07/11/2012 01:52 PM, Raghavendra K T wrote:
On 07/11/2012 02:23 PM, Avi Kivity wrote:
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
Noting pause loop exited vcpu helps in filtering
On 2012-07-11 13:58, Avi Kivity wrote:
On 07/11/2012 02:55 PM, Jan Kiszka wrote:
On 2012-07-11 13:46, Avi Kivity wrote:
Current qemu-kvm master merged with latest upstream fails on startup:
(gdb) bt
#0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
On 07/11/2012 02:59 PM, Jan Kiszka wrote:
I will try to reproduce. Is there a tree of the merge available?
I just merged upstream into qemu-kvm master. For some reason there were
no conflicts.
A rare moment, I guess. ;)
I'll put it down to random chance until we can figure out who's
On 07/11/2012 05:25 PM, Christian Borntraeger wrote:
On 11/07/12 13:51, Raghavendra K T wrote:
Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several
On 07/11/2012 03:04 PM, Avi Kivity wrote:
specific command line or guest?
qemu-system-x86_64
Just did the same, but it's all fine here.
Ok, I'll debug it. Probably something stupid like a miscompile.
Indeed, a simple clean build fixed it up. Paolo, it looks like
autodependencies are
Il 11/07/2012 14:08, Avi Kivity ha scritto:
specific command line or guest?
qemu-system-x86_64
Just did the same, but it's all fine here.
Ok, I'll debug it. Probably something stupid like a miscompile.
Indeed, a simple clean build fixed it up. Paolo, it looks like
- Original Message -
Hm, suppose we're the next-in-line for a ticket lock and exit due
to
PLE. The lock holder completes and unlocks, which really assigns
the
lock to us. So now we are the lock owner, yet we are marked as
don't
yield-to-us in the PLE code.
Yes..
On 07/11/2012 05:21 PM, Raghavendra K T wrote:
On 07/11/2012 03:47 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
So there is no win here, but there are other cases where diag44 is
used, e.g. cpu_relax.
I have to double check with others, if these cases are
On 06/21/2012 04:48 AM, Xiao Guangrong wrote:
On 06/20/2012 10:11 PM, Takuya Yoshikawa wrote:
We can change the debug message later if needed.
Actually, i am going to use tracepoint instead of
these debug code.
Yes, these should be in the kvmmmu namespace.
On 06/20/2012 10:56 AM, Xiao Guangrong wrote:
Changlog:
- always atomically update the spte if it can be updated out of mmu-lock
- rename spte_can_be_writable() to spte_is_locklessly_modifiable()
- cleanup and comment spte_write_protect()
Performance result:
(The benchmark can be found at:
On 07/11/2012 02:30 PM, Avi Kivity wrote:
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
For the cpu threads in the host that are actually active (in this case
1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
is for a no-IO workload, so that's just incredible to see so
Hello Joerg,
Joerg Roedel wrote:
On Tue, Jun 05, 2012 at 08:27:05AM -0600, Alex Williamson wrote:
Joerg, the question is whether the multifunction device above allows
peer-to-peer between functions that could bypass the iommu. If not, we
can make it the first entry in device specific acs
On 07/11/2012 01:32 PM, Vasilis Liaskovitis wrote:
Implement batch dimm creation command line options. These could be useful for
not bloating the command line with a large number of dimms.
IMO this is unneeded. With a management tool there is no problem
generating a long command line; from the
On 07/11/2012 04:31 AM, Vasilis Liaskovitis wrote:
Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
This patch implements a tail queue to store guest notifications for memory
hot-add and hot-remove requests.
Guest responses for memory hotplug command on a per-dimm basis
On 07/11/2012 04:32 AM, Vasilis Liaskovitis wrote:
Returns total memory of guest in bytes, including hotplugged memory.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
Should this instead be merged with query-balloon output, so that we have
a single command that shows
Hi Avi,
This is my current patch queue for ppc against master.
It contains an important bug fix which can lead to guest freezes when
using PAPR guests with PR KVM.
Please pull.
Alex
The following changes since commit 85b7059169e128c57a3a8a3e588fb89cb2031da1:
Xiao Guangrong (1):
KVM:
From: Benjamin Herrenschmidt b...@kernel.crashing.org
H_CEDE should enable the vcpu's MSR:EE bit. It does on HV KVM (it's
buried in the assembly code though) and as far as I can tell, qemu
does it as well.
Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Alexander
On 07/11/2012 06:38 PM, Alexander Graf wrote:
Hi Avi,
This is my current patch queue for ppc against master.
It contains an important bug fix which can lead to guest freezes when
using PAPR guests with PR KVM.
Please pull.
Thanks, pulled.
VHOST_SET_MEM_TABLE failed: Operation not supported
In vhost_set_memory(), we have
	if (mem.padding)
		return -EOPNOTSUPP;
So, we need to zero struct vhost_memory.
Signed-off-by: Asias He asias.he...@gmail.com
---
tools/kvm/virtio/net.c | 2 +-
1 file changed, 1
If vhost is enabled for a virtio device, vhost will poll the ioeventfd
on the kernel side and there is no need to poll it in userspace. Otherwise,
both vhost kernel and userspace will race to poll.
Signed-off-by: Asias He asias.he...@gmail.com
---
tools/kvm/include/kvm/ioeventfd.h | 2 +-
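The decision described, polling the ioeventfd in userspace only when vhost is not handling it, reduces to a flag check at registration time. A minimal sketch with assumed names (the actual change touches tools/kvm/ioeventfd.c and the virtio devices):

```c
#include <stdbool.h>

#define IOEVENTFD_FLAG_USER_POLL 1   /* assumed flag bit, per the patch idea */

/* Add the eventfd to the userspace epoll set only when the caller asked
 * for userspace polling. With vhost enabled the kernel polls instead,
 * and polling on both sides would make them race for each event. */
static bool ioeventfd_should_user_poll(int flags, bool vhost_enabled)
{
    if (vhost_enabled)
        return false;
    return (flags & IOEVENTFD_FLAG_USER_POLL) != 0;
}
```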
On 07/11/2012 07:08 PM, Asias He wrote:
VHOST_SET_MEM_TABLE failed: Operation not supported
In vhost_set_memory(), we have
	if (mem.padding)
		return -EOPNOTSUPP;
So, we need to zero struct vhost_memory.
Is this due to a change in vhost?
Hi,
On Wed, Jul 11, 2012 at 06:48:38PM +0800, Wen Congyang wrote:
+    if (enabled)
+        add_e820(mem_base, mem_len, E820_RAM);
add_e820() is declared in memmap.h. You should include this header file,
otherwise, seabios cannot be built.
Thanks. You had the same comment on v1
Hi,
On Wed, Jul 11, 2012 at 01:56:19PM +0200, Gerd Hoffmann wrote:
On 07/11/12 12:31, Vasilis Liaskovitis wrote:
In order to hotplug memory between RamSize and BUILD_PCIMEM_START, the pci
window needs to start at BUILD_PCIMEM_START (0xe000).
Otherwise, the guest cannot online new dimms
Hi,
On Wed, Jul 11, 2012 at 08:59:03AM -0600, Eric Blake wrote:
On 07/11/2012 04:31 AM, Vasilis Liaskovitis wrote:
Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
This patch implements a tail queue to store guest notifications for memory
hot-add and hot-remove
Hi,
On Wed, Jul 11, 2012 at 09:14:29AM -0600, Eric Blake wrote:
On 07/11/2012 04:32 AM, Vasilis Liaskovitis wrote:
Returns total memory of guest in bytes, including hotplugged memory.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
Should this instead be merged
Hi,
On Wed, Jul 11, 2012 at 05:55:25PM +0300, Avi Kivity wrote:
On 07/11/2012 01:32 PM, Vasilis Liaskovitis wrote:
Implement batch dimm creation command line options. These could be useful
for
not bloating the command line with a large number of dimms.
IMO this is unneeded. With a
Hi Andreas,
On Wed, Jul 11, 2012 at 04:26:30PM +0200, Andreas Hartmann wrote:
May I please ask, if you meanwhile could get any information about
potential peer-to-peer communication between the functions of the
following multifunction device:
Good news: I actually found the right person to
Introduce struct disk_image_params to contain all the disk image parameters.
This is useful for adding more disk image parameters, e.g. disk image
cache mode.
Signed-off-by: Asias He asias.he...@gmail.com
---
tools/kvm/builtin-run.c | 11 +--
tools/kvm/disk/core.c
On 05.07.2012, at 13:39, Caraman Mihai Claudiu-B02008 wrote:
-Original Message-
From: kvm-ppc-ow...@vger.kernel.org [mailto:kvm-ppc-
ow...@vger.kernel.org] On Behalf Of Alexander Graf
Sent: Wednesday, July 04, 2012 4:56 PM
To: Caraman Mihai Claudiu-B02008
Cc:
On 05.07.2012, at 14:54, Caraman Mihai Claudiu-B02008 wrote:
-Original Message-
From: Alexander Graf [mailto:ag...@suse.de]
Sent: Thursday, July 05, 2012 3:13 PM
To: Caraman Mihai Claudiu-B02008
Cc: kvm-...@vger.kernel.org; kvm@vger.kernel.org; linuxppc-
d...@lists.ozlabs.org;
On 07/06/2012 08:47 PM, Prarit Bhargava wrote:
[PATCH 1/2] kvm, Add x86_hyper_kvm to complete detect_hypervisor_platform
check [v3]
While debugging I noticed that unlike all the other hypervisor code in the
kernel, kvm does not have an entry for x86_hyper which is used in
Hi,
We're running into a problem where we can't start up a single instance
of kvm-qemu with 5 or more virtual functions (for the ethernet card)
being passed to the guest. It's an Intel I350 NIC if it matters.
I noticed a discussion in a thread titled [RFC PATCH 0/2] Expose
available KVM
Hello Joerg,
Joerg Roedel wrote:
Hi Andreas,
On Wed, Jul 11, 2012 at 04:26:30PM +0200, Andreas Hartmann wrote:
May I please ask, if you meanwhile could get any information about
potential peer-to-peer communication between the functions of the
following multifunction device:
Good news:
On Wed, 2012-07-11 at 10:52 -0600, Chris Friesen wrote:
Hi,
We're running into a problem where we can't start up a single instance
of kvm-qemu with 5 or more virtual functions (for the ethernet card)
being passed to the guest. It's an Intel I350 NIC if it matters.
I noticed a
On Wed, 2012-07-11 at 14:51 +0300, Avi Kivity wrote:
On 07/11/2012 02:23 PM, Jan Kiszka wrote:
I'd appreciate a couple of examples for formality's sake.
From the top of my head: NVIDIA FX3700 (granted, legacy by now), Atheros
AR9287. For others, I need to check.
Thanks.
And
On 07/11/2012 01:34 PM, Alex Williamson wrote:
The limiting factor to increasing memory slots was searching the array.
That's since been fixed by caching mmio page table entries.
Thanks for the confirmation of my suspicions.
Do you know roughly when this went in? A commit ID would be great.
On Wed, 2012-07-11 at 21:32 +0200, Andreas Hartmann wrote:
Hello Joerg,
Joerg Roedel wrote:
Hi Andreas,
On Wed, Jul 11, 2012 at 04:26:30PM +0200, Andreas Hartmann wrote:
May I please ask, if you meanwhile could get any information about
potential peer-to-peer communication between
On Wed, 2012-07-11 at 13:56 -0600, Chris Friesen wrote:
On 07/11/2012 01:34 PM, Alex Williamson wrote:
The limiting factor to increasing memory slots was searching the array.
That's since been fixed by caching mmio page table entries.
Thanks for the confirmation of my suspicions.
Do
From: Nicholas Bellinger n...@linux-iscsi.org
This QEMU patch sets VirtIOSCSIConfig->max_target=0 for vhost-scsi operation
to restrict virtio-scsi LLD guest scanning to max_id=0 (a single target ID
instance) when connected to individual tcm_vhost endpoints as requested by
Paolo.
This ensures that
On 07/11/2012 02:06 PM, Alex Williamson wrote:
On Wed, 2012-07-11 at 13:56 -0600, Chris Friesen wrote:
On 07/11/2012 01:34 PM, Alex Williamson wrote:
The limiting factor to increasing memory slots was searching the array.
That's since been fixed by caching mmio page table entries.
Thanks for
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
The following is a RFC-v2 series of tcm_vhost target fabric driver code
currently in-flight for-3.6 mainline code.
After last week's developments along with the help of some new folks, the
changelog v1 -> v2 so far looks like:
*) Fix
From: Nicholas Bellinger n...@risingtidesystems.com
This patch adds the initial vhost_scsi_ioctl() callers for
VHOST_SCSI_SET_ENDPOINT
and VHOST_SCSI_CLEAR_ENDPOINT respectively, and also adds struct
vhost_vring_target
that is used by tcm_vhost code when locating target ports during qemu setup.
From: Nicholas Bellinger n...@linux-iscsi.org
This patch adds the initial code for tcm_vhost, a Vhost level TCM
fabric driver for virtio SCSI initiators into KVM guest.
This code is currently up and running on v3.5-rc2 host+guest along
with the virtio-scsi vdev->scan() patch to allow a proper
From: Nicholas Bellinger n...@linux-iscsi.org
This patch changes virtio-scsi to use a new virtio_driver->scan() callback
so that scsi_scan_host() can be properly invoked once virtio_dev_probe() has
set add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK) to signal active virtio-ring
operation, instead of
On Wed, 2012-07-11 at 19:37 +0100, James Bottomley wrote:
On Fri, 2012-07-06 at 20:15 +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
This patch changes virtio-scsi to use a new virtio_driver->scan() callback
so that scsi_scan_host() can be properly
Hi
I have an Ubuntu 12.04 KVM server
In Centos6 VM's - when I install the latest kernel for centos 6.3 -
2.6.32-279.1.1.el6 - if you reboot from inside a Centos6 vm it gets
stuck in a loop between seabios/grub.
If i use virsh/virt-manager to reboot its fine, only from inside a
centos6 vm (with