Hello,
I am using the S2600CP server board with an Intel(R) Xeon(R) CPU
E5-2620 v2 @ 2.10GHz, which supports APICv.
Does this hardware support the VT-d posted-interrupt feature as described in
[v3 00/26] Add VT-d Posted-Interrupts support and
https://lkml.org/lkml/2014/12/3/102 ?
On 03/20/2015 02:38 AM, Waiman Long wrote:
On 03/19/2015 06:01 AM, Peter Zijlstra wrote:
[...]
You are probably right. The initial apply_paravirt() was done before the
SMP boot. Subsequent ones were at kernel module load time. I put a
counter in the __native_queue_spin_unlock() and it
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #8 from Igor Mammedov imamm...@redhat.com ---
(In reply to Thomas Stein from comment #7)
Hello.
After reverting commit 1d4e7e3c0bca747d0fc54069a6ab8393349431c0 I had no
problem any more. But we have to keep in mind this error only
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #9 from Thomas Stein himbe...@meine-oma.de ---
Hello.
I applied the patch to vanilla 3.19.2. No problems so far. Did a few snapshots
and vm restarts.
cheers
t.
--
You are receiving this mail because:
You are watching the assignee
https://bugzilla.kernel.org/show_bug.cgi?id=93251
--- Comment #7 from Thomas Stein himbe...@meine-oma.de ---
Hello.
After reverting commit 1d4e7e3c0bca747d0fc54069a6ab8393349431c0 I had no
problem any more. But we have to keep in mind this error only happened now and
then. Especially creating
From: Suresh Warrier warr...@linux.vnet.ibm.com
Interrupt-based hypercalls return H_TOO_HARD to inform KVM that it needs
to switch to the host to complete the rest of the hypercall function in
virtual mode. This patch ports the virtual mode ICS/ICP reject and resend
functions to be runnable in
From: Michael Ellerman mich...@ellerman.id.au
Some PowerNV systems include a hardware random-number generator.
This HWRNG is present on POWER7+ and POWER8 chips and is capable of
generating one 64-bit random number every microsecond. The random
numbers are produced by sampling a set of 64
Commit 4a157d61b48c (KVM: PPC: Book3S HV: Fix endianness of
instruction obtained from HEIR register) had the side effect that
we no longer reset vcpu->arch.last_inst to -1 on guest exit in
the cases where the instruction is not fetched from the guest.
This means that if instruction emulation turns
Currently, kvmppc_set_lpcr() has a spinlock around the whole function,
and inside that does mutex_lock(&kvm->lock). It is not permitted to
take a mutex while holding a spinlock, because the mutex_lock might
call schedule(). In addition, this causes lockdep to warn about a
lock ordering issue:
From: Suresh Warrier warr...@linux.vnet.ibm.com
Replaces the ICS mutex lock with a spin lock since we will be porting
these routines to real mode. Note that we need to disable interrupts
before we take the lock in anticipation of the fact that on the guest
side, we are running in the context of a
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This adds helper routines for locking and unlocking HPTEs, and uses
them in the rest of the code. We don't change any locking rules in
this patch.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Paul Mackerras
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We don't support real-mode areas now that 970 support is removed.
Remove the remaining details of rma from the code. Also rename
rma_setup_done to hpte_setup_done to better reflect the changes.
Signed-off-by: Aneesh Kumar K.V
From: Suresh Warrier warr...@linux.vnet.ibm.com
Add two counters to count how often we generate real-mode ICS resend
and reject events. The counters provide some performance statistics
that could be used in the future to consider if the real mode functions
need further optimizing. The counters
From: Suresh E. Warrier warr...@linux.vnet.ibm.com
Add counters to track the number of times we switch from guest real mode
to host virtual mode during an interrupt-related hypercall because the
hypercall requires actions that cannot be completed in real mode. This
will help when making
The VPA (virtual processor area) is defined by PAPR and is therefore
big-endian, so we need a be32_to_cpu when reading it in
kvmppc_get_yield_count(). Without this, H_CONFER always fails on a
little-endian host, causing SMP guests to waste time spinning on
spinlocks.
Cc: sta...@vger.kernel.org #
From: Bharata B Rao bhar...@linux.vnet.ibm.com
Since KVM isn't equipped to handle closure of a vcpu fd from userspace (QEMU)
correctly, certain workarounds have to be employed to allow reuse of a
vcpu array slot in KVM during cpu hotplug/unplug from the guest. One such
proposed workaround is to park the
This is my current patch queue for HV KVM on PPC. This series is
based on the queue branch of the KVM tree, i.e. roughly v4.0-rc3
plus a set of recent KVM changes which don't intersect with the
changes in this series. On top of that, in my testing I have some
patches which are not KVM-related
A KVM guest can fail to start up with the following trace on the host:
qemu-system-x86: page allocation failure: order:4, mode:0x40d0
Call Trace:
dump_stack+0x47/0x67
warn_alloc_failed+0xee/0x150
__alloc_pages_direct_compact+0x14a/0x150
__alloc_pages_nodemask+0x776/0xb80
Adding a few more details regarding the setup which I had created to
test the VT-d posted interrupts for assigned devices.
Hardware used for evaluating VT-d posted interrupts:
CPU E5-2620 v2 @ 2.10GHz and S2600CP server board.
I had used kernel 3.18 patched with KVM-VFIO IRQ forward
On Wed, Mar 18, 2015 at 04:53:28PM +0100, Francesc Guasch wrote:
I have three Ubuntu Server 14.04 trusty hosts with KVM. Two of
them are HP servers and one is Dell. Both brands run the KVM
virtual servers fine, and I can do live migration between
the HPs. But I get I/O errors in the vda when I
On 20.03.15 10:39, Paul Mackerras wrote:
This is my current patch queue for HV KVM on PPC. This series is
based on the queue branch of the KVM tree, i.e. roughly v4.0-rc3
plus a set of recent KVM changes which don't intersect with the
changes in this series. On top of that, in my testing I
On 20.03.15 10:39, Paul Mackerras wrote:
From: Bharata B Rao bhar...@linux.vnet.ibm.com
Since KVM isn't equipped to handle closure of vcpu fd from userspace(QEMU)
correctly, certain work arounds have to be employed to allow reuse of
vcpu array slot in KVM during cpu hot plug/unplug from
On 20.03.15 10:39, Paul Mackerras wrote:
This reads the timebase at various points in the real-mode guest
entry/exit code and uses that to accumulate total, minimum and
maximum time spent in those parts of the code. Currently these
times are accumulated per vcpu in 5 parts of the code:
*
On Fri, Mar 20, 2015 at 12:15:15PM +0100, Alexander Graf wrote:
On 20.03.15 10:39, Paul Mackerras wrote:
This reads the timebase at various points in the real-mode guest
entry/exit code and uses that to accumulate total, minimum and
maximum time spent in those parts of the code.
On 20.03.15 12:26, Paul Mackerras wrote:
On Fri, Mar 20, 2015 at 12:01:32PM +0100, Alexander Graf wrote:
On 20.03.15 10:39, Paul Mackerras wrote:
From: Bharata B Rao bhar...@linux.vnet.ibm.com
Since KVM isn't equipped to handle closure of vcpu fd from userspace(QEMU)
correctly, certain
On Fri, Mar 20, 2015 at 10:03:20AM +, Stefan Hajnoczi wrote:
Hi Stefan, thank you very much for answering me.
On Wed, Mar 18, 2015 at 04:53:28PM +0100, Francesc Guasch wrote:
I have three Ubuntu Server 14.04 trusty with KVM. Two of
them are HP servers and one is Dell. Both brands run
On 20.03.15 12:25, Paul Mackerras wrote:
On Fri, Mar 20, 2015 at 12:15:15PM +0100, Alexander Graf wrote:
On 20.03.15 10:39, Paul Mackerras wrote:
This reads the timebase at various points in the real-mode guest
entry/exit code and uses that to accumulate total, minimum and
maximum time
On 20.03.15 10:39, Paul Mackerras wrote:
This creates a debugfs directory for each HV guest (assuming debugfs
is enabled in the kernel config), and within that directory, a file
by which the contents of the guest's HPT (hashed page table) can be
read. The directory is named vm<pid>, where
On 20.03.15 10:39, Paul Mackerras wrote:
This uses msgsnd where possible for signalling other threads within
the same core on POWER8 systems, rather than IPIs through the XICS
interrupt controller. This includes waking secondary threads to run
the guest, the interrupts generated by the
On Fri, Mar 20, 2015 at 12:01:32PM +0100, Alexander Graf wrote:
On 20.03.15 10:39, Paul Mackerras wrote:
From: Bharata B Rao bhar...@linux.vnet.ibm.com
Since KVM isn't equipped to handle closure of vcpu fd from userspace(QEMU)
correctly, certain work arounds have to be employed to
https://bugzilla.kernel.org/show_bug.cgi?id=93251
Igor Mammedov imamm...@redhat.com changed:
What | Removed | Added
CC   |         | imamm...@redhat.com
When running a multi-threaded guest and vcpu 0 in a virtual core
is not running in the guest (i.e. it is busy elsewhere in the host),
thread 0 of the physical core will switch the MMU to the guest and
then go to nap mode in the code at kvm_do_nap. If the guest sends
an IPI to thread 0 using the
This uses msgsnd where possible for signalling other threads within
the same core on POWER8 systems, rather than IPIs through the XICS
interrupt controller. This includes waking secondary threads to run
the guest, the interrupts generated by the virtual XICS, and the
interrupts to bring the other
This arranges for threads that are napping due to their vcpu having
ceded or due to not having a vcpu to wake up at the end of the guest's
timeslice without having to be poked with an IPI. We do that by
arranging for the decrementer to contain a value no greater than the
number of timebase ticks
This replaces the assembler code for kvmhv_commence_exit() with C code
in book3s_hv_builtin.c. It also moves the IPI/message sending code
that was in book3s_hv_rm_xics.c into a new kvmhv_rm_send_ipi() function
so it can be used by kvmhv_commence_exit() as well as
icp_rm_set_vcpu_irq().
We can tell when a secondary thread has finished running a guest by
the fact that it clears its kvm_hstate.kvm_vcpu pointer, so there
is no real need for the nap_count field in the kvmppc_vcore struct.
This changes kvmppc_wait_for_nap to poll the kvm_hstate.kvm_vcpu
pointers of the secondary
This reads the timebase at various points in the real-mode guest
entry/exit code and uses that to accumulate total, minimum and
maximum time spent in those parts of the code. Currently these
times are accumulated per vcpu in 5 parts of the code:
* rm_entry - time taken from the start of
* Remove unused kvmppc_vcore::n_busy field.
* Remove setting of RMOR, since it was only used on PPC970 and the
PPC970 KVM support has been removed.
* Don't use r1 or r2 in setting the runlatch since they are
conventionally reserved for other things; use r0 instead.
* Streamline the code a
Currently, the entry_exit_count field in the kvmppc_vcore struct
contains two 8-bit counts, one of the threads that have started entering
the guest, and one of the threads that have started exiting the guest.
This changes it to an entry_exit_map field which contains two bitmaps
of 8 bits each.
This creates a debugfs directory for each HV guest (assuming debugfs
is enabled in the kernel config), and within that directory, a file
by which the contents of the guest's HPT (hashed page table) can be
read. The directory is named vm<pid>, where <pid> is the PID of the
process that created the
Rather than calling cond_resched() in kvmppc_run_core() before doing
the post-processing for the vcpus that we have just run (that is,
calling kvmppc_handle_exit_hv(), kvmppc_set_timer(), etc.), we now do
that post-processing before calling cond_resched(), and that post-
processing is moved out
Previously, if kvmppc_run_core() was running a VCPU that needed a VPA
update (i.e. one of its 3 virtual processor areas needed to be pinned
in memory so the host real mode code can update it on guest entry and
exit), we would drop the vcore lock and do the update there and then.
Future changes
On entry to the guest, secondary threads now wait for the primary to
switch the MMU after loading up most of their state, rather than before.
This means that the secondary threads get into the guest sooner, in the
common case where the secondary threads get to kvmppc_hv_entry before
the primary
On 03/20/2015 03:04 PM, Alex Williamson wrote:
On Fri, 2015-03-20 at 15:24 +0530, bk rakesh wrote:
Adding few more information regarding the setup which i had created to
test the vt-d posted interrupts for assigned devices,
Hardware used for evaluating vt-posted interrupts
cpu E5-2620 v2
2015-03-19 18:44-0300, Marcelo Tosatti:
On Wed, Mar 18, 2015 at 07:38:22PM +0100, Radim Krčmář wrote:
kvm_ioapic_update_eoi() wasn't called if directed EOI was enabled.
We need to do that for irq notifiers. (Like with edge interrupts.)
Fix it by skipping EOI broadcast only.
Bug:
On Fri, 2015-03-20 at 15:24 +0530, bk rakesh wrote:
Adding few more information regarding the setup which i had created to
test the vt-d posted interrupts for assigned devices,
Hardware used for evaluating vt-posted interrupts
cpu E5-2620 v2 @ 2.10GHz and S2600CP server board
I had
On Fri, 2015-03-20 at 15:10 +0100, Eric Auger wrote:
On 03/20/2015 03:04 PM, Alex Williamson wrote:
On Fri, 2015-03-20 at 15:24 +0530, bk rakesh wrote:
Adding few more information regarding the setup which i had created to
test the vt-d posted interrupts for assigned devices,
Hardware
On Fri, Mar 20, 2015 at 12:40:02PM +, Andre Przywara wrote:
On 03/19/2015 03:44 PM, Andre Przywara wrote:
Hej Christoffer,
[ ... ]
+static int vgic_handle_mmio_access(struct kvm_vcpu *vcpu,
+                                  struct kvm_io_device *this, gpa_t addr,
+
On 03/19/2015 03:44 PM, Andre Przywara wrote:
Hej Christoffer,
[ ... ]
+static int vgic_handle_mmio_access(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this, gpa_t addr,
+ int len, void *val, bool is_write)
+{
+ struct
On Thu, Mar 19, 2015 at 03:44:51PM +, Andre Przywara wrote:
Hej Christoffer,
On 14/03/15 14:27, Christoffer Dall wrote:
On Fri, Mar 13, 2015 at 04:10:08PM +, Andre Przywara wrote:
Currently we use a lot of VGIC specific code to do the MMIO
dispatching.
Use the previous reworks
2015-03-19 16:51-0600, James Sullivan:
I played around with native_compose_msi_msg and discovered the following:
* dm=0, rh=0 => Physical Destination Mode
* dm=0, rh=1 => Failed delivery
* dm=1, rh=0 => Logical Destination Mode, No Redirection
* dm=1, rh=1 => Logical Destination Mode,
On Fri, Mar 20, 2015 at 12:34:18PM +0100, Alexander Graf wrote:
On 20.03.15 12:26, Paul Mackerras wrote:
On Fri, Mar 20, 2015 at 12:01:32PM +0100, Alexander Graf wrote:
On 20.03.15 10:39, Paul Mackerras wrote:
From: Bharata B Rao bhar...@linux.vnet.ibm.com
Since KVM isn't
On 03/20/2015 09:15 AM, Radim Krčmář wrote:
2015-03-19 16:51-0600, James Sullivan:
I played around with native_compose_msi_msg and discovered the following:
* dm=0, rh=0 => Physical Destination Mode
* dm=0, rh=1 => Failed delivery
* dm=1, rh=0 => Logical Destination Mode, No Redirection
*
On Fri, Mar 20, 2015 at 09:51:26AM +, Igor Mammedov wrote:
KVM guest can fail to startup with following trace on host:
qemu-system-x86: page allocation failure: order:4, mode:0x40d0
Call Trace:
dump_stack+0x47/0x67
warn_alloc_failed+0xee/0x150
On Fri, 20 Mar 2015 08:59:03 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
On Fri, Mar 20, 2015 at 09:51:26AM +, Igor Mammedov wrote:
KVM guest can fail to startup with following trace on host:
qemu-system-x86: page allocation failure: order:4, mode:0x40d0
Call Trace:
On Thu, Mar 19, 2015 at 10:18 AM, Stefan Assmann sassm...@redhat.com wrote:
On 19.03.2015 15:04, jacob jacob wrote:
Hi Stefan,
have you been able to get PCI passthrough working without any issues
after the upgrade?
My XL710 fails to transfer regular TCP traffic (netperf). If that works
for
On 03/20/2015 09:22 AM, James Sullivan wrote:
On 03/20/2015 09:15 AM, Radim Krčmář wrote:
2015-03-19 16:51-0600, James Sullivan:
I played around with native_compose_msi_msg and discovered the following:
* dm=0, rh=0 => Physical Destination Mode
* dm=0, rh=1 => Failed delivery
* dm=1, rh=0 =>
Running
3.18.9-200.fc21.x86_64
qemu 2:2.1.3-3.fc21
libvirt 1.2.9.2-1.fc21
System is a Thinkpad X250 with Intel i7-5600u Broadwell GT2
I'm trying to replace the Win7 installation on my laptop with Fedora
21 and virtualizing Windows 7 for work purposes. I'd prefer to give
the guest its own NTFS
The patch adds one more EEH sub-command (VFIO_EEH_PE_INJECT_ERR)
to inject the specified EEH error, which is represented by
(struct vfio_eeh_pe_err), to the indicated PE for testing purposes.
Signed-off-by: Gavin Shan gws...@linux.vnet.ibm.com
---
Documentation/vfio.txt | 12
The patch defines PCI error types and functions in eeh.h and
exports the function eeh_pe_inject_err(), which will be called by
the VFIO driver to inject the specified PCI error into the indicated
PE for testing purposes.
Signed-off-by: Gavin Shan gws...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/eeh.h
These two patches are extensions to EEH support for VFIO PCI devices,
which allow injecting EEH errors into VFIO PCI devices from userspace
for testing purposes.
Changelog
=
v2 -> v3:
* Use offsetofend() instead of sizeof(struct vfio_eeh_pe_op)
  to calculate the argument buffer