Alex Bennée writes:
> Christoffer Dall writes:
>> On Sun, Feb 08, 2015 at 08:48:09AM +0100, Jan Kiszka wrote:
>>> BTW, KVM tracing support on ARM seems like it requires some care. E.g.:
>>> kvm_exit does not report an exit reason. The in-kernel vgic also seems
>>> to lack instrumentation. Unf
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_
Christoffer Dall writes:
> Hi Jan,
>
> On Sun, Feb 08, 2015 at 08:48:09AM +0100, Jan Kiszka wrote:
>> Hi,
>>
>> after fixing the VM_BUG_ON, my QEMU guest on the Jetson TK1 generally
>> refuses to boot. Once in a while it does, but quickly gets stuck again.
>> In one case I found this in the ker
On Fri, Feb 13, 2015 at 07:21:20AM +0100, Jan Kiszka wrote:
> Hi Christoffer,
>
> On 2015-02-13 05:46, Christoffer Dall wrote:
> > Hi Jan,
> >
> > On Sun, Feb 08, 2015 at 08:48:09AM +0100, Jan Kiszka wrote:
> >> Hi,
> >>
> >> after fixing the VM_BUG_ON, my QEMU guest on the Jetson TK1 generally
>
Hi Christoffer,
On 2015-02-13 05:46, Christoffer Dall wrote:
> Hi Jan,
>
> On Sun, Feb 08, 2015 at 08:48:09AM +0100, Jan Kiszka wrote:
>> Hi,
>>
>> after fixing the VM_BUG_ON, my QEMU guest on the Jetson TK1 generally
>> refuses to boot. Once in a while it does, but quickly gets stuck again.
>> I
On Sat, Feb 07, 2015 at 10:21:20PM +0100, Jan Kiszka wrote:
> From: Jan Kiszka
>
> The check is supposed to catch page-unaligned sizes, not the inverse.
>
> Signed-off-by: Jan Kiszka
Thanks, applied.
-Christoffer
Hi Jan,
On Sun, Feb 08, 2015 at 08:48:09AM +0100, Jan Kiszka wrote:
> Hi,
>
> after fixing the VM_BUG_ON, my QEMU guest on the Jetson TK1 generally
> refuses to boot. Once in a while it does, but quickly gets stuck again.
> In one case I found this in the kernel log (never happened again so
> far
Hi Alex,
While trying to get VFIO-PCI working on AArch64 (with 64k page size), I
stumbled over the following piece of code:
> static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> {
> struct vfio_domain *domain;
> unsigned long bitmap = PAGE_MASK;
>
> mutex_
We want to support mixed modes and the easiest solution is to avoid
optimizing those weird and unlikely scenarios.
Signed-off-by: Radim Krčmář
---
v2
- optimize and name a check for valid map [Paolo]
- don't use cluster id with invalid map [Paolo]
- define KVM_APIC_MODE_* closer to kvm_apic_m
In mixed modes, we mustn't deliver xAPIC IPIs like x2APIC and vice versa.
Instead of preserving the information in apic_send_ipi(), we regain it
by converting all destinations into correct MDA in the slow path.
This allows easier reasoning about subsequent matching.
Our kvm_apic_broadcast() had an
recalculate_apic_map() uses two passes over all VCPUs. This is a relic
from the time when we selected a global mode in the first pass and set up
the optimized table in the second pass (to have a consistent mode).
Recent changes made mixed mode unoptimized and we can do it in one pass.
Format of logic
Broadcast allowed only one global APIC mode, but mixed modes are
theoretically possible. x2APIC IPI doesn't mean 0xff as broadcast,
the rest does.
x2APIC broadcasts are accepted by xAPIC. If we take SDM to be logical,
even addresses beginning with 0xff should be accepted, but real hardware
disagr
Each patch has a diff from v1, here is only a prologue on the mythical
mixed xAPIC and x2APIC mode:
There is one interesting alias in xAPIC and x2APIC ICR destination, the
0xff00, which is a broadcast in xAPIC and either a physical message
to high x2APIC ID or a message to an empty set in x2AP
https://bugzilla.kernel.org/show_bug.cgi?id=92291
--- Comment #10 from Mark ---
well thanks very much guys, I'll cautiously say that a patched guest kernel
seems to resolve it :-)
the bug seemed to appear even when the host is untainted; serial log says
[ 375.989736] divide error: [#1] SMP
On 02/12/2015 12:00 PM, Frederic Weisbecker wrote:
> On Thu, Feb 12, 2015 at 10:47:10AM -0500, Rik van Riel wrote:
>>
>> On 02/12/2015 10:42 AM, Frederic Weisbecker wrote:
>>> On Wed, Feb 11, 2015 at 02
Linus,
The following changes since commit 056bb5f51c357ee00046fde4929a03468ff45e7a:
arm64: kvm: decode ESR_ELx.EC when reporting exceptions (2015-01-15 12:24:52
+)
are available in the git repository at:
git://git.kernel.org/pub/scm/virt/kvm/kvm.git tags/for-linus
for you to fetch cha
On Tue, Feb 10, 2015 at 03:27:49PM -0500, r...@redhat.com wrote:
> When running a KVM guest on a system with NOHZ_FULL enabled, and the
> KVM guest running with idle=poll mode, we still get wakeups of the
> rcuos/N threads.
>
> This problem has already been solved for user space by telling the
> R
On Thu, Feb 12, 2015 at 10:47:10AM -0500, Rik van Riel wrote:
>
> On 02/12/2015 10:42 AM, Frederic Weisbecker wrote:
> > On Wed, Feb 11, 2015 at 02:43:19PM -0500, Rik van Riel wrote:
> >> If exception_enter happens when already in IN_KERNEL state,
On 02/11, Jeremy Fitzhardinge wrote:
>
> On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> > I agree, and I have to admit I am not sure I fully understand why
> > unlock uses the locked add. Except we need a barrier to avoid the race
> > with the enter_slowpath() users, of course. Perhaps this is the
On 02/12/2015 10:42 AM, Frederic Weisbecker wrote:
> On Wed, Feb 11, 2015 at 02:43:19PM -0500, Rik van Riel wrote:
>> If exception_enter happens when already in IN_KERNEL state, the
>> code still calls context_tracking_exit, which ends up in
>> rcu_e
On Wed, Feb 11, 2015 at 02:43:19PM -0500, Rik van Riel wrote:
> If exception_enter happens when already in IN_KERNEL state, the
> code still calls context_tracking_exit, which ends up in
> rcu_eqs_exit_common, which explodes with a WARN_ON when it is
> called in a situation where dynticks are not e
On 02/12/2015 08:30 PM, Peter Zijlstra wrote:
On Thu, Feb 12, 2015 at 05:17:27PM +0530, Raghavendra K T wrote:
[...]
Linus suggested that we should not do any writes to lock after unlock(),
and we can move slowpath clearing to fastpath lock.
So this patch implements the fix with:
1. Moving sl
On Thu, Feb 12, 2015 at 05:17:27PM +0530, Raghavendra K T wrote:
> Paravirt spinlock clears slowpath flag after doing unlock.
> As explained by Linus currently it does:
> prev = *lock;
> add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>
> /* add_smp()
On 02/12/2015 07:32 PM, Oleg Nesterov wrote:
Damn, sorry for noise, forgot to mention...
On 02/12, Raghavendra K T wrote:
+static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
+ __ticket_t head)
+{
+ if (head &
On 02/12/2015 07:20 PM, Oleg Nesterov wrote:
On 02/12, Raghavendra K T wrote:
@@ -191,8 +189,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t
*lock)
* We need to check "unlocked" in a loop, tmp.head == head
* can be false positive because of overf
On 02/12/2015 07:07 PM, Oleg Nesterov wrote:
On 02/12, Raghavendra K T wrote:
@@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock
*lock, __ticket_t want)
* check again make sure it didn't become free while
* we weren't looking.
*/
- if (AC
Damn, sorry for noise, forgot to mention...
On 02/12, Raghavendra K T wrote:
>
> +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
> + __ticket_t head)
> +{
> + if (head & TICKET_SLOWPATH_FLAG) {
> + arc
On 02/12, Raghavendra K T wrote:
>
> @@ -191,8 +189,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t
> *lock)
>* We need to check "unlocked" in a loop, tmp.head == head
>* can be false positive because of overflow.
>*/
> - if
On 02/12, Raghavendra K T wrote:
>
> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock
> *lock, __ticket_t want)
>* check again make sure it didn't become free while
>* we weren't looking.
>*/
> - if (ACCESS_ONCE(lock->tickets.head) == want) {
>
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_
v2: fix typo
---
Add option to exit in case latency exceeds a given maximum.
It's useful to test host latency without the complexity of a
guest.
Example command line:
echo 1 > /sys/kernel/debug/tracing/tracing_on; taskset -c 3 qemu-kvm
-enable-kvm -device testdev,chardev=testlog -chardev
file,
Add option to exit in case latency exceeds a given maximum.
It's useful to test host latency without the complexity of a
guest.
Example command line:
echo 1 > /sys/kernel/debug/tracing/tracing_on; taskset -c 3 qemu-kvm
-enable-kvm -device testdev,chardev=testlog -chardev
file,id=testlog,path=m
Hi,
I have submitted our Google Summer of Code application for QEMU,
libvirt, and KVM.
Accepted organizations will be announced on March 2nd at 19:00 UTC on
http://google-melange.com/.
You can still add project ideas if you wish to mentor:
http://qemu-project.org/Google_Summer_of_Code_2015
Mento