Does anyone have an idea?
The request itself is completely filled with cc
That is very weird: the 'rq' is obtained from hctx->tags, so rq should be
valid, and rq->q shouldn't have been changed even in the case of a
double free or double allocation.
I am currently asking myself if
On Wed, 17 Sep 2014 14:00:34 +0200
David Hildenbrand d...@linux.vnet.ibm.com wrote:
Does anyone have an idea?
The request itself is completely filled with cc
That is very weird: the 'rq' is obtained from hctx->tags, so rq should be
valid, and rq->q shouldn't have been changed
are
fully initialized if you mean all requests have been used once.
On Wed, Sep 17, 2014 at 10:11 PM, David Hildenbrand
I was playing with a simple patch that just sets cmd_flags and action_flags
to
What is action_flags?
atomic_flags, sorry :)
Otherwise e.g. REQ_ATOM_STARTED could
This patch should fix the bug reported in https://lkml.org/lkml/2014/9/11/249.
Test is still pending.
David Hildenbrand (1):
blk-mq: Avoid race condition with uninitialized requests
block/blk-mq.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--
1.8.5.5
--
To unsubscribe from
of a request.
Also move the reset of cmd_flags for the initializing code to the point where a
request is freed. So we will never end up with pending flush request indicators
that might trigger dereferences of invalid pointers in blk_mq_timeout_check().
Cc: sta...@vger.kernel.org
Signed-off-by: David
is to be accessed while pagefault_disabled() is set, the atomic
variants of copy_(to|from)_user can be used.
This patch reverts commit 662bbcb2747c2422cf98d3d97619509379eee466 taking care
of the !MMU optimization.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/kernel.h | 8
checks
5. .*_inatomic variants don't call might_fault()
6. If common code uses the __.* variants, it has to trigger access_ok() and
call might_fault()
7. For pagefault_disable(), the inatomic variants are to be used
Comments? Opinions?
David Hildenbrand (2):
powerpc/fsl-pci: atomic get_user
Whenever we have pagefaults disabled, we have to use the atomic variants of
(set|get)_user and copy_(from|to)_user.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
arch/powerpc/sysdev/fsl_pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/sysdev
.
On Tue, Nov 25, 2014 at 12:43:24PM +0100, David Hildenbrand wrote:
I recently discovered that commit 662bbcb2747c2422cf98d3d97619509379eee466
removed/skipped all might_sleep checks for might_fault() calls when in
atomic.
Yes. You can add e.g. might_sleep in your code if, for some reason
On Wed, Nov 26, 2014 at 05:17:29PM +0200, Michael S. Tsirkin wrote:
On Wed, Nov 26, 2014 at 11:05:04AM +0100, David Hildenbrand wrote:
What's the path you are trying to debug?
Well, we had a problem where we held a spin_lock and called
copy_(from|to)_user(). We experienced very
for virtio v1.0.
Reviewed-by: Thomas Huth th...@linux.vnet.ibm.com
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
Signed-off-by: Cornelia Huck cornelia.h...@de.ibm.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
include/uapi/linux/virtio_blk.h | 15
This is what happened on our side (very recent kernel):
spin_lock(lock)
copy_to_user(...)
spin_unlock(lock)
That's a deadlock even without copy_to_user - it's
enough for the thread to be preempted and another one
to try taking the lock.
1. s390 locks/unlocks a spin lock with
Code like

	spin_lock(lock);
	if (copy_to_user(...))
		rc = ...;
	spin_unlock(lock);
really *should* generate warnings like it did before.
And *only* code like
spin_lock(lock);
Is only code like this valid or also with the spin_lock() dropped?
(e.g. the
This is subjective, but how about
static bool xxx(void)
{
	mutex_lock(&cpu_hotplug.lock);
	if (atomic_read(&cpu_hotplug.refcount) == 0)
		return true;
	mutex_unlock(&cpu_hotplug.lock);
	return false;
}
reproduce it with this fix.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
kernel/cpu.c | 56 +++-
1 file changed, 23 insertions(+), 33 deletions(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 5d22023..1972b16 100644
--- a/kernel
On Mon, Dec 15, 2014 at 12:21:27PM +0100, David Hildenbrand wrote:
On Wed, Dec 10, 2014 at 03:23:29PM +0100, David Hildenbrand wrote:
Did you look at the -rt patches where this comes from?
https://git.kernel.org/cgit/linux/kernel/git/clrkwllms/rt-linux.git/commit/?h=v3.14.21-rt9&id=b389ced19ab649438196d132768fe6522d2f052b
On Wed, Dec 10, 2014 at 10:23 PM, David Hildenbrand
d...@linux.vnet.ibm.com wrote:
This patch adds the pagefault_count to the thread_info of all
architectures. It will be used to count the pagefault_disable() levels
on a per-thread basis.
We are not reusing the preempt_count
On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote:
Code like

	spin_lock(lock);
	if (copy_to_user(...))
		rc = ...;
	spin_unlock(lock);
really *should* generate warnings like it did before.
And *only* code like
spin_lock(lock);
Is only
OTOH, there is no reason why we need to disable preemption over that
page_fault_disabled() region. There are code paths which really do
not require disabling preemption for that.
We have that separated in preempt-rt for obvious reasons and IIRC
Peter Zijlstra tried to disentangle it in
From: David Hildenbrand
...
Although it might not be optimal, keeping a separate counter for
pagefault_disable() as part of the preemption counter seems to be the only
doable thing right now. I am not sure if a completely separated counter is
even
possible, increasing the size
]
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
Signed-off-by: Thomas Huth th...@linux.vnet.ibm.com
Signed-off-by: Cornelia Huck cornelia.h...@de.ibm.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
Still looks good to me :)
From: Rusty Russell ru...@rustcorp.com.au
It seemed like a good idea, but it's actually a pain when we get more
than 32 feature bits. Just change it to a u32 for now.
Cc: Brian Swetland swetl...@google.com
Cc: Christian Borntraeger borntrae...@de.ibm.com
Signed-off-by: Rusty Russell
That's the whole reason for the patch.
I guess you disagree with it, but it's much easier
to deal with simple integers.
Well, I can live with it :)
clear_bit() and friends are just easier to understand when scanning the code
(at least for me).
From: David Hildenbrand [mailto:d...@linux.vnet.ibm.com]
From: David Hildenbrand
...
Although it might not be optimal, keeping a separate counter for
pagefault_disable() as part of the preemption counter seems to be the
only
doable thing right now. I am not sure
optimization and the new pagefault_disabled() check.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/kernel.h | 9 +++--
mm/memory.c| 15 ---
2 files changed, 11 insertions(+), 13 deletions(-)
diff --git a/include/linux/kernel.h b/include/linux
- Change documentation of user access methods to reflect the real behavior
- Don't touch the preempt counter, only the pagefault disable counter (future
work)
David Hildenbrand (2):
preempt: track pagefault_disable() calls in the preempt counter
mm, sched: trigger might_sleep() in might_fault
atomic
context or in pagefault_disable() context.
Cleanup the PREEMPT_ACTIVE defines and fix the preempt count documentation on
the way.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/preempt_mask.h | 24 +++-
include/linux/uaccess.h | 21
-
- __might_sleep(__FILE__, __LINE__, 0);
+ if (unlikely(!pagefault_disabled()))
+ __might_sleep(__FILE__, __LINE__, 0);
Same here: so maybe make might_fault a wrapper
around __might_fault as well.
Yes, I also noticed that. It was part of the original code.
For now
On Thu, 27 Nov 2014, David Hildenbrand wrote:
OTOH, there is no reason why we need to disable preemption over that
page_fault_disabled() region. There are code paths which really do
not require disabling preemption for that.
We have that separated in preempt-rt for obvious
-off-by: Cornelia Huck cornelia.h...@de.ibm.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
/virtio_mmio.c | 3 +++
drivers/virtio/virtio_pci.c| 3 +++
7 files changed, 21 insertions(+)
Looks sane to me.
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
Add low level APIs to test/set/clear feature bits.
For use by transports, to make it easier to
write code independent of feature bit array format.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
include/linux/virtio_config.h | 53
---
1
Negotiate full 64 bit features.
Change u32 to u64, make sure to use 1ULL everywhere.
Note: devices guarantee that VERSION_1 is clear unless
revision 1 is negotiated.
Note: We don't need to re-setup the ccw, but we do it
for clarity.
Based on patches by Rusty, Thomas Huth and Cornelia.
---
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
Now that we use u64 for bits, we can simply OR them together.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
drivers/virtio/virtio.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
Based on patch by Cornelia Huck.
Note: for consistency, and to avoid sparse errors,
convert all fields, even those no longer in use
for virtio v1.0.
Signed-off-by: Cornelia Huck cornelia.h...@de.ibm.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
...
-static unsigned
-by: Cornelia Huck cornelia.h...@de.ibm.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
drivers/s390/kvm/virtio_ccw.c | 54
+--
1 file changed, 42 insertions(+), 12 deletions(-)
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
On Mon, Dec 01, 2014 at 09:16:41AM +0100, David Hildenbrand wrote:
Based on patch by Cornelia Huck.
Note: for consistency, and to avoid sparse errors,
convert all fields, even those no longer in use
for virtio v1.0.
Signed-off-by: Cornelia Huck cornelia.h
This will make it easy for transports to validate features and return
failure.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
include/linux/virtio_config.h | 3 ++-
drivers/lguest/lguest_device.c | 4 +++-
drivers/misc/mic/card/mic_virtio.c | 4 +++-
. We can't move the code
directly into kernel.h for now, as that results in ugly header recursions we
can't avoid for now.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/kernel.h | 3 ++-
mm/memory.c| 19 +++
2 files changed, 9 insertions
In general, non-atomic variants of user access functions may not sleep if
pagefaults are disabled.
Let's update all relevant comments in uaccess code. This also reflects the
might_sleep() checks in might_fault().
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
arch/avr32/include
!
David
David Hildenbrand (5):
uaccess: add pagefault_count to thread_info
uaccess: count pagefault_disable() levels in pagefault_count
mm, uaccess: trigger might_sleep() in might_fault() when pagefaults
are disabled
uaccess: clarify that uaccess may only sleep if pagefaults
() environment by
calling pagefault_disabled().
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/uaccess.h | 45 ++---
1 file changed, 38 insertions(+), 7 deletions(-)
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index
.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
arch/alpha/include/asm/thread_info.h | 1 +
arch/arc/include/asm/thread_info.h| 1 +
arch/arm/include/asm/thread_info.h| 1 +
arch/arm64/include/asm/thread_info.h | 1 +
arch/avr32/include/asm/thread_info.h
This patch introduces CONFIG_DEBUG_PAGEFAULT_COUNT, to detect over-/underflows
in the pagefault_count resulting from a wrong usage of pagefault_enable() and
pagefault_disable().
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/uaccess.h | 8 +++-
lib/Kconfig.debug
On Fri, Dec 05, 2014 at 12:18:07PM +0100, David Hildenbrand wrote:
-void might_fault(void)
+void __might_fault(const char *file, int line)
{
/*
* Some code (nfs/sunrpc) uses socket ops on kernel memory while
@@ -3710,21 +3710,16 @@ void might_fault(void
From: David Hildenbrand [...
This should be likely() instead of unlikely(), no?
I'd rather write this
if (pagefault_disabled())
return;
__might_sleep(file, line, 0);
and leave the likely stuff completely away.
Makes perfect sense!
From my experience
Am 05.12.2014 um 12:18 schrieb David Hildenbrand:
This patch adds the pagefault_count to the thread_info of all
architectures. It will be used to count the pagefault_disable() levels
on a per-thread basis.
We are not reusing the preempt_count as this is per cpu on x86 and we want
, therefore never exiting the loop in
cpu_hotplug_begin().
This quick fix wakes up the active_writer proactively. The writer already
goes back to sleep if the ref count isn't already down to 0, so this should be
fine.
Can't reproduce the error with this fix.
Signed-off-by: David Hildenbrand d
The title should of course say active_writer ... grml
David
On Mon, Dec 08, 2014 at 07:13:03PM +0100, David Hildenbrand wrote:
Commit b2c4623dcd07 ("rcu: More on deadlock between CPU hotplug and
expedited grace periods") introduced another problem that can easily be
reproduced by
starting/stopping cpus in a loop.
E.g.:
for i in `seq 5000
active_writer is cleared while holding cpuhp_lock, so this should be safe,
right?
You lost me on that one. Don't we get to that piece of code precisely
because we don't hold any of the CPU-hotplug locks? If so, the
writer might well hold all the locks it needs, and might well change
won't lose any wakeups when racing with put_online_cpus().
Can't reproduce it with this fix.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
kernel/cpu.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 90a3d01
The compiler is within its rights to optimize the active_writer local
variable out of existence, thus re-introducing the possible race with
the writer that can pass a NULL pointer to wake_up_process(). So you
really need the ACCESS_ONCE() on the read from cpu_hotplug.active_writer.
Please
Therefore we have to move the condition check inside the
__set_current_state(TASK_UNINTERRUPTIBLE) -> schedule();
section to not miss any wake ups when the condition is satisfied.
So wake_up_process() will either see TASK_RUNNING and do nothing or see
TASK_UNINTERRUPTIBLE and set it
On Tue, Dec 09, 2014 at 11:11:01AM +0100, David Hildenbrand wrote:
Therefore we have to move the condition check inside the
__set_current_state(TASK_UNINTERRUPTIBLE) -> schedule();
section to not miss any wake ups when the condition is satisfied.
So wake_up_process
check. (otherwise a wakeup might get lost)
Can't reproduce it with this fix.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
kernel/cpu.c | 18 --
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 90a3d01..7489b7a 100644
On Tue, Dec 09, 2014 at 01:23:31PM +0100, David Hildenbrand wrote:
Commit b2c4623dcd07 ("rcu: More on deadlock between CPU hotplug and
expedited grace periods") introduced another problem that can easily be
reproduced by
starting/stopping cpus in a loop.
E.g.:
for i in `seq 5000
(sorry if this was already discussed, I ignored most of my emails
I got this week)
On 12/09, David Hildenbrand wrote:
@@ -116,7 +118,13 @@ void put_online_cpus(void)
if (cpu_hotplug.active_writer == current)
return;
if (!mutex_trylock(&cpu_hotplug.lock
rearrange the lockdep annotations so we won't get false positives.
Can't reproduce it with this fix.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
kernel/cpu.c | 60 ++--
1 file changed, 26 insertions(+), 34 deletions(-)
diff --git
in put_online_cpus()
anymore.
Also rearrange the lockdep annotations so we won't get false positives.
Can't reproduce it with this fix.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
kernel/cpu.c | 60
++--
1 file changed, 26
compiled on powerpc, arm, sparc, sparc64, arm64, x86_64, i386, mips,
alpha, ia64, xtensa, m68k, microblaze.
Tested on s390.
David Hildenbrand (5):
uaccess: add pagefault_count to thread_info
uaccess: count pagefault_disable() levels in pagefault_count
mm, uaccess: trigger might_sleep
This patch introduces CONFIG_DEBUG_PAGEFAULT_COUNT, to detect over-/underflows
in the pagefault_count resulting from a wrong usage of pagefault_enable() and
pagefault_disable().
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/uaccess.h | 8 +++-
lib/Kconfig.debug
In general, non-atomic variants of user access functions may not sleep if
pagefaults are disabled.
Let's update all relevant comments in uaccess code. This also reflects the
might_sleep() checks in might_fault().
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
arch/avr32/include
() environment by
calling pagefault_disabled().
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/uaccess.h | 45 ++---
1 file changed, 38 insertions(+), 7 deletions(-)
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index
. We can't move the code
directly into kernel.h for now, as that results in ugly header recursions we
can't avoid for now.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/linux/kernel.h | 3 ++-
mm/memory.c| 18 ++
2 files changed, 8 insertions(+), 13
.
The new counter is added directly below the preempt_count, except for archs
relying on a manual calculation of asm offsets - to minimize the changes.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
arch/alpha/include/asm/thread_info.h | 1 +
arch/arc/include/asm/thread_info.h
On 12/10, David Hildenbrand wrote:
@@ -127,20 +119,16 @@ void put_online_cpus(void)
{
if (cpu_hotplug.active_writer == current)
return;
- if (!mutex_trylock(&cpu_hotplug.lock)) {
- atomic_inc(&cpu_hotplug.puts_pending);
- cpuhp_lock_release
only thing that is bugging me is this part. Without the lock we can't
guarantee that another get_online_cpus() just arrived and bumped the
refcount
to 0.
Of course this only applies to misuse of put/get_online_cpus.
We could hack some loop that tries to cmp_xchng with the old
cond_resched() before poking RCU
David Hildenbrand (1):
hotplugcpu: Avoid deadlocks by waking active_writer
Hi Ingo, Paul,
Heiko/Christian seem to have hit the bug (hotplugcpu: Avoid deadlocks by waking
active_writer addresses) in 3.18-rc3.
And as commit b2c4623dcd07 was in linux starting
On Thu, Feb 19, 2015 at 03:48:05PM +0100, David Hildenbrand wrote:
Downside is that now that I have to touch all fault handlers, I have to go
through all archs again.
You should be able to borrow from the -rt patches there. They have all
that.
Jup, that's what I partially did.
Thanks
On Mon, Jan 12, 2015 at 03:19:11PM +0100, David Hildenbrand wrote:
Thomas, Peter,
anything that speaks against putting the pagefault_disable counter into
thread_info (my series) instead of task_struct (rt tree)?
IOW, what would be the right place for it?
I think we put
on powerpc, arm, sparc, sparc64, arm64, x86_64, i386, mips,
alpha, ia64, xtensa, m68k, microblaze.
Tested on s390.
David Hildenbrand (5):
uaccess: add pagefault_count to thread_info
uaccess: count pagefault_disable() levels in pagefault_count
mm, uaccess: trigger might_sleep
OK, so if I understand correctly, I need to add the Cc:
stable tags after the last
git-format-patch/git-send-email, but (of course) before
the git-request-pull. Is that the trick?
So I'd try to avoid the unnecessary rebase: teach
git-send-email to not Cc: to -stable, even though
all ips, otherwise the default
(PERF_RECORD_MISC_USER) will be used in error.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
tools/perf/util/machine.c | 28 ++--
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/tools/perf/util/machine.c b/tools
On Mon, Mar 30, 2015 at 10:11:00AM +0200, David Hildenbrand wrote:
Commit 2e77784bb7d8 ("perf callchain: Move cpumode resolve code to
add_callchain_ip") promised "No change in behavior".
As this commit breaks callchains on s390x (symbols not getting resolved,
I think it's a generic
all ips, otherwise the default
(PERF_RECORD_MISC_USER) will be used in error.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
tools/perf/util/machine.c | 25 -
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/tools/perf/util/machine.c b/tools
Bring into line with the commentary for the other structures and their
KVM_EXIT_* cases.
s/commentary/comments/ in the subject and description. Unless you want to add a
lengthy discussion :)
Signed-off-by: Alex Bennée alex.ben...@linaro.org
---
v2
- add comments for other exit
Looks good to me!
Is that a Reviewed-by?
Now it is :)
Reviewed-by: David Hildenbrand d...@linux.vnet.ibm.com
David
This commit adds a stub function to support the KVM_SET_GUEST_DEBUG
ioctl. Currently any operation flag will return EINVAL. Actual
Well it won't return -EINVAL if you push in KVM_GUESTDBG_ENABLE or 0.
Any unsupported flag will return -EINVAL. For now, only KVM_GUESTDBG_ENABLE is
supported,
This is a precursor for later patches which will need to do more to
setup debug state before entering the hyp.S switch code. The existing
functionality for setting mdcr_el2 has been moved out of hyp.S and now
uses the value kept in vcpu->arch.mdcr_el2.
This also moves the conditional setting
This commit defines the API headers for guest debugging. There are two
architecture specific debug structures:
- kvm_guest_debug_arch, allows us to pass in HW debug registers
- kvm_debug_exit_arch, signals the exact debug exit and pc
The type of debugging being used is controlled by the
On Thu, Feb 19, 2015 at 03:48:05PM +0100, David Hildenbrand wrote:
Downside is that now that I have to touch all fault handlers, I have to go
through all archs again.
You should be able to borrow from the -rt patches there. They have all
that.
Hi Peter,
I hadn't much time to work
On Fri, Mar 27, 2015 at 04:40:50PM +0100, David Hildenbrand wrote:
e.g. futex_atomic_op_inuser(): easy to fix, add preempt_enable/disable
respectively.
e.g. futex_atomic_cmpxchg_inatomic(): not so easy / nice to fix.
The inatomic variants rely on the caller to make sure
Commit 2e77784bb7d8 ("perf callchain: Move cpumode resolve code to
add_callchain_ip") promised "No change in behavior".
As this commit breaks callchains on s390x (symbols not getting resolved,
observed when profiling the kernel), this statement is wrong. The
cpumode must be kept when
This adds support for SW breakpoints inserted by userspace.
We do this by trapping all BKPT exceptions in the
hypervisor (MDCR_EL2_TDE). The kvm_debug_exit_arch carries the address
of the exception. If user-space doesn't know of the breakpoint then we
have a guest inserted breakpoint and
On Tue, Mar 31, 2015 at 04:08:02PM +0100, Alex Bennée wrote:
This commit adds a stub function to support the KVM_SET_GUEST_DEBUG
ioctl. Currently any operation flag will return EINVAL. Actual
functionality will be added with further patches.
Signed-off-by: Alex Bennée
This series therefore does 2 things:
1. Decouple pagefault_disable() from preempt_enable()
...
2. Reenable might_sleep() checks for might_fault()
All seems sensible to me. pagefault_disabled has to go into the
task_struct (rather than being per-cpu) because
* David Hildenbrand d...@linux.vnet.ibm.com wrote:
On Thu, May 07, 2015 at 12:50:53PM +0200, David Hildenbrand wrote:
Just to make sure we have a common understanding (as written in my cover
letter):
Your suggestion won't work with !CONFIG_PREEMPT
On Thu, May 07, 2015 at 12:50:53PM +0200, David Hildenbrand wrote:
Just to make sure we have a common understanding (as written in my cover
letter):
Your suggestion won't work with !CONFIG_PREEMPT (!CONFIG_PREEMPT_COUNT). If
there is no preempt counter, in_atomic() won't work
On Wed, May 06, 2015 at 07:50:25PM +0200, David Hildenbrand wrote:
+/*
+ * Is the pagefault handler disabled? If so, user access methods will not
sleep.
+ */
+#define pagefault_disabled() (current->pagefault_disabled != 0)
So -RT has:
static inline bool pagefault_disabled(void
On Thu, May 07, 2015 at 12:50:53PM +0200, David Hildenbrand wrote:
Just to make sure we have a common understanding (as written in my cover
letter):
Your suggestion won't work with !CONFIG_PREEMPT (!CONFIG_PREEMPT_COUNT). If
there is no preempt counter, in_atomic() won't work
from Thomas Gleixner.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
arch/alpha/mm/fault.c | 5 ++---
arch/arc/mm/fault.c| 2 +-
arch/arm/mm/fault.c| 2 +-
arch/arm64/mm/fault.c | 2 +-
arch/avr32/mm/fault.c | 4 ++--
arch/cris/mm/fault.c | 6
(...);
spin_unlock(lock);
Cross compiled on powerpc, arm, sparc, sparc64, arm64, x86_64, i386,
mips, alpha, ia64, xtensa, m68k, microblaze.
Tested on s390x.
Any feedback very welcome!
Thanks!
David Hildenbrand (15):
uaccess: count pagefault_disable() levels in pagefault_disabled
mm, uaccess: trigger
by disabling preemption
Let's make this explicit, to prepare for pagefault_disable() not
touching preemption anymore.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
arch/arm/include/asm/futex.h | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/arm
exclusion when relying on a get_user()/
put_user() implementation.
Signed-off-by: David Hildenbrand d...@linux.vnet.ibm.com
---
include/asm-generic/futex.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/asm-generic/futex.h b/include/asm-generic/futex.h
index 3586017..e56272c 100644
Let's explicitly disable/enable preemption in the !CONFIG_SMP version
of futex_atomic_op_inuser, to prepare for pagefault_disable() not
touching preemption anymore.
Otherwise we might break mutual exclusion when relying on a get_user()/
put_user() implementation.
Signed-off-by: David Hildenbrand