the counter a second pair of
"function" and "func_repeats" events will be generated. The time
interval gets coded by using up to 48 (32 + 16) bits.
Yordan Karadzhov (VMware) (6):
tracing: Define static void trace_print_time()
tracing: Define new ftrace event "func_repeats"
of repeats.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace_functions.c | 162 -
1 file changed, 159 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index f37f73a9b1b8..1f0e63f5d1f9 100644
The field is used to keep track of the consecutive (on the same CPU) calls
of a single function. This information is needed in order to consolidate
the function tracing record in the cases when a single function is called
a number of times.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel
This patch only provides the implementation of the method.
Later we will use it in combination with a new option for
function tracing.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.c | 34 ++
kernel/trace/trace.h | 4
2 files changed
d here, will be used by this new event to print the time of the
last repeat of a function that is consecutively called a number of times.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace_output.c | 26 +-
1 file changed, 17 insertions(+), 9 deletions(-)
diff
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace_functions.c | 65 --
1 file changed, 38 insertions(+), 27 deletions(-)
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index f93723ca66bc..f37f73a9b1b8 100644
--- a/kernel/tra
repeated function events with
a single event and save space on the ring buffer
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.h | 3 +++
kernel/trace/trace_entries.h | 22 +
kernel/trace/trace_output.c | 48
3 files changed
This patch only provides the implementation of the method.
Later we will use it in combination with a new option for
function tracing.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.c | 26 ++
kernel/trace/trace.h | 4
kernel
repeated function events with
a single event and save space on the ring buffer
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.h | 3 +++
kernel/trace/trace_entries.h | 22 +
kernel/trace/trace_output.c | 47
3 files changed
of repeats.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace_functions.c | 161 -
1 file changed, 158 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index f37f73a9b1b8..9a3cbdbfd1f7 100644
In the case of an overflow of the counter a second pair of
"function" and "func_repeats" events will be generated. The time
interval gets coded by using up to 48 (32 + 16) bits.
Yordan Karadzhov (VMware) (5):
tracing: Define new ftrace event "func_repeats"
Hi Steven,
Hi Steven,
On 6.04.21 at 1:15, Steven Rostedt wrote:
@@ -235,30 +248,31 @@ static struct tracer function_trace;
static int
func_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
{
- switch (bit) {
- case TRACE_FUNC_OPT_STACK:
- /*
of repeats.
Signed-off-by: Yordan Karadzhov (VMware)
fix last
---
kernel/trace/trace_functions.c | 161 -
1 file changed, 158 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index 6c912eb0508a..72d2e07dc103
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace_functions.c | 66 --
1 file changed, 40 insertions(+), 26 deletions(-)
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index f93723ca66bc..6c912eb0508a 100644
--- a/kernel/tra
repeated function events with
a single event and save space on the ring buffer
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.h | 3 +++
kernel/trace/trace_entries.h | 39 ++
kernel/trace/trace_output.c | 47
3
This patch only provides the implementation of the method.
Later we will use it in combination with a new option for
function tracing.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.c | 26 ++
kernel/trace/trace.h | 4
2 files changed, 30
of the
"func_repeats" event. 16 bits are used to record the repetition
count. In the case of an overflow of the counter a second pair of
"function" and "func_repeats" events will be generated. The time
interval gets coded by using up to 48 (32 + 16) bits.
The "cpu" parameter is not being used by the function.
Signed-off-by: Yordan Karadzhov (VMware)
---
include/linux/ring_buffer.h | 2 +-
kernel/trace/ring_buffer.c | 2 +-
kernel/trace/trace.c | 8
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/inc
On 4.03.21 at 18:38, Steven Rostedt wrote:
On Thu, 4 Mar 2021 11:01:37 +0200
"Yordan Karadzhov (VMware)" wrote:
Thanks Yordan for doing this!
I have some comments below.
Hi Steven,
Thank you very much for looking into this!
Your suggestion makes perfect sense. I onl
A declaration of function "int trace_empty(struct trace_iterator *iter)"
shows up twice in the header file kernel/trace/trace.h
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/trace/trace.h b/kernel/trace/tra
This patch only provides the implementation of the method.
Later we will use it in combination with a new option for
function tracing.
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.c | 21 +
kernel/trace/trace.h | 4
2 files changed, 25 insertions
of repeats.
Signed-off-by: Yordan Karadzhov (VMware)
fix last
---
kernel/trace/trace_functions.c | 157 -
1 file changed, 154 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index 6c912eb0508a..fbf60ff93ffb
repeated function events with
a single event and save space on the ring buffer
Signed-off-by: Yordan Karadzhov (VMware)
---
kernel/trace/trace.h | 3 +++
kernel/trace/trace_entries.h | 16 +
kernel/trace/trace_output.c | 44
3 files changed, 63
we will record only the first call, followed by an event showing the
number of repeats.
Yordan Karadzhov (VMware) (5):
tracing: Define new ftrace event "func_repeats"
tracing: Add "last_func_repeats" to struct trace_array
tracing: Add method for recording "func_
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 2f0df49c89acaa58571d509830bc481250699885
Gitweb:
https://git.kernel.org/tip/2f0df49c89acaa58571d509830bc481250699885
Author: Steven Rostedt (VMware)
AuthorDate: Fri, 11 Dec 2020 16:37:54
From: "Steven Rostedt (VMware)"
Currently, the only way to get access to the registers of a function via a
ftrace callback is to set the "FL_SAVE_REGS" bit in the ftrace_ops. But as this
saves all regs as if a breakpoint were to trigger (for use with kprobes), it
is
From: "Steven Rostedt (VMware)"
In preparation to have arguments of a function passed to callbacks attached
to functions as default, change the default callback prototype to receive a
struct ftrace_regs as the fourth parameter instead of a pt_regs.
For callbacks that set the FL_SAVE
From: "Steven Rostedt (VMware)"
When CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS is available, the ftrace call
will be able to set the ip of the calling function. This will improve the
performance of live kernel patching where it does not need all the regs to
be stored just to change the i
ftrace_instruction_pointer_set() in the live patching code.
Steven Rostedt (VMware) (3):
ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs
ftrace/x86: Allow for arguments to be passed in to ftrace_regs by default
livepatch: Use the default ftrace_ops instead of REGS
From: "Steven Rostedt (VMware)"
If a ftrace callback does not supply its own recursion protection and
does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
make a helper trampoline to do so before calling the callback instead of
just calling the callback directly.
T
From: "Steven Rostedt (VMware)"
If for some reason a function is called that triggers the recursion
detection of live patching, trigger a warning. By not executing the live
patch code, it is possible that the old unpatched function will be called
placing the system into an unknown st
From: "Steven Rostedt (VMware)"
The preempt_count() is not a simple location in memory, it could be part of
per_cpu code or more. Each access to preempt_count(), or one of its accessor
functions (like in_interrupt()) takes several cycles. By reading
preempt_count() once, and then d
for various archs
- Use trace_recursion flags in current for protecting recursion of recursion
recording
- Make the recursion logic a little cleaner
- Export GPL the recursion recording
Steven Rostedt (VMware) (11):
ftrace: Move the recursion testing into global headers
ftrace: Add
From: "Steven Rostedt (VMware)"
Currently, if a callback is registered to a ftrace function and its
ftrace_ops does not have the RECURSION flag set, it is encapsulated in a
helper function that does the recursion for it.
Really, all the callbacks should have their own recursion
From: "Steven Rostedt (VMware)"
If a ftrace callback requires "rcu_is_watching", then it adds the
FTRACE_OPS_FL_RCU flag and it will not be called if RCU is not "watching".
But this means that it will use a trampoline when called, and this slows
down the funct
From: "Steven Rostedt (VMware)"
To make it easier for ftrace callbacks to have recursion protection, provide
a ftrace_test_recursion_trylock() and ftrace_test_recursion_unlock() helper
that tests for recursion.
Link: https://lkml.kernel.org/r/20201028115612.634927...@goodmis.org
From: "Steven Rostedt (VMware)"
Now that all callbacks are recursion safe, reverse the meaning of the
RECURSION flag and rename it from RECURSION_SAFE to simply RECURSION.
Now only callbacks that request to have recursion protecting it will
have the added trampoline to do so.
A
From: "Steven Rostedt (VMware)"
This adds CONFIG_FTRACE_RECORD_RECURSION that will record to a file
"recursed_functions" all the functions that caused recursion while a
callback to the function tracer was running.
Cc: Jonathan Corbet
Cc: Guo Ren
Cc: "James E.J. Bot
The following commit has been merged into the core/static_call branch of tip:
Commit-ID: 547305a64632813286700cb6d768bfe773df7d19
Gitweb:
https://git.kernel.org/tip/547305a64632813286700cb6d768bfe773df7d19
Author: Steven Rostedt (VMware)
AuthorDate: Thu, 01 Oct 2020 21:27
Add a new libtraceevent man page with documentation of these debug APIs:
tep_print_printk
tep_print_funcs
tep_set_test_filters
tep_plugin_print_options
Signed-off-by: Tzvetomir Stoyanov (VMware)
---
v2 changes:
- Removed an extra interval from the example's
Add documentation of tep_add_plugin_path() API in the libtraceevent
plugin man page.
Signed-off-by: Tzvetomir Stoyanov (VMware)
---
v2 changes:
- Fixed grammar mistakes, found by Steven Rostedt.
.../Documentation/libtraceevent-plugins.txt | 25 +--
1 file changed, 23
Reported-by: Ben Hutchings
Signed-off-by: Tzvetomir Stoyanov (VMware)
---
v1 of the patch is here:
https://lore.kernel.org/r/20200924070609.100771-2-tz.stoya...@gmail.com
v2 changes (addressed Steven's comments):
- Removed leading underscores from the names of newly hidden internal
functions.
v3 changes
Add a new libtraceevent man page with documentation of these debug APIs:
tep_print_printk
tep_print_funcs
tep_set_test_filters
tep_plugin_print_options
Signed-off-by: Tzvetomir Stoyanov (VMware)
---
.../Documentation/libtraceevent-debug.txt | 95
Add documentation of tep_add_plugin_path() API in the libtraceevent plugin man
page.
Signed-off-by: Tzvetomir Stoyanov (VMware)
---
.../Documentation/libtraceevent-plugins.txt | 22 +--
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/tools/lib/traceevent
Signed-off-by: Tzvetomir Stoyanov (VMware)
---
v1 of the patch is here:
https://lore.kernel.org/r/20200924070609.100771-2-tz.stoya...@gmail.com
v2 changes (addressed Steven's comments):
- Removed leading underscores from the names of newly hidden internal
functions.
v3 changes (addressed Steven
ugin_paths
tep_peek_char
tep_buffer_init
tep_get_input_buf_ptr
tep_get_input_buf
tep_read_token
tep_free_token
tep_free_event
tep_free_format_field
Reported-by: Ben Hutchings
Signed-off-by: Tzvetomir Stoyanov (VMware)
---
v1 of the patch is here:
https://lore.kernel.org/r/2020092407060
The following commit has been merged into the core/static_call branch of tip:
Commit-ID: d25e37d89dd2f41d7acae0429039d2f0ae8b4a07
Gitweb:
https://git.kernel.org/tip/d25e37d89dd2f41d7acae0429039d2f0ae8b4a07
Author: Steven Rostedt (VMware)
AuthorDate: Tue, 18 Aug 2020 15:57
The following commit has been merged into the sched/core branch of tip:
Commit-ID: a87e749e8fa1aaef9b4db32e21c2795e69ce67bf
Gitweb:
https://git.kernel.org/tip/a87e749e8fa1aaef9b4db32e21c2795e69ce67bf
Author: Steven Rostedt (VMware)
AuthorDate: Thu, 19 Dec 2019 16:44:54 -05
The following commit has been merged into the sched/core branch of tip:
Commit-ID: c3a340f7e7eadac7662ab104ceb16432e5a4c6b2
Gitweb:
https://git.kernel.org/tip/c3a340f7e7eadac7662ab104ceb16432e5a4c6b2
Author: Steven Rostedt (VMware)
AuthorDate: Thu, 19 Dec 2019 16:44:53 -05
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 590d69796346353878b275c5512c664e3f875f24
Gitweb:
https://git.kernel.org/tip/590d69796346353878b275c5512c664e3f875f24
Author: Steven Rostedt (VMware)
AuthorDate: Thu, 19 Dec 2019 16:44:52 -05
From: Thomas Hellstrom
A huge pud page can theoretically be faulted in racing with pmd_alloc()
in __handle_mm_fault(). That will lead to pmd_alloc() returning an
invalid pmd pointer. Fix this by adding a pud_trans_unstable() function
similar to pmd_trans_unstable() and check whether the pud is
The following commit has been merged into the perf/core branch of tip:
Commit-ID: 9bdff5b6436655d42dd30253c521e86ce07b9961
Gitweb:
https://git.kernel.org/tip/9bdff5b6436655d42dd30253c521e86ce07b9961
Author: Steven Rostedt (VMware)
AuthorDate: Thu, 17 Oct 2019 17:05:23 -04
The following commit has been merged into the perf/core branch of tip:
Commit-ID: a5e05abc6b8d81148b35cd8632a4a6252383d968
Gitweb:
https://git.kernel.org/tip/a5e05abc6b8d81148b35cd8632a4a6252383d968
Author: Steven Rostedt (VMware)
AuthorDate: Thu, 17 Oct 2019 17:05:22 -04
From: Thomas Hellstrom
LLVM's assembler doesn't accept the short form INL instruction:
inl (%%dx)
but instead insists that the output register be explicitly specified.
This was previously fixed for the VMWARE_PORT macro. Fix it also for
the VMWARE_HYPERCALL macro.
Cc:
From: Thomas Hellstrom
The platform detection VMWARE_PORT macro uses the VMWARE_HYPERVISOR_PORT
definition, but expects it to be an integer. However, when it was moved
to the new vmware.h include file, it was changed to be a string to better
fit into the VMWARE_HYPERCALL set of macros. This
From: Thomas Hellstrom
Two fixes for recently introduced regressions:
Patch 1 is more or less identical to a previous patch fixing the VMW_PORT
macro on LLVM's assembler. However, that patch left out the VMW_HYPERCALL
macro (probably not configured for use), so let's fix that also.
Patch 2
On 10/8/19 2:34 PM, Thomas Hellström (VMware) wrote:
Hi, Christoph,
Following our previous discussion I wonder if something along the
lines of the following could work / be acceptable
typedef unsigned long dma_pfn_t /* Opaque pfn type. Arch dependent.
This could if needed be a struct
From: Thomas Hellstrom
LLVM's assembler doesn't accept the short form INL instruction:
inl (%%dx)
but instead insists that the output register be explicitly specified.
This was previously fixed for the VMWARE_PORT macro. Fix it also for
the VMWARE_HYPERCALL macro.
Fixes: b4dd4f6e3648
On 10/10/19 4:17 PM, Peter Zijlstra wrote:
On Thu, Oct 10, 2019 at 03:24:47PM +0200, Thomas Hellström (VMware) wrote:
On 10/10/19 3:05 PM, Peter Zijlstra wrote:
On Thu, Oct 10, 2019 at 02:43:10PM +0200, Thomas Hellström (VMware) wrote:
+/**
+ * wp_shared_mapping_range - Write-protect all ptes
Hi, Dan,
On 10/16/19 3:44 AM, Dan Williams wrote:
On Tue, Oct 15, 2019 at 3:06 AM Kirill A. Shutemov wrote:
On Tue, Oct 08, 2019 at 11:37:11AM +0200, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom
A huge pud page can theoretically be faulted in racing with pmd_alloc
From: Thomas Hellstrom
For users that want to traverse all page table entries pointing into a
region of a struct address_space mapping, introduce a walk_page_mapping()
function.
The walk_page_mapping() function will initially be used for dirty-
tracking in virtual graphics drivers.
Cc:
From: Thomas Hellstrom
With emulated coherent memory we need to be able to quickly look up
a resource from the MOB offset. Instead of traversing a linked list with
O(n) worst case, use an RBtree with O(log n) worst case complexity.
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc:
From: Thomas Hellström
Graphics APIs like OpenGL 4.4 and Vulkan require the graphics driver
to provide coherent graphics memory, meaning that the GPU sees any
content written to the coherent memory on the next GPU operation that
touches that memory, and the CPU sees any content written by the
From: Thomas Hellstrom
Similar to write-coherent resources, make sure that from the user-space
point of view, GPU-rendered content is automatically available for
reading by the CPU.
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
From: Thomas Hellstrom
Add two utilities to 1) write-protect and 2) clean all ptes pointing into
a range of an address space.
The utilities are intended to aid in tracking dirty pages (either
driver-allocated system memory or pci device memory).
The write-protect utility should be used in
From: Thomas Hellstrom
Without the lock, anybody modifying a pte from within this function might
have it concurrently modified by someone else.
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Huang Ying
Cc: Jérôme Glisse
Cc:
From: Thomas Hellstrom
Add the callbacks necessary to implement emulated coherent memory for
surfaces. Add a flag to the gb_surface_create ioctl to indicate that
surface memory should be coherent.
Also bump the drm minor version to signal the availability of coherent
surfaces.
Cc: Andrew Morton
From: Thomas Hellstrom
The caller needs to make sure that the vma is not torn down during the
lock operation and can also use the i_mmap_rwsem for file-backed vmas.
Remove the BUG_ON. We could, as an alternative, add a test that either
vma->vm_mm->mmap_sem or
/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
@@ -0,0 +1,421 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/**
+ *
+ * Copyright 2019 VMware, Inc., Palo Alto, CA., USA
+ *
+ * Permission is hereby granted, free of charge, to any person obtai
This series converts all users of pagewalk positive callback return values
to use negative values instead, so that the positive values are free for
pagewalk control. Then the return value PAGE_WALK_CONTINUE is introduced.
That value is intended for callbacks to indicate that they've handled the
From: Linus Torvalds
The pagewalk code is being reworked to have positive callback return codes
do walk control. Avoid using positive return codes: "1" is replaced by
"-EBUSY".
Co-developed-by: Thomas Hellstrom
Signed-off-by: Thomas Hellstrom
---
mm/mempolicy.c | 16
1 file
From: Thomas Hellstrom
The pagewalk code is being reworked to have positive callback return codes
mean "walk control". Avoid using positive return codes: "1" is replaced by
"-ENOBUFS".
Signed-off-by: Thomas Hellstrom
---
fs/proc/task_mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
From: Linus Torvalds
When we have both a pmd_entry() and a pte_entry() callback, in some
situations it is desirable not to traverse the pte level.
Reserve positive callback return values for walk control and define a
return value PAGE_WALK_CONTINUE that means skip lower level traversal
and
From: Thomas Hellstrom
We always do dirty tracking on the PTE level. This means that any huge
pmds we encounter should be read-only and not dirty: We can just skip
those. Write-enabled huge pmds should not exist. They should have been
split when made write-enabled. Warn and attempt to split
On 10/10/19 3:05 PM, Peter Zijlstra wrote:
On Thu, Oct 10, 2019 at 02:43:10PM +0200, Thomas Hellström (VMware) wrote:
+/**
+ * struct wp_walk - Private struct for pagetable walk callbacks
+ * @range: Range for mmu notifiers
+ * @tlbflush_start: Address of first modified pte
+ * @tlbflush_end
On 10/10/19 4:07 AM, Linus Torvalds wrote:
On Wed, Oct 9, 2019 at 6:10 PM Thomas Hellström (VMware)
wrote:
Your original patch does exactly the same!
Oh, no. You misread my original patch.
Look again.
The logic in my original patch was very different. It said that
- *if* we have
On 10/10/19 1:51 AM, Linus Torvalds wrote:
On Wed, Oct 9, 2019 at 3:31 PM Thomas Hellström (VMware)
wrote:
On 10/9/19 10:20 PM, Linus Torvalds wrote:
You *have* to call split_huge_pmd() if you're doing to call the
pte_entry() function.
End of story.
So is it that you want pte_entry
On 10/10/19 12:30 AM, Thomas Hellström (VMware) wrote:
On 10/9/19 10:20 PM, Linus Torvalds wrote:
On Wed, Oct 9, 2019 at 1:06 PM Thomas Hellström (VMware)
wrote:
On 10/9/19 9:20 PM, Linus Torvalds wrote:
Don't you get it? There *is* no PTE level if you didn't split.
Hmm, This paragraph makes
On 10/9/19 10:20 PM, Linus Torvalds wrote:
On Wed, Oct 9, 2019 at 1:06 PM Thomas Hellström (VMware)
wrote:
On 10/9/19 9:20 PM, Linus Torvalds wrote:
Don't you get it? There *is* no PTE level if you didn't split.
Hmm, This paragraph makes me think we have very different perceptions about
On 10/9/19 9:20 PM, Linus Torvalds wrote:
No. Your logic is garbage. The above code is completely broken.
YOU CAN NOT AVOID THE SPLIT AND THEN GO ON AT THE PTE LEVEL.
Don't you get it? There *is* no PTE level if you didn't split.
Hmm, This paragraph makes me think we have very different
On 10/9/19 6:21 PM, Linus Torvalds wrote:
On Wed, Oct 9, 2019 at 8:27 AM Kirill A. Shutemov wrote:
Do we have any current user that expect split_huge_pmd() in this scenario.
No. There are no current users of the pmd callback and the pte
callback at all, that I could find.
But it looks like