On 19/08/25 10:38, Phil Auld wrote:
...
> A few of us use: https://github.com/qais-yousef/sched_tp.git
>
> This has all the current scheduler raw tps exposed, I believe, but would
> need updates for these new ones, of course.
>
> I have a gitlab fork that our perf team uses to get at the ones
On Wed, Aug 20, 2025 at 8:27 AM Steven Rostedt wrote:
>
> On Thu, 24 Jul 2025 19:56:42 +0800
> Huacai Chen wrote:
>
> > On Thu, Jul 24, 2025 at 9:51 AM Bibo Mao wrote:
> > >
> > >
> > >
> > > On 2025/7/24 9:46 AM, Steven Rostedt wrote:
> > > > On Thu, 24 Jul 2025 09:39:40 +0800
> > > > Bibo Mao
Masami Hiramatsu (Google) wrote:
>
> Good catch! Hmm, previously we guaranteed that find_first_fprobe_node()
> must be called under the RCU read lock or with fprobe_mutex held, so that the
> node list should not be changed. But according to the comment of
> rhltable_lookup(), we need to lock the r
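For reference, the locking rule from the rhashtable API: rhltable_lookup() and any walk of the returned list must run inside an RCU read-side critical section. A minimal sketch, assuming the fprobe code keeps its nodes in an rhltable keyed by address (the table and field names here are illustrative):

    rcu_read_lock();
    head = rhltable_lookup(&fprobe_ip_table, &ip, fprobe_rht_params);
    if (head) {
        /* walk the bucket list while still under rcu_read_lock() */
        rhl_for_each_entry_rcu(node, pos, head, hlist)
            if (node->addr == ip)
                break;
    }
    rcu_read_unlock();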
On Mon, 18 Aug 2025 07:33:32 +
Ye Weihua wrote:
> This warning was triggered during testing on v6.16:
>
> notifier callback ftrace_suspend_notifier_call already registered
> WARNING: CPU: 2 PID: 86 at kernel/notifier.c:23
> notifier_chain_register+0x44/0xb0
> ...
> Call Trace:
>
> blocki
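The warning fires because notifier_chain_register() warns when the same notifier_block is added to a chain twice. A minimal sketch of the double-registration pattern (the PM-notifier call is illustrative; the report's actual call path is elided above):

    static struct notifier_block ftrace_suspend_notifier = {
        .notifier_call = ftrace_suspend_notifier_call,
    };

    register_pm_notifier(&ftrace_suspend_notifier);
    register_pm_notifier(&ftrace_suspend_notifier); /* second call trips the WARN */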
On Sun, 17 Aug 2025 16:21:46 +0200
Ben Hutchings wrote:
> Commit 26dda5769509 "tools/bootconfig: Cleanup bootconfig footer size
> calculations" replaced some expressions of type int with the
> BOOTCONFIG_FOOTER_SIZE macro, which expands to an expression of type
> size_t, which is unsigned.
>
> O
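A standalone illustration of the pitfall being reported (not the bootconfig code itself): once one operand of an expression is size_t, the arithmetic and comparison are done unsigned, so a nominally negative result wraps instead of going below zero.

    #include <stdio.h>
    #include <stddef.h>

    #define FOOTER_SIZE ((size_t)12)    /* stands in for BOOTCONFIG_FOOTER_SIZE */

    int main(void)
    {
        int filesize = 8;

        /* 8 - 12 is computed as size_t and wraps to a huge value,
         * so the "negative" check can never trigger */
        if (filesize - FOOTER_SIZE < 0)
            printf("never reached\n");
        else
            printf("wrapped to %zu\n", filesize - FOOTER_SIZE);
        return 0;
    }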
On Wed, 13 Aug 2025 02:30:44 +
Tengda Wu wrote:
> Since the reader's hash is always tied to its file descriptor (fd),
> the writer cannot directly manage the reader's hash. To fix this,
> introduce a refcount for ftrace_hash, initialized to 1. The count
> is incremented only when a reader op
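A minimal sketch of the refcounting scheme described above, using the kernel's refcount_t; the field and helper names are assumptions, not the patch's actual code:

    struct ftrace_hash {
        /* ... existing members ... */
        refcount_t ref;    /* hypothetical field, set to 1 at creation */
    };

    /* a reader (fd) takes its own reference */
    static struct ftrace_hash *ftrace_hash_get(struct ftrace_hash *hash)
    {
        refcount_inc(&hash->ref);
        return hash;
    }

    /* writer and readers drop references; the last one frees the hash */
    static void ftrace_hash_put(struct ftrace_hash *hash)
    {
        if (refcount_dec_and_test(&hash->ref))
            free_ftrace_hash(hash);
    }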
On Wed, 13 Aug 2025 02:30:43 +
Tengda Wu wrote:
> The free_ftrace_hash call is just unnecessary in this context since
> we shouldn't free the global hash that we don't own. Remove this call
> to fix the issue.
This is incorrect as it is only unnecessary if it's a read.
The code above is:
On Thu, 24 Jul 2025 19:56:42 +0800
Huacai Chen wrote:
> On Thu, Jul 24, 2025 at 9:51 AM Bibo Mao wrote:
> >
> >
> >
> > On 2025/7/24 9:46 AM, Steven Rostedt wrote:
> > > On Thu, 24 Jul 2025 09:39:40 +0800
> > > Bibo Mao wrote:
> > >
> > >>>#define kvm_fpu_load_symbol \
> > >>> {
On Tue, 19 Aug 2025 18:42:55 -0400 Steven Rostedt wrote:
> On Mon, 16 Jun 2025 16:17:35 -0700
> Andrew Morton wrote:
>
> > On Mon, 16 Jun 2025 21:19:18 +0200 David Hildenbrand
> > wrote:
> >
> > > >>> Fixes: 4cc79b3303f22 ("mm/migration: add trace events for base page
> > > >>> and HugeTLB
On Mon, 16 Jun 2025 16:17:35 -0700
Andrew Morton wrote:
> On Mon, 16 Jun 2025 21:19:18 +0200 David Hildenbrand wrote:
>
> > >>> Fixes: 4cc79b3303f22 ("mm/migration: add trace events for base page and
> > >>> HugeTLB migrations")
> > >>> Signed-off-by: Steven Rostedt (Google)
> > >>
> > >> L
On Thu, 14 Aug 2025 13:05:35 -0400
Sasha Levin wrote:
> On Wed,
> Got a small build error:
>
> kernel/trace/trace_functions_graph.c: In function ‘get_return_for_leaf’:
> ./include/linux/stddef.h:16:33: error: ‘struct fgraph_retaddr_ent_entry’ has
> no member named ‘args’
> 16 | #define off
On Tue, 19 Aug 2025 07:41:52 -0600 Nico Pache wrote:
> The following series provides khugepaged with the capability to collapse
> anonymous memory regions to mTHPs.
>
> ...
>
> - I created a test script that I used to push khugepaged to its limits
> while monitoring a number of stats and trac
On Tue, Aug 19, 2025 at 08:37:00PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Block layer maps MMIO memory through the dma_map_phys() interface
> with the help of the DMA_ATTR_MMIO attribute. There is a need to unmap
> that memory with the appropriate unmap function, something which
> wasn't possible before adding a new REQ attribute to the block layer in
> the previous patch.
On Mon, Jul 28, 2025 at 11:34:56PM +0200, Jiri Olsa wrote:
> Peter, do you have more comments?
I'm not really a fan of this 'syscall is faster than exception' stuff. Yes
it is for current hardware, but I suspect much of this will be a
maintenance burden 'soon'.
Anyway, I'll queue the patches tomor
On Sun, Jul 20, 2025 at 01:21:20PM +0200, Jiri Olsa wrote:
> +static bool __is_optimized(uprobe_opcode_t *insn, unsigned long vaddr)
> +{
> +	struct __packed __arch_relative_insn {
> +		u8 op;
> +		s32 raddr;
> +	} *call = (struct __arch_relative_insn *) insn;
Not so
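For context, the quoted struct lays a 5-byte x86 relative call over the probed instruction bytes: one opcode byte plus a signed 32-bit displacement. A hedged sketch of the check such a helper would make (0xE8 is the CALL rel32 opcode; whether the patch also validates the target address is not visible in the excerpt):

    /* insn points at the instruction bytes read from the probed address */
    if (call->op != 0xE8)   /* CALL rel32 opcode */
        return false;

    /* target = address of the next instruction + signed displacement;
     * trampoline_vaddr is an assumed name for the trampoline address */
    return vaddr + 5 + call->raddr == trampoline_vaddr;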
On Tue, Aug 19, 2025, at 20:20, Keith Busch wrote:
> On Tue, Aug 19, 2025 at 08:36:58PM +0300, Leon Romanovsky wrote:
>> static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
>> struct blk_dma_iter *iter, struct phys_vec *vec)
>> {
>> -iter->addr = dma_ma
On Tue, Aug 19, 2025 at 08:36:59PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Make sure that the CPU is not synced and the IOMMU is configured to take
> the MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
We may have a minor patch conflict here with my unmerged dma metadata
s
On Tue, Aug 19, 2025 at 08:36:55PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
> that operate directly on physical addresses instead of page+offset
> parameters. This provides a more efficient interface for driv
On Tue, Aug 19, 2025 at 08:36:58PM +0300, Leon Romanovsky wrote:
> static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
> struct blk_dma_iter *iter, struct phys_vec *vec)
> {
> - iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
> -
On Tue, 19 Aug 2025 10:51:52 +
Luo Gengkun wrote:
> Both tracing_mark_write and tracing_mark_raw_write call
> __copy_from_user_inatomic under preempt_disable(). But in some cases,
> __copy_from_user_inatomic may trigger a page fault and subtly call
> schedule(). And if a task is migrated to
From: Leon Romanovsky
Block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute. There is a need to unmap
that memory with the appropriate unmap function, something which
wasn't possible before adding a new REQ attribute to the block layer in
the previous patch.
Si
From: Leon Romanovsky
Make sure that the CPU is not synced and the IOMMU is configured to take
the MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 13 +++--
include/linux/blk-mq-dma.h | 6 +-
include/linux/blk_ty
From: Leon Romanovsky
Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters.
T
From: Leon Romanovsky
In case a peer-to-peer transaction traverses the host bridge,
the IOMMU needs to have the IOMMU_MMIO flag, together with skipping
the CPU sync.
The latter was handled by the provided DMA_ATTR_SKIP_CPU_SYNC flag,
but the IOMMU flag was missed, due to the assumption that such memory
can be trea
From: Leon Romanovsky
General dma_direct_map_resource() is going to be removed
in the next patch, so simply open-code it in the xen driver.
Reviewed-by: Juergen Gross
Signed-off-by: Leon Romanovsky
---
drivers/xen/swiotlb-xen.c | 21 -
1 file changed, 20 insertions(+), 1 deletion(
From: Leon Romanovsky
After the introduction of dma_map_phys(), there is no need to convert
from a physical address to a struct page in order to map a page. So let's
use it directly.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --g
From: Leon Romanovsky
Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
that operate directly on physical addresses instead of page+offset
parameters. This provides a more efficient interface for drivers that
already have physical addresses available.
The new functions are
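A sketch of the conversion this enables; the dma_map_phys()/dma_unmap_phys() signatures are taken from this (not yet merged) series and may change before landing:

    /* before: the driver converts phys -> page just to map it */
    dma_addr = dma_map_page(dev, phys_to_page(paddr),
                            offset_in_page(paddr), size, DMA_TO_DEVICE);

    /* after: map the physical address directly */
    dma_addr = dma_map_phys(dev, paddr, size, DMA_TO_DEVICE, 0);

    /* and the matching unmap */
    dma_unmap_phys(dev, dma_addr, size, DMA_TO_DEVICE, 0);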
From: Leon Romanovsky
Convert the KMSAN DMA handling function from page-based to physical
address-based interface.
The refactoring changes the kmsan_handle_dma() parameters from
(struct page *page, size_t offset, size_t size) to (phys_addr_t phys,
size_t size). The existing semantics where
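The prototype change, as described (the "after" form is per this series, not yet mainline):

    /* before: page-based */
    void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
                          enum dma_data_direction dir);

    /* after: physical-address-based */
    void kmsan_handle_dma(phys_addr_t phys, size_t size,
                          enum dma_data_direction dir);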
From: Leon Romanovsky
Extend the base DMA page API to handle the MMIO flow, and follow the
existing dma_map_resource() implementation in relying on dma_map_direct()
alone to take the DMA direct path.
Signed-off-by: Leon Romanovsky
---
kernel/dma/mapping.c | 26 +-
1 file changed, 21 inserti
From: Leon Romanovsky
Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary.
From: Leon Romanovsky
Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses
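The rename with its accompanying signature change, sketched from the description (the "after" prototype is this series' form):

    /* before */
    dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
                                  unsigned long offset, size_t size,
                                  enum dma_data_direction dir, unsigned long attrs);

    /* after: same operation, expressed in physical addresses */
    dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys,
                                  size_t size, enum dma_data_direction dir,
                                  unsigned long attrs);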
From: Leon Romanovsky
Combine iommu_dma_*map_phys with iommu_dma_*map_resource interfaces in
order to allow a single phys_addr_t flow.
In the following patches, the iommu_dma_map_resource() will be removed
in favour of iommu_dma_map_phys(..., attrs | DMA_ATTR_MMIO) flow.
Signed-off-by: Leon Roman
From: Leon Romanovsky
This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching the possibly non-KVA MMIO memory.
Also correct the incorrect caching attribute for the IOMMU: MMIO
memory should not be cacheable inside the IOMMU mapping, or it can
possibly create system problems. S
From: Leon Romanovsky
Convert the DMA debug infrastructure from page-based to physical address-based
mapping, in preparation for relying on physical addresses in the DMA mapping routines.
The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a phys_addr
From: Leon Romanovsky
As a preparation for the following map_page -> map_phys API conversion,
let's rename trace_dma_*map_page() to be trace_dma_*map_phys().
Signed-off-by: Leon Romanovsky
---
include/trace/events/dma.h | 4 ++--
kernel/dma/mapping.c | 4 ++--
2 files changed, 4 insertions(+
From: Leon Romanovsky
This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.
This attribute is especially useful for exporting dev
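A hedged usage sketch for the P2P case described above: mapping a peer device's BAR region with the new attribute (DMA_ATTR_MMIO and dma_map_phys() are both introduced by this series; bar_phys and len are illustrative):

    dma_addr_t dma_addr;

    dma_addr = dma_map_phys(dev, bar_phys, len, DMA_BIDIRECTIONAL,
                            DMA_ATTR_MMIO);
    if (dma_mapping_error(dev, dma_addr))
        return -ENOMEM;

    /* ... device performs P2P DMA ... */

    dma_unmap_phys(dev, dma_addr, len, DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);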
Changelog:
v4:
* Fixed kbuild error with a mismatch in the kmsan function declaration due to
a rebase error.
v3: https://lore.kernel.org/all/cover.1755193625.git.l...@kernel.org
* Fixed a typo in the word "cacheable"
* Simplified kmsan patch a lot to be simple argument refactoring
v2: https://lore.kernel.o
On 8/19/25 6:18 PM, Randy Dunlap wrote:
On 8/19/25 1:49 AM, Mehdi Ben Hadj Khelifa wrote:
-Changed 'Dyamically' to 'Dynamically' in trace/events.rst
under sections 7.1 and 7.3
Signed-off-by: Mehdi Ben Hadj Khelifa
Reviewed-by: Randy Dunlap
Thanks.
---
Documentation/trace/events.rst
On 8/19/25 1:49 AM, Mehdi Ben Hadj Khelifa wrote:
> -Changed 'Dyamically' to 'Dynamically' in trace/events.rst
>
> under sections 7.1 and 7.3
>
> Signed-off-by: Mehdi Ben Hadj Khelifa
Reviewed-by: Randy Dunlap
Thanks.
> ---
> Documentation/trace/events.rst | 8
> 1 file changed,
On Thu, Aug 14, 2025 at 12:15:04PM +0900, Masami Hiramatsu wrote:
> Hi Ryan,
>
> On Wed, 13 Aug 2025 01:21:01 +0900
> Ryan Chung wrote:
>
> > Resolve TODO in `__register_trace_fprobe()`:
> > parse `tf->symbol` robustly (support `sym!filter` and comma-separated
> > lists), trim tokens, ignore e
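A sketch of the kind of parsing the TODO asks for, splitting a comma-separated list and then each token at '!'; the error handling and the registration step are assumptions, not the submitted patch:

    char *buf, *tok, *p;

    buf = kstrdup(tf->symbol, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    p = buf;
    while ((tok = strsep(&p, ",")) != NULL) {
        char *filter = strchr(tok, '!');

        if (filter)
            *filter++ = '\0';   /* tok = symbol part, filter = rest */
        tok = strim(tok);       /* trim surrounding whitespace */
        if (!*tok)
            continue;           /* ignore empty tokens */
        /* register one symbol (+ optional filter) here */
    }
    kfree(buf);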
On Sun, 17 Aug 2025 10:55:14 +0300
Mohammad Gomaa wrote:
> Hello,
>
> This patch adds tracepoints to i2c-core-base to aid with debugging I2C
> probing failures.
>
> The motivation for this comes from my work in Google Summer of Code (GSoC)
> 2025:
> "ChromeOS Platform Input Device Quality Mon
On Sun, Jul 20, 2025 at 01:21:18PM +0200, Jiri Olsa wrote:
> +static void destroy_uprobe_trampoline(struct uprobe_trampoline *tramp)
> +{
> + /*
> + * We do not unmap and release uprobe trampoline page itself,
> + * because there's no easy way to make sure none of the threads
> +
Hi Juri,
On Tue, Aug 19, 2025 at 04:02:04PM +0200 Juri Lelli wrote:
> On 19/08/25 12:34, Gabriele Monaco wrote:
> >
> >
> > On Tue, 2025-08-19 at 12:12 +0200, Peter Zijlstra wrote:
> > > On Tue, Aug 19, 2025 at 11:56:57AM +0200, Juri Lelli wrote:
> > > > Hi!
> > > >
> > > > On 14/08/25 17:08,
On Tue, Aug 19, 2025 at 7:49 AM Nico Pache wrote:
>
> Now that we can collapse to mTHPs, let's update the admin guide to
> reflect these changes and provide proper guidance on how to utilize it.
>
> Reviewed-by: Bagas Sanjaya
> Signed-off-by: Nico Pache
I had a git send-email error and had to resend
On Tue, Aug 19, 2025 at 7:49 AM Nico Pache wrote:
>
> With mTHP support in place, let's add the per-order mTHP stats for
> exceeding NONE, SWAP, and SHARED.
>
> Signed-off-by: Nico Pache
I had a git send-email error and had to resend this patch (12) and patch
13, but I forgot the in-reply-to.
please i
On Tue, 2025-08-19 at 16:02 +0200, Juri Lelli wrote:
> On 19/08/25 12:34, Gabriele Monaco wrote:
> >
> >
> > On Tue, 2025-08-19 at 12:12 +0200, Peter Zijlstra wrote:
> > > On Tue, Aug 19, 2025 at 11:56:57AM +0200, Juri Lelli wrote:
> > > > Hi!
> > > >
> > > > On 14/08/25 17:08, Gabriele Monaco w
With mTHP support in place, let's add the per-order mTHP stats for
exceeding NONE, SWAP, and SHARED.
Signed-off-by: Nico Pache
---
Documentation/admin-guide/mm/transhuge.rst | 17 +
include/linux/huge_mm.h| 3 +++
mm/huge_memory.c | 7
Now that we can collapse to mTHPs, let's update the admin guide to
reflect these changes and provide proper guidance on how to utilize it.
Reviewed-by: Bagas Sanjaya
Signed-off-by: Nico Pache
---
Documentation/admin-guide/mm/transhuge.rst | 19 +--
1 file changed, 13 insertions(+)
On 19/08/25 12:34, Gabriele Monaco wrote:
>
>
> On Tue, 2025-08-19 at 12:12 +0200, Peter Zijlstra wrote:
> > On Tue, Aug 19, 2025 at 11:56:57AM +0200, Juri Lelli wrote:
> > > Hi!
> > >
> > > On 14/08/25 17:08, Gabriele Monaco wrote:
...
> > > > @@ -1482,6 +1486,7 @@ static void update_curr_dl_
Now that we can collapse to mTHPs, let's update the admin guide to
reflect these changes and provide proper guidance on how to utilize it.
Reviewed-by: Bagas Sanjaya
Signed-off-by: Nico Pache
---
Documentation/admin-guide/mm/transhuge.rst | 19 +--
1 file changed, 13 insertions(+)
With mTHP support in place, let's add the per-order mTHP stats for
exceeding NONE, SWAP, and SHARED.
Signed-off-by: Nico Pache
---
Documentation/admin-guide/mm/transhuge.rst | 17 +
include/linux/huge_mm.h| 3 +++
mm/huge_memory.c | 7
khugepaged may try to collapse a mTHP to a smaller mTHP, resulting in
some pages being unmapped. Skip these cases until we have a way to check
if it's OK to collapse to a smaller mTHP size (like in the case of a
partially mapped folio).
This patch is inspired by Dev Jain's work on khugepaged mTHP s
From: Baolin Wang
When only non-PMD-sized mTHP is enabled (such as only 64K mTHP enabled),
we should also allow kicking khugepaged to attempt scanning and collapsing
64K mTHP. Modify hugepage_pmd_enabled() to support mTHP collapse, and
while we are at it, rename it to make the function name more
There are cases where, if an attempted collapse fails, all subsequent
orders are guaranteed to also fail. Avoid these collapse attempts by
bailing out early.
Signed-off-by: Nico Pache
---
mm/khugepaged.c | 31 ++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff
From: Dev Jain
Pass order to alloc_charge_folio() and update mTHP statistics.
Reviewed-by: Baolin Wang
Acked-by: David Hildenbrand
Co-developed-by: Nico Pache
Signed-off-by: Nico Pache
Signed-off-by: Dev Jain
---
Documentation/admin-guide/mm/transhuge.rst | 8
include/linux/huge_
The khugepaged daemon and madvise_collapse have two different
implementations that do almost the same thing.
Create collapse_single_pmd to increase code reuse and create an entry
point to these two users.
Refactor madvise_collapse and collapse_scan_mm_slot to use the new
collapse_single_pmd funct
Add the order to the tracepoints to give better insight into what order
is being operated on for khugepaged.
Acked-by: David Hildenbrand
Reviewed-by: Baolin Wang
Signed-off-by: Nico Pache
---
include/trace/events/huge_memory.h | 34 +++---
mm/khugepaged.c
The hpage_collapse functions describe functions used by madvise_collapse
and khugepaged. Remove the unnecessary hpage prefix to shorten the
function name.
Reviewed-by: Liam R. Howlett
Reviewed-by: Zi Yan
Reviewed-by: Baolin Wang
Acked-by: David Hildenbrand
Signed-off-by: Nico Pache
---
mm/kh
From: Baolin Wang
We have now allowed mTHP collapse, but thp_vma_allowable_order() still only
checks if the PMD-sized mTHP is allowed to collapse. This prevents scanning
and collapsing of 64K mTHP when only 64K mTHP is enabled. Thus, we should
modify the checks to allow all large orders of anonym
The following series provides khugepaged with the capability to collapse
anonymous memory regions to mTHPs.
To achieve this, we generalize the khugepaged functions to no longer depend
on PMD_ORDER. Then during the PMD scan, we use a bitmap to track chunks of
pages (defined by KHUGEPAGED_MTHP_MIN_OR
For khugepaged to support different mTHP orders, we must generalize this
to check if the PMD is not shared by another VMA and the order is enabled.
To ensure madvise_collapse can support working on mTHP orders without the
PMD order enabled, we need to convert hugepage_vma_revalidate to take a
bitm
Introduce the ability for khugepaged to collapse to different mTHP sizes.
While scanning PMD ranges for potential collapse candidates, keep track
of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER ptes. If
mTHPs are en
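A toy sketch of the bitmap bookkeeping described above; the constants and helpers are illustrative, not the series' actual code. Each bit covers one chunk of order KHUGEPAGED_MIN_MTHP_ORDER ptes within the scanned PMD range:

    #define MIN_MTHP_ORDER  2   /* assumption: order-2 (4-pte) chunks */
    #define NR_CHUNKS       (1 << (HPAGE_PMD_ORDER - MIN_MTHP_ORDER))

    DECLARE_BITMAP(chunk_map, NR_CHUNKS);

    /* during the PMD scan: mark the chunk containing a usable pte */
    __set_bit(pte_index >> MIN_MTHP_ORDER, chunk_map);

    /* after the scan: the full PMD range collapses only if every chunk
     * is marked; smaller runs of set bits yield smaller mTHP orders */
    if (bitmap_full(chunk_map, NR_CHUNKS))
        order = HPAGE_PMD_ORDER;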
Generalize the order of the __collapse_huge_page_* functions
to support future mTHP collapse.
mTHP collapse can suffer from inconsistent behavior and memory-waste
"creep". Disable swapin and shared support for mTHP collapse.
No functional changes in this patch.
Reviewed-by: Baolin Wang
Acked-by
On Tue, 2025-08-19 at 12:08 +0200, Juri Lelli wrote:
> On 19/08/25 11:48, Gabriele Monaco wrote:
> > That's a good point, I need to check the actual overhead...
> >
> > One thing to note is that this timer is used only on state
> > constraints,
> > one could write roughly the same monitor like t
On Tue, 2025-08-19 at 11:14 +0200, Juri Lelli wrote:
> On 19/08/25 10:53, Juri Lelli wrote:
> > Hi!
> >
> > On 14/08/25 17:08, Gabriele Monaco wrote:
>
> ...
>
> > > + static bool verify_constraint(enum states curr_state, enum events event,
> > > +                               enum states
On Tue, 2025-08-19 at 10:53 +0200, Juri Lelli wrote:
> Hi!
>
> On 14/08/25 17:08, Gabriele Monaco wrote:
> >
> > +
> > +Examples
> > +
>
> Maybe add subsection titles to better mark separation between
> different examples?
Sure, makes sense.
>
> > +The 'wip' (wakeup in preemptive) exa
Both tracing_mark_write and tracing_mark_raw_write call
__copy_from_user_inatomic under preempt_disable(). But in some cases,
__copy_from_user_inatomic may trigger a page fault and subtly call
schedule(). And if a task is migrated to another CPU, the following
warning will be triggered:
if (R
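For reference, the usual guard around this primitive: __copy_from_user_inatomic() is meant to run with page faults disabled, so a fault returns a nonzero "bytes not copied" count instead of sleeping in the fault handler. Whether this is the exact fix the patch takes is not shown in the excerpt.

    preempt_disable();
    pagefault_disable();
    /* fails fast (returns bytes not copied) instead of faulting in pages */
    ret = __copy_from_user_inatomic(dst, ubuf, size);
    pagefault_enable();
    preempt_enable();

    if (ret)
        return -EFAULT;   /* caller decides how to handle the partial copy */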
On Tue, 2025-08-19 at 12:12 +0200, Peter Zijlstra wrote:
> On Tue, Aug 19, 2025 at 11:56:57AM +0200, Juri Lelli wrote:
> > Hi!
> >
> > On 14/08/25 17:08, Gabriele Monaco wrote:
> > > Add the following tracepoints:
> > >
> > > * sched_dl_throttle(dl):
> > > Called when a deadline entity is
On Tue, Aug 19, 2025 at 11:56:57AM +0200, Juri Lelli wrote:
> Hi!
>
> On 14/08/25 17:08, Gabriele Monaco wrote:
> > Add the following tracepoints:
> >
> > * sched_dl_throttle(dl):
> > Called when a deadline entity is throttled
> > * sched_dl_replenish(dl):
> > Called when a deadline entit
Hi!
On 14/08/25 17:08, Gabriele Monaco wrote:
> Add the following tracepoints:
>
> * sched_dl_throttle(dl):
> Called when a deadline entity is throttled
> * sched_dl_replenish(dl):
> Called when a deadline entity's runtime is replenished
> * sched_dl_server_start(dl):
> Called when a
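These are raw (not exported to tracefs) tracepoints, so they would typically be declared along these lines; the TP_PROTO below is an assumption based on the names in the list:

    DECLARE_TRACE(sched_dl_throttle,
        TP_PROTO(struct sched_dl_entity *dl),
        TP_ARGS(dl));

    DECLARE_TRACE(sched_dl_replenish,
        TP_PROTO(struct sched_dl_entity *dl),
        TP_ARGS(dl));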
On Tue, 2025-08-19 at 11:18 +0200, Juri Lelli wrote:
> Hi!
>
> On 14/08/25 17:08, Gabriele Monaco wrote:
>
> ...
>
> > +/*
> > + * ha_monitor_init_env - setup timer and reset all environment
> > + *
> > + * Called from a hook in the DA start functions, it supplies the
> > da_mon
> > + * corre
On 19/08/25 10:53, Juri Lelli wrote:
> Hi!
>
> On 14/08/25 17:08, Gabriele Monaco wrote:
...
> > + static bool verify_constraint(enum states curr_state, enum events event,
> > +                               enum states next_state)
> > + {
> > +     bool res = true;
> > +
> > +     /* Validate guards
On Tue, Aug 19, 2025 at 09:49:20AM +0200, Nam Cao wrote:
> On Fri, Aug 15, 2025 at 03:40:16PM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 06, 2025 at 10:01:20AM +0200, Nam Cao wrote:
> >
> > > +/*
> > > + * The two trace points below may not work as expected for fair tasks due
> > > + * to delaye
-Changed 'Dyamically' to 'Dynamically' in trace/events.rst
under sections 7.1 and 7.3
Signed-off-by: Mehdi Ben Hadj Khelifa
---
Documentation/trace/events.rst | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/Documentation/trace/events.rst b/Documentation/trace/events.
On Fri, Aug 15, 2025 at 03:48:51PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 06, 2025 at 10:01:21AM +0200, Nam Cao wrote:
> > Add "real-time scheduling" monitor, which validates that SCHED_RR and
> > SCHED_FIFO tasks are scheduled before tasks with normal and extensible
> > scheduling policies
>
On Fri, Aug 15, 2025 at 03:40:16PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 06, 2025 at 10:01:20AM +0200, Nam Cao wrote:
>
> > +/*
> > + * The two trace points below may not work as expected for fair tasks due
> > + * to delayed dequeue. See:
> > + *
> > https://lore.kernel.org/lkml/179674c6-f8