CC: Guo Ren
> Reported-by: kernel test robot
> Signed-off-by: kernel test robot
> Signed-off-by: Julia Lawall
Reviewed-by: Pekka Enberg
fday won't work.
>
> Signed-off-by: Guo Ren
> Cc: Atish Patra
> Cc: Palmer Dabbelt
> Cc: Vincent Chen
Reviewed-by: Pekka Enberg
utl...@arm.com/
> Signed-off-by: Guo Ren
> Cc: Mark Rutland
> Cc: Pekka Enberg
> Cc: Palmer Dabbelt
Reviewed-by: Pekka Enberg
) helper
> -Split one long line code into two
Please also make no_context() use the new helper. Other than that:
Reviewed-by: Pekka Enberg
On Mon, Nov 30, 2020 at 7:33 AM Eric Lin wrote:
>
> In the page fault handler, an access to user-space memory
> without get/put_user() or copy_from/to_user() routines is
> not resolved properly. Like arm and other architectures,
> we need to let it die earlier in page fault handler.
Fix looks good.
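The die-early policy being requested could be sketched, very roughly, as the following decision helper. This is an illustrative userspace model, not the actual riscv fault-handler code; all names are hypothetical:

```c
#include <assert.h>

/* Hypothetical sketch of the policy described above: a fault taken in
 * kernel mode on a user address is only recoverable when an
 * exception-table fixup exists (i.e. the access went through
 * get/put_user() or copy_{from,to}_user()); otherwise the handler
 * should die early instead of falling through to the user-fault path. */
enum fault_action { FAULT_SIGNAL_USER, FAULT_FIXUP, FAULT_DIE };

static enum fault_action classify_fault(int user_mode, int has_fixup)
{
	if (user_mode)
		return FAULT_SIGNAL_USER;	/* user fault: signal the task */
	return has_fixup ? FAULT_FIXUP : FAULT_DIE;
}
```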
On Mon, Oct 26, 2020 at 08:26:54PM +0800, liush wrote:
> From: Liu Shaohua
>
> The argument to pfn_to_virt() should be pfn not the value of CSR_SATP.
>
> Reviewed-by: Palmer Dabbelt
> Reviewed-by: Anup Patel
> Signed-off-by: liush
Reviewed-by: Pekka Enberg
> ---
Hi,
On Wed, Sep 9, 2020 at 2:20 PM Wei Yongjun wrote:
>
> gcc report build warning as follows:
>
> arch/riscv/mm/fault.c:81:1: warning:
> 'inline' is not at beginning of declaration [-Wold-style-declaration]
>81 | static void inline vmalloc_fault(struct pt_regs *regs, int code,
> unsigned l
For the series:
Reviewed-by: Pekka Enberg
- Pekka
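The warning above is purely about specifier ordering: `static void inline` places the `inline` function specifier after the return type, and gcc's -Wold-style-declaration flags that. A minimal standalone reproduction of the corrected form (stub body, illustrative name):

```c
#include <assert.h>

/* gcc's -Wold-style-declaration fires because "static void inline"
 * puts the 'inline' specifier after the return type; the accepted
 * order puts all declaration specifiers before the type:
 * "static inline void". Corrected form, as a trivial stub: */
static inline int specifier_order_demo(int x)	/* was: static int inline */
{
	return x + 1;
}
```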
Hi,
On Mon, Aug 31, 2020 at 9:15 AM Zong Li wrote:
> If the number of sets is one, it means that the cache is fully associative, then
> we don't need to fill the ways number, just keep way number as zero,
> so here we want to find the fully associative case first and make the
> if expression fail at the be
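A rough standalone sketch of that `sets == 1` special case (function and parameter names are illustrative, not the kernel's cacheinfo code):

```c
#include <assert.h>

/* Sketch of the logic described above: when a cache level reports a
 * single set it is fully associative, so the ways field is left at
 * zero rather than computed from size / (sets * line_size). */
static unsigned int cache_ways(unsigned int size, unsigned int sets,
			       unsigned int line_size)
{
	if (sets == 1)		/* fully associative: keep ways as zero */
		return 0;
	return size / (sets * line_size);
}
```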
On Fri, Aug 28, 2020 at 10:09 AM Zong Li wrote:
> +uintptr_t get_cache_geometry(u32 level, enum cache_type type)
> +{
> + struct cacheinfo *this_leaf = get_cacheinfo(level, type);
> + uintptr_t ret = (this_leaf->ways_of_associativity << 16 |
> +this_leaf->cohere
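The truncated snippet packs the cache geometry into a single word. Assuming the usual auxiliary-vector convention (associativity in the upper 16 bits, line size in the lower 16 — treat the exact layout as an assumption here), the pack/unpack shape is:

```c
#include <assert.h>

/* Standalone sketch of the geometry word: ways of associativity in
 * bits 31..16, coherency line size in bits 15..0. Helper names are
 * illustrative, not the kernel's. */
static unsigned long pack_geometry(unsigned int ways, unsigned int line_size)
{
	return ((unsigned long)ways << 16) | line_size;
}

static unsigned int geometry_ways(unsigned long geo)
{
	return (unsigned int)(geo >> 16);
}

static unsigned int geometry_line(unsigned long geo)
{
	return (unsigned int)(geo & 0xffff);
}
```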
Hi,
On Fri, Aug 28, 2020 at 10:09 AM Zong Li wrote:
>
> Set cacheinfo.{size,sets,line_size} for each cache node, then we can
> get this information from userland through the auxiliary vector.
>
> Signed-off-by: Zong Li
> ---
> arch/riscv/kernel/cacheinfo.c | 59 ++-
On Wed, Aug 26, 2020 at 5:54 PM Nicholas Piggin wrote:
>
> Cc: Paul Walmsley
> Cc: Palmer Dabbelt
> Cc: Albert Ou
> Cc: linux-ri...@lists.infradead.org
> Acked-by: Palmer Dabbelt
> Signed-off-by: Nicholas Piggin
Reviewed-by: Pekka Enberg
From: Pekka Enberg
The commit 92181f190b649f7ef2b79cbf5c00f26ccc66da2a ("x86: optimise
x86's do_page_fault (C entry point for the page fault path)") from 2009
shows significant stack savings when infrequent page fault handling
paths are moved out of line with the "noinline" annotation.
under it.
An alternative approach for this patch would be to somehow make the
lock in count_partial() more granular, but I don't know how feasible
that actually is.
Anyway, I am OK with this approach:
Reviewed-by: Pekka Enberg
You still need to convince Christoph, though, because he had
objections over this approach.
- Pekka
Hi Christopher,
On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter wrote:
>
> On Fri, 7 Aug 2020, Pekka Enberg wrote:
>
> > Why do you consider this to be a fast path? This is all partial list
> > accounting when we allocate/deallocate a slab, no? Just like
> > ___s
Hi KyongHo and David,
On 07.08.20 09:08, Pekka Enberg wrote:
> > > I think having more knowledge of DRAM controller details in the OS
> > > would be potentially beneficial for better page allocation policy, so
> > > maybe try come up with something more gener
On Wed, Aug 19, 2020 at 07:29:10PM +0800, Wang Hai wrote:
> Remove asm/io_apic.h which is included more than once
>
> Reported-by: Hulk Robot
> Signed-off-by: Wang Hai
Reviewed-by: Pekka Enberg
On Tue, Aug 18, 2020 at 2:43 PM YueHaibing wrote:
>
> Remove duplicate header which is included twice.
>
> Signed-off-by: YueHaibing
Reviewed-by: Pekka Enberg
On Tue, Aug 11, 2020 at 5:25 AM wrote:
>
> From: Abel Wu
>
> The ALLOC_SLOWPATH statistics is missing in bulk allocation now.
> Fix it by doing statistics in alloc slow path.
>
> Signed-off-by: Abel Wu
Reviewed-by: Pekka Enberg
Hi Marco and Kees,
On Fri, Aug 07, 2020 at 08:06PM +0300, Pekka Enberg wrote:
> > Anything interesting in your .config? The fault does not reproduce
> > with 5.8.0 + x86-64 defconfig.
On Fri, Aug 7, 2020 at 8:18 PM Marco Elver wrote:
> It's quite close to defconfig, just so
Hi Christopher,
On Fri, 7 Aug 2020, Pekka Enberg wrote:
> > I think we can just default to the counters. After all, if I
> > understood correctly, we're talking about up to 100 ms time period
> > with IRQs disabled when count_partial() is called. As this is
> > trigge
Hi Marco,
On Fri, Aug 7, 2020 at 7:07 PM Marco Elver wrote:
> I found that the below debug-code using kmem_cache_alloc(), when using
> slub_debug=Z, results in the following crash:
>
> general protection fault, probably for non-canonical address
> 0xcca41caea170: [#1] PREEMPT SM
t up to 100 ms time period
with IRQs disabled when count_partial() is called. As this is
triggerable from user space, that's a performance bug whatever way you
look at it.
Whoever needs to eliminate these counters from fast-path, can wrap
them in a CONFIG_MAKE_SLABINFO_EXTREMELY_SLOW option.
Hi Cho and David,
On Mon, Aug 3, 2020 at 10:57 AM David Hildenbrand wrote:
>
> On 03.08.20 08:10, pullip@samsung.com wrote:
> > From: Cho KyongHo
> >
> > LPDDR5 introduces rank switch delay. If three successive DRAM accesses
> > happen and the first and the second ones access one rank and t
'resource_init' [-Wmissing-prototypes]
>
> Signed-off-by: Zong Li
Reviewed-by: Pekka Enberg
- Pekka
On Thu, Jul 16, 2020 at 10:11 AM Pekka Enberg wrote:
>
> On Thu, Jul 16, 2020 at 9:16 AM Zong Li wrote:
> >
> > Add header for missing prototype. Also, static keyword should be at
> > beginning of declaration.
> >
> > Signed-off-by: Zong Li
>
> Which p
On Thu, Jul 16, 2020 at 9:16 AM Zong Li wrote:
>
> Add header for missing prototype. Also, static keyword should be at
> beginning of declaration.
>
> Signed-off-by: Zong Li
Which prototype is missing?
- Pekka
Hi Palmer,
On Sat, Jul 11, 2020 at 10:43 PM Palmer Dabbelt wrote:
> This still slightly changes the accounting numbers, but I don't think it does
> so in a way that's meaningful enough to care about. SIGBUS is the only one
> that might happen frequently enough to notice, I doubt anyone cares abo
Hi!
(Sorry for the delay, I missed your response.)
On Fri, Jul 3, 2020 at 12:38 PM xunlei wrote:
>
> On 2020/7/2 PM 7:59, Pekka Enberg wrote:
> > On Thu, Jul 2, 2020 at 11:32 AM Xunlei Pang
> > wrote:
> >> The node list_lock in count_partial() spend long time iter
an success.
>
> Signed-off-by: Muchun Song
Reviewed-by: Pekka Enberg
e site at all.
> > > So we give out a reject list and simulate list in decode-insn.c.
On Sat, Jul 4, 2020 at 2:40 PM Pekka Enberg wrote:
> > Can you elaborate on what you mean by this? Why would you need a
> > single-step facility for kprobes? Is it for executing the instruct
On Sat, Jul 4, 2020 at 6:34 AM wrote:
> The patchset includes kprobe/uprobe support and some related fixups.
Nice!
On Sat, Jul 4, 2020 at 6:34 AM wrote:
> There is no single step exception in riscv ISA, so utilize ebreak to
> simulate. Some pc related instructions couldn't be executed out of li
On 03.07.20 08:34, Pekka Enberg wrote:
> > if (cpusets_enabled()) {
> > *alloc_mask |= __GFP_HARDWALL;
> > if (!in_interrupt() && !ac->nodemask)
> > ac->nodemask = &cpuset_curr
On Fri, Jul 3, 2020 at 9:14 AM Muchun Song wrote:
>
> When we are in the interrupt context, it is irrelevant to the
> current task context. If we use current task's mems_allowed, we
> can fail to alloc pages in the fast path and fall back to slow
> path memory allocation when the current node(whic
On Thu, Jul 2, 2020 at 11:32 AM Xunlei Pang wrote:
> The node list_lock in count_partial() spend long time iterating
> in case of large amount of partial page lists, which can cause
> a thundering herd effect on the list_lock contention, e.g. it causes
> business response-time jitters when accessing "/p
eck out of slab & slub, and call it from
> kmalloc_order() as well. In order to make the code clear, the warning
> message is put in one place.
>
> Signed-off-by: Long Li
Reviewed-by: Pekka Enberg
ting some days ago:
http://lists.infradead.org/pipermail/linux-riscv/2020-June/000775.html
However, your fix is obviously even better. For the generic and riscv parts:
Reviewed-by: Pekka Enberg
- Pekka
ctions where
> appropriate.
Very nice cleanup series to the page table code!
FWIW:
Reviewed-by: Pekka Enberg
: Pekka Enberg
Hi,
On 4/11/19 10:55 AM, Michal Hocko wrote:
Please please have it more rigorous than what happened when SLUB was
forced to become a default
This is the hard part.
Even if you are able to show that SLUB is as fast as SLAB for all the
benchmarks you run, there's bound to be that one workload
much hackiness into the existing
code for now.
On Thu, Mar 28, 2019 at 08:05:31AM +0200, Pekka Enberg wrote:
Unfortunately I am not that brave soul, but I'm wondering what the
complication here is? It shouldn't be too hard to teach calculate_sizes() in
SLUB about a new SLAB_KMEMLEAK fla
Hi,
On 27/03/2019 2.59, Qian Cai wrote:
Unless there is a brave soul to reimplement the kmemleak to embed it's
metadata into the tracked memory itself in a foreseeable future, this
provides a good balance between enabling kmemleak in a low-memory
situation and not introducing too much hackiness
below, so
untag the object before checking for a NULL object there.
Reviewed-by: Pekka Enberg
Hi,
On 01/02/2019 4.34, Christopher Lameter wrote:
On Fri, 1 Feb 2019, Tobin C. Harding wrote:
Currently when displaying /proc/slabinfo if any cache names are too long
then the output columns are not aligned. We could do something fancy to
get the maximum length of any cache name in the syste
For the series:
Reviewed-by: Pekka Enberg
On 25/01/2019 19.38, Matthew Wilcox wrote:
It's never appropriate to map a page allocated by SLAB into userspace.
A buggy device driver might try this, or an attacker might be able to
find a way to make it happen.
Signed-off-by: Matthew Wilcox
Acked-by: Pekka Enberg
A WARN_ON_ONCE()
On 29/12/2018 8.25, Peng Wang wrote:
new_slab_objects() will return immediately if freelist is not NULL.
if (freelist)
return freelist;
One more assignment operation could be avoided.
Signed-off-by: Peng Wang
Reviewed-by: Pekka Enberg
---
mm/slub.c | 3
schedule WORK_CPU_UNBOUND work on
wq_unbound_cpumask CPUs")
CC:
Cc: Joonsoo Kim
Cc: David Rientjes
Cc: Pekka Enberg
Cc: Christoph Lameter
Cc: Tejun Heo
Cc: Lai Jiangshan
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Acked-by: Pekka Enberg
---
mm/slab.c | 3 ++-
1 fi
On Wed, Aug 17, 2016 at 1:03 PM, Srividya Desireddy
wrote:
> This series of patches optimize the memory utilized by zswap for storing
> the swapped out pages.
>
> Zswap is a cache which compresses the pages that are being swapped out
> and stores them into a dynamically allocated RAM-based memory
lled page check is very minimal
>> when compared to the time saved by avoiding compression and allocation in
>> case of zero-filled pages. The load time of a zero-filled page is reduced
>> by 80% when compared to baseline.
On Wed, Aug 17, 2016 at 3:25 PM, Pekka Enberg wrote:
On Wed, Aug 17, 2016 at 1:18 PM, Srividya Desireddy
wrote:
> This patch adds a check in zswap_frontswap_store() to identify zero-filled
> page before compression of the page. If the page is a zero-filled page, set
> zswap_entry.zeroflag and skip the compression of the page and alloction
> of memor
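The zero-filled-page test described above can be sketched as a word-wise scan over the page before compression; an all-zero page is recorded with a flag instead of being compressed and allocated for. This is an illustrative standalone version, not the actual zswap_frontswap_store() code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Scan the page word by word; bail out at the first nonzero word.
 * zswap would set zswap_entry.zeroflag when this returns true and
 * skip compression and pool allocation entirely. */
static bool page_is_zero_filled(const unsigned long *page, size_t nwords)
{
	size_t i;

	for (i = 0; i < nwords; i++)
		if (page[i])
			return false;
	return true;
}
```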
On 01/22/2016 01:12 AM, David Rientjes wrote:
NACK to your patch as it is just covering up buggy code silently. The
problem needs to be addressed in change_memory_common() to return if
there is no size to change (numpages == 0). It's a two line fix to
that function.
So add a WARN_ON there to
On 9/15/15 8:50 PM, Denis Kirjanov wrote:
A good one candidate to return a boolean result
Signed-off-by: Denis Kirjanov
Reviewed-by: Pekka Enberg
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More
Use kvfree() instead of open-coding it.
Cc: Hariprasad S
Signed-off-by: Pekka Enberg
---
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
b/drivers/net/ethernet/chelsio/cxgb4
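The open-coded pattern this whole series of patches deletes checks how the buffer was allocated and then calls the matching free routine; in the kernel, kvfree() wraps exactly the "if (is_vmalloc_addr(p)) vfree(p); else kfree(p);" dispatch. A userspace analogue of collapsing such a dual-path free into one helper (all names here are hypothetical mocks, not kernel APIs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Userspace analogue of kvfree(): a tag stored in a small header
 * plays the role of the kernel's is_vmalloc_addr() check, and two
 * counters observe which free path ran. */
enum alloc_kind { KIND_KMALLOC, KIND_VMALLOC };

struct hdr { enum alloc_kind kind; };

static int kfree_calls, vfree_calls;

static void *mock_alloc(size_t n, enum alloc_kind kind)
{
	struct hdr *h = malloc(sizeof(*h) + n);

	if (!h)
		return NULL;
	h->kind = kind;
	return h + 1;		/* caller sees the payload only */
}

static void mock_kvfree(void *p)	/* single entry point, like kvfree() */
{
	struct hdr *h;

	if (!p)			/* kvfree(NULL) is a no-op too */
		return;
	h = (struct hdr *)p - 1;
	if (h->kind == KIND_VMALLOC)
		vfree_calls++;	/* kernel: vfree(p) */
	else
		kfree_calls++;	/* kernel: kfree(p) */
	free(h);
}
```

Every caller that used to open-code the branch can then shrink to one line, which is what each hunk in the series does.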
Use kvfree() instead of open-coding it.
Cc: David Airlie
Signed-off-by: Pekka Enberg
---
drivers/gpu/drm/nouveau/nouveau_gem.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c
b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 0e690bf
Use kvfree() instead of open-coding it.
Cc: Santosh Raspatur
Signed-off-by: Pekka Enberg
---
drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c
b/drivers/net/ethernet
Use kvfree() instead of open-coding it.
Cc: Hoang-Nam Nguyen
Cc: Christoph Raisch
Signed-off-by: Pekka Enberg
---
drivers/infiniband/hw/ehca/ipz_pt_fn.c | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/drivers/infiniband/hw/ehca/ipz_pt_fn.c
b/drivers/infiniband
Use kvfree() instead of open-coding it.
Cc: Dmitry Torokhov
Signed-off-by: Pekka Enberg
---
drivers/input/evdev.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index a18f41b..9d35499 100644
--- a/drivers/input/evdev.c
Use kvfree() instead of open-coding it.
Signed-off-by: Pekka Enberg
---
ipc/util.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index ff3323e..537a41c 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -467,10 +467,7 @@ void ipc_rcu_free(struct
Use kvfree() instead of open-coding it.
Cc: "Nicholas A. Bellinger"
Signed-off-by: Pekka Enberg
---
drivers/target/target_core_transport.c | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/drivers/target/target_core_transport.c
b/driv
Use kvfree() instead of open-coding it.
Cc: Alasdair Kergon
Cc: Mike Snitzer
Signed-off-by: Pekka Enberg
---
drivers/md/dm-stats.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
index f478a4c..492fe6a 100644
--- a/drivers
Use kvfree() instead of open-coding it.
Cc: "James E.J. Bottomley"
Signed-off-by: Pekka Enberg
---
drivers/scsi/cxgbi/libcxgbi.h | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/scsi/cxgbi/libcxgbi.h b/drivers/scsi/cxgbi/libcxgbi.h
index aba1af7..c2eb
Use kvfree() instead of open-coding it.
Cc: Kent Overstreet
Signed-off-by: Pekka Enberg
---
drivers/md/bcache/super.c | 10 ++
drivers/md/bcache/util.h | 10 ++
2 files changed, 4 insertions(+), 16 deletions(-)
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache
Use kvfree() instead of open-coding it.
Cc: Anton Altaparmakov
Signed-off-by: Pekka Enberg
---
fs/ntfs/malloc.h | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/fs/ntfs/malloc.h b/fs/ntfs/malloc.h
index a44b14c..ab172e5 100644
--- a/fs/ntfs/malloc.h
+++ b/fs/ntfs
Use kvfree() instead of open-coding it.
Cc: David Airlie
Signed-off-by: Pekka Enberg
---
include/drm/drm_mem_util.h | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/include/drm/drm_mem_util.h b/include/drm/drm_mem_util.h
index 19a2404..e42495a 100644
--- a/include/drm
Use kvfree() instead of open-coding it.
Signed-off-by: Pekka Enberg
---
kernel/relay.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/kernel/relay.c b/kernel/relay.c
index e9dbaeb..0b4570c 100644
--- a/kernel/relay.c
+++ b/kernel/relay.c
@@ -81,10 +81,7 @@ static struct
Use kvfree instead of open-coding it.
Cc: "Yan, Zheng"
Cc: Sage Weil
Signed-off-by: Pekka Enberg
---
net/ceph/pagevec.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/ceph/pagevec.c b/net/ceph/pagevec.c
index 096d914..d4f5f22 100644
--- a/net/ceph/pagev
below to inform user:
# perf kmem stat --page --caller
Not found page events. Have you run 'perf kmem record --page' before?
Acked-by: Pekka Enberg
Signed-off-by: Namhyung Kim
Thanks, applied.
I just found the messages a bit odd sounding, perhaps:
# perf kmem stat --pag
16] kswapd 2 initialised deferred memory in 1148ms
>
> Once booted the machine appears to work as normal. Boot times were measured
> from the time shutdown was called until ssh was available again. In the
> 64G case, the boot time savings are negligible. On the 1TB machine, the
>
sed, it does slab allocation analysis for backward compatibility.
Nice addition!
Acked-by: Pekka Enberg
for the whole series.
- Pekka
Hi Stephane,
On Tue, Mar 31, 2015 at 1:19 AM, Stephane Eranian wrote:
> +#define BASE_ENT(c, n) [c-'A']=n
> +static const char *base_types['Z'-'A' + 1]={
> + BASE_ENT('B', "byte" ),
> + BASE_ENT('C', "char" ),
> + BASE_ENT('D', "double" ),
> + BASE_ENT('F', "float" ),
> +
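The quoted patch builds a letter-indexed table of JVM base-type names with a designated-initializer macro. The same construct, reproduced standalone (only the entries visible above are filled in; the rest stay NULL):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* C99 designated initializers: each BASE_ENT expands to an
 * [index] = value entry, indexed by the type letter minus 'A'. */
#define BASE_ENT(c, n) [c - 'A'] = n
static const char *base_types['Z' - 'A' + 1] = {
	BASE_ENT('B', "byte"),
	BASE_ENT('C', "char"),
	BASE_ENT('D', "double"),
	BASE_ENT('F', "float"),
};
```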
On 2/26/15 1:02 PM, Alex Bennée wrote:
If you can have it all it would be nice to preserve buildability all
through your history for bisecting (and the moon on a stick please ;-)
Is the dependency on the kernel sources something that has been stable
over the projects history or something that's
Hi,
On 2/18/15 5:50 PM, Will Deacon wrote:
Thanks for doing this. Since it looks unlikely that kvmtool will ever be
merged back into the kernel tree, it makes sense to cut the dependency
in my opinion.
I am certainly OK with a standalone repository which preserves the
history. Will, would you
Sanders
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Pekka Enberg
On Wed, Feb 4, 2015 at 10:38 PM, Daniel Sanders
wrote:
> I don't believe the bug to be LLVM specific but GCC doesn't normally
> encounter the problem. I haven't been able to identify exactly what GCC is
> doing better (probably inlining) but it seems that GCC is managing to
> optimize to the p
On 2/3/15 3:37 PM, Daniel Sanders wrote:
This patch moves the initialization of the size_index table slightly
earlier so that the first few kmem_cache_node's can be safely allocated
when KMALLOC_MIN_SIZE is large.
The patch looks OK to me but how is this related to LLVM?
- Pekka
ked-by: Christoph Lameter
Acked-by: Pekka Enberg
It's the (1 << MAX_ORDER) optimization that confused me. Perhaps
add a comment there to make it more obvious?
I'm fine with the optimization:
Reviewed-by: Pekka Enberg
- Pekka
ndition that the current cpu has no
> percpu slab attached to it.
>
> Signed-off-by: Christoph Lameter
Reviewed-by: Pekka Enberg
e by setting the lowest bit in the freelist address
> and use the start address of a page if no other address is available
> for list termination.
>
> This will allow us to determine the page struct address from a
> freelist pointer in the future.
>
> Signed-off-by: Christoph La
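The scheme in the quoted patch marks the terminating entry of a freelist by setting the low bit of an (at least 2-byte aligned) address, so a terminator can be distinguished from a real object pointer. A standalone sketch of the tagging (helper names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Object pointers are at least 2-byte aligned, so bit 0 is free to
 * carry the "this is the list terminator" flag. */
static inline void *tag_terminator(void *addr)
{
	return (void *)((uintptr_t)addr | 1UL);
}

static inline int is_terminator(const void *p)
{
	return (int)((uintptr_t)p & 1UL);
}
```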
On Wed, Dec 10, 2014 at 6:30 PM, Christoph Lameter wrote:
> We can use virt_to_page there and only invoke the costly function if
> actually a node is specified and we have to check the NUMA locality.
>
> Increases the cost of allocating on a specific NUMA node but then that
> was never cheap since
On Wed, Dec 10, 2014 at 6:30 PM, Christoph Lameter wrote:
> Avoid using the page struct address on free by just doing an
> address comparison. That is easily doable now that the page address
> is available in the page struct and we already have the page struct
> address of the object to be freed c
> number of invocations of page_address(). Those are mostly only used for
> debugging though so this should have no performance benefit.
>
> Signed-off-by: Christoph Lameter
Reviewed-by: Pekka Enberg
On Wed, Dec 10, 2014 at 6:30 PM, Christoph Lameter wrote:
> Somehow the two branches in __slab_alloc do the same.
> Unify them.
>
> Signed-off-by: Christoph Lameter
Reviewed-by: Pekka Enberg
void *cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
void *obj;
int x;
- VM_BUG_ON(nodeid > num_online_nodes());
+ VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);
n = get_node(cachep, nodeid);
BUG_ON(!n);
Reviewed-by: Pekka Enberg
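The corrected VM_BUG_ON above rejects negative ids and ids at or beyond the static array bound; the old comparison against num_online_nodes() missed negative ids and assumed node ids are contiguous. A standalone version of the new predicate, with an illustrative bound standing in for MAX_NUMNODES:

```c
#include <assert.h>

#define MAX_NUMNODES_DEMO 64	/* stand-in for the kernel's MAX_NUMNODES */

/* Valid ids index a MAX_NUMNODES-sized array, so the check is a plain
 * half-open range test, independent of how many nodes are online. */
static int nodeid_valid(int nodeid)
{
	return nodeid >= 0 && nodeid < MAX_NUMNODES_DEMO;
}
```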
: Pekka Enberg
On 05/23/2014 11:16 PM, Mike Snitzer wrote:
On Tue, Mar 25 2014 at 2:07pm -0400,
Christoph Lameter wrote:
On Tue, 25 Mar 2014, Mike Snitzer wrote:
This patch still isn't upstream. Who should be shepherding it to Linus?
Pekka usually does that.
Acked-by: Christoph Lameter
This still has
Fabio Estevam
,Christoph Lameter , David Rientjes
, Pekka Enberg
Subject: [PATCH] mm: slub: Place count_partial() outside CONFIG_SLUB_DEBUG if
block
On Mon, 12 May 2014, Jim Davis wrote:
Building with the attached random configuration file,
mm/slub.c: In function ‘show_slab_objects’:
mm/s
On Tue, May 6, 2014 at 6:32 AM, Linus Torvalds
wrote:
> On Mon, May 5, 2014 at 8:25 PM, David Miller wrote:
>>
>>> Sam Ravnborg wrote:
There is a related patch in this area which I think is not yet applied.
See: https://lkml.org/lkml/2014/4/18/28
Maybe this is relat
Hello,
I'm seeing the following with v3.15-rc2:
$ ~/bin/perf report --gtk
GTK browser requested but could not find libperf-gtk.so
The library file is in $HOME/lib64 and perf attempts to look it up.
However, printing out dlerror() output shows the following:
[penberg@localhost hornet]$ ~/bin/per
ngful.
Signed-off-by: Sasha Levin
Vegard probably should take a closer look at this but:
Acked-by: Pekka Enberg
Hi Linus,
Please pull the latest SLAB tree from:
git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux.git slab/next
The biggest change is byte-sized freelist indices which reduces slab freelist
memory usage:
https://lkml.org/lkml/2013/12/2/64
Pekka
-
On 03/11/2014 10:30 AM, Joonsoo Kim wrote:
-8<-
From ff6fe77fb764ca5bf8705bf53d07d38e4111e84c Mon Sep 17 00:00:00 2001
From: Joonsoo Kim
Date: Tue, 11 Mar 2014 14:14:25 +0900
Subject: [PATCH] slab: remove kernel_map_pages() optimization in slab
poisoning
If CONFIG
On 02/20/2014 12:14 AM, David Rientjes wrote:
Kmemcheck should use the preferred interface for parsing command line
arguments, kstrto*(), rather than sscanf() itself. Use it appropriately.
Signed-off-by: David Rientjes
Acked-by: Pekka Enberg
Andrew, can you pick this up?
---
arch/x86
On 03/03/2014 03:14 AM, Namhyung Kim wrote:
The __hpp__color_fmt used in the gtk code can be replace by the
generic code with small change in print_fn callback.
This is a preparation to upcoming changes and no functional changes
intended.
Cc: Jiri Olsa
Cc: Pekka Enberg
Signed-off-by
On Tue, Feb 11, 2014 at 2:14 PM, Paul E. McKenney
wrote:
> In contrast, from kfree() to a kmalloc() returning some of the kfree()ed
> memory, I believe the kfree()/kmalloc() implementation must do any needed
> synchronization and ordering. But that is a different set of examples,
> for example, t
Hi Paul,
On Sun, Feb 9, 2014 at 4:00 AM, Paul E. McKenney
wrote:
> From what I can see, (A) works by accident, but is kind of useless because
> you allocate and free the memory without touching it. (B) and (C) are the
> lightest touches I could imagine, and as you say, both are bad. So I
> beli
Hi Paul,
On 01/02/2014 10:33 PM, Paul E. McKenney wrote:
From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
allocators would deal with the following sort of race:
A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
On Fri, Dec 13, 2013 at 9:03 AM, Joonsoo Kim wrote:
> Hello, Pekka.
>
> Below is updated patch for 5/5 in this series.
> Now I get acks from Christoph to all patches in this series.
> So, could you merge this patchset? :)
> If you want to resend wholeset with proper ack, I will do it
> with pleasu
Hi Linus,
Please pull the latest SLAB tree from:
git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux.git slab/next
It contains random bug fixes that have accumulated in my inbox over
the past few months.
Pekka
-->
The following changes since co