Re: Serious performance degradation in Linux 4.15

2018-02-16 Thread Mel Gorman
… failing because they rely on expect. You could try running with --no-monitor. -- Mel Gorman SUSE Labs
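
For anyone trying to reproduce: the monitors are the part of mmtests that depends on expect, so disabling them sidesteps the failure. A hypothetical invocation (the config name is a placeholder; --no-monitor is the flag mentioned above):

    ./run-mmtests.sh --no-monitor --config configs/config-network-netperf-unbound my-test-run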

Re: Page allocator bottleneck

2017-11-08 Thread Mel Gorman
… for such a move. I haven't posted the patches properly yet because mmotm is carrying too many patches as it is, and this patch indirectly depends on the contents. I also didn't write memory hot-remove support, which would be a requirement before merging. I hadn't intended to put further effort into it until I had some evidence the approach had promise. My own testing indicated it worked, but the drivers I was using for network tests did not allocate intensely enough to show any major gain/loss. -- Mel Gorman SUSE Labs

Re: Page allocator bottleneck

2017-11-03 Thread Mel Gorman
On Thu, Nov 02, 2017 at 07:21:09PM +0200, Tariq Toukan wrote: > > > On 18/09/2017 12:16 PM, Tariq Toukan wrote: > > > > > > On 15/09/2017 1:23 PM, Mel Gorman wrote: > > > On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote: > > > …

Re: Page allocator bottleneck

2017-09-15 Thread Mel Gorman
…it's currently stalled, as everyone who was previously involved is too busy. -- Mel Gorman SUSE Labs

Re: Heads-up: two regressions in v4.11-rc series

2017-04-20 Thread Mel Gorman
On Thu, Apr 20, 2017 at 11:00:42AM +0200, Jesper Dangaard Brouer wrote: > Hi Linus, > > Just wanted to give a heads-up on two regressions in 4.11-rc series. > > (1) page allocator optimization revert > > Mel Gorman and I have been playing with optimizing the page allocator

Re: [PATCH] Revert "mm, page_alloc: only use per-cpu allocator for irq-safe requests"

2017-04-15 Thread Mel Gorman
On Sat, Apr 15, 2017 at 09:28:33PM +0200, Jesper Dangaard Brouer wrote: > On Sat, 15 Apr 2017 15:53:50 +0100 > Mel Gorman <mgor...@techsingularity.net> wrote: > > > This reverts commit 374ad05ab64d696303cec5cc8ec3a65d457b7b1c. While the > > patch worked great for user

Re: [PATCH] mm, page_alloc: re-enable softirq use of per-cpu page allocator

2017-04-15 Thread Mel Gorman
On Fri, Apr 14, 2017 at 12:10:27PM +0200, Jesper Dangaard Brouer wrote: > On Mon, 10 Apr 2017 14:26:16 -0700 > Andrew Morton <a...@linux-foundation.org> wrote: > > > On Mon, 10 Apr 2017 16:08:21 +0100 Mel Gorman <mgor...@techsingularity.net> > > wrote: > >

[PATCH] Revert "mm, page_alloc: only use per-cpu allocator for irq-safe requests"

2017-04-15 Thread Mel Gorman
…as follows:

  Baseline v4.10.0  : 60316 Mbit/s
  Current 4.11.0-rc6: 47491 Mbit/s
  This patch        : 60662 Mbit/s

As this is a regression, I wish to revert to the noirq allocator for now and go back to the drawing board. Signed-off-by: Mel Gorman <mgor...@techsingularity.net> Reported-by: Tariq…

Re: [PATCH] mm, page_alloc: re-enable softirq use of per-cpu page allocator

2017-04-11 Thread Mel Gorman
…this again outside of an rc cycle. That would be preferable to releasing 4.11 with a known regression. -- Mel Gorman SUSE Labs

Re: Page allocator order-0 optimizations merged

2017-04-10 Thread Mel Gorman
> What type of allocations is the benchmark doing? In particular, what context is the microbenchmark allocating from? Lastly, how did you isolate the patch: did you test two specific commits in mainline, or are you comparing 4.10 with 4.11-rcX? -- Mel Gorman SUSE Labs
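
One way to answer that last question is to A/B the suspected commit directly rather than comparing whole releases. A sketch of the workflow, using the commit id cited elsewhere in this thread:

    git checkout 374ad05ab64d~1   # build, boot and benchmark without the patch
    git checkout 374ad05ab64d     # build, boot and benchmark with the patch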

[PATCH] mm, page_alloc: re-enable softirq use of per-cpu page allocator

2017-04-10 Thread Mel Gorman
…374ad05ab64d and try again at a later date to offset the irq enable/disable overhead. Fixes: 374ad05ab64d ("mm, page_alloc: only use per-cpu allocator for irq-safe requests") Signed-off-by: Jesper Dangaard Brouer <bro...@redhat.com> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
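
The distinction the patch leans on is that in_interrupt() is true in both hardirq and softirq context, while in_irq() is true only in hardirq context, so gating on the latter keeps softirq (NAPI) callers on the lockless per-cpu lists. A minimal sketch, not the exact hunk, with hypothetical helper names:

    static struct page *pcp_alloc(gfp_t gfp, unsigned int order)
    {
            /* Hardirq callers and irq-disabled sections cannot safely
             * touch the per-cpu lists, so fall back to the spinlocked
             * buddy path; everyone else takes the fast path. */
            if (in_irq() || irqs_disabled())
                    return alloc_slowpath_spinlocked(gfp, order); /* hypothetical */

            return alloc_from_pcp_list(gfp, order);               /* hypothetical */
    }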

Re: [Nbd] [PATCH 3/4] treewide: convert PF_MEMALLOC manipulations to new helpers

2017-04-06 Thread Mel Gorman
…network traffic that is not involved with swap. This means that under heavy swap load, it was perfectly possible for unrelated traffic to get dropped for quite some time. -- Mel Gorman SUSE Labs
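
For readers landing here from the archive: the helpers replace open-coded flag twiddling that could clobber a PF_MEMALLOC set by an outer caller. Roughly (a sketch of the before/after pattern with the helper names as later merged, not the exact treewide diff):

    /* Before: clears PF_MEMALLOC even if an outer caller had set it. */
    current->flags |= PF_MEMALLOC;
    /* ... allocation that must not recurse into reclaim ... */
    current->flags &= ~PF_MEMALLOC;

    /* After: save/restore semantics preserve any outer PF_MEMALLOC. */
    unsigned int noreclaim_flag = memalloc_noreclaim_save();
    /* ... allocation that must not recurse into reclaim ... */
    memalloc_noreclaim_restore(noreclaim_flag);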

Re: in_irq_or_nmi() and RFC patch

2017-04-05 Thread Mel Gorman
On Mon, Apr 03, 2017 at 01:05:06PM +0100, Mel Gorman wrote:
> > Started performance benchmarking:
> >  163 cycles = current state
> >  183 cycles = with BH disable + in_irq
> >  218 cycles = with BH disable + in_irq + irqs_disabled
> >
> > Thus, the performance…

Re: in_irq_or_nmi() and RFC patch

2017-04-03 Thread Mel Gorman
On Thu, Mar 30, 2017 at 05:07:08PM +0200, Jesper Dangaard Brouer wrote: > On Thu, 30 Mar 2017 14:04:36 +0100 > Mel Gorman <mgor...@techsingularity.net> wrote: > > > On Wed, Mar 29, 2017 at 09:44:41PM +0200, Jesper Dangaard Brouer wrote: > > > > Regardless…

Re: in_irq_or_nmi() and RFC patch

2017-03-30 Thread Mel Gorman
…softirq use of per-cpu page allocator > > From: Jesper Dangaard Brouer <bro...@redhat.com> > Other than the slightly misleading comments about NMI which could explain "this potentially misses an NMI but an NMI allocating pages is brain damaged", I don't see a problem. The irqs_disabled() check is subtle, but it's not earth-shattering, and it still helps the 100Gbit/s cases with the limited cycle budget to process packets. -- Mel Gorman SUSE Labs
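
For reference, the helper under discussion amounts to something like the sketch below (assuming the preempt_count() bit layout of the era; as the comment quip above notes, an NMI allocating pages is not a case anyone is trying to support):

    #include <linux/preempt.h>

    /* True in hardirq or NMI context only; softirq is deliberately
     * ignored so NAPI packet processing keeps the per-cpu fast path. */
    static inline bool in_irq_or_nmi(void)
    {
            return !!(preempt_count() & (HARDIRQ_MASK | NMI_MASK));
    }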

Re: Page allocator order-0 optimizations merged

2017-03-27 Thread Mel Gorman
If Tariq confirms it works for him as well, this looks like a far safer patch than having a dedicated IRQ-safe queue. Your concern about the BH scheduling point is valid, but if it proves to be a problem, there is still the option of a partial revert. -- Mel Gorman SUSE Labs

Re: Page allocator order-0 optimizations merged

2017-03-27 Thread Mel Gorman
…I currently have available. For 4.11, it's safer to revert and try again later, bearing in mind that softirqs are in the critical allocation path for some drivers. I'll prepare a patch. -- Mel Gorman SUSE Labs

Re: Page allocator order-0 optimizations merged

2017-03-23 Thread Mel Gorman
On Thu, Mar 23, 2017 at 02:43:47PM +0100, Jesper Dangaard Brouer wrote: > On Wed, 22 Mar 2017 23:40:04 + > Mel Gorman <mgor...@techsingularity.net> wrote: > > > On Wed, Mar 22, 2017 at 07:39:17PM +0200, Tariq Toukan wrote: > > > > > > This modification…

Re: Page allocator order-0 optimizations merged

2017-03-22 Thread Mel Gorman
…allocator, > and see a drastic degradation in BW, from 47.5 G in v4.10 to 31.4 G in > v4.11-rc1 (34% drop). > I noticed queued_spin_lock_slowpath occupies 62.87% of CPU time. Can you get the stack trace for the spin lock slowpath to confirm it's from IRQ context? -- Mel Gorman SUSE Labs
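
One way to capture that, assuming perf is available on the machine under load:

    perf record -a -g -- sleep 10    # sample all CPUs with call graphs during the test
    perf report --no-children        # see which paths enter queued_spin_lock_slowpath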

Re: [RFC PATCH 2/4] page_pool: basic implementation of page_pool

2017-01-09 Thread Mel Gorman
…ions and preferably with your reviewed-by or ack. I would then hand patch 4 over to you for addition to a series that added in-kernel callers to alloc_pages_bulk(), be that the generic pool recycle or modifying drivers. You are then free to modify the API to suit your needs without having to figure out…
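
To make the handoff concrete: a caller of a bulk interface might look like the sketch below. The name alloc_pages_bulk comes from this thread, but the prototype shown and the ring helpers are assumptions for illustration only:

    /* Hypothetical RX-ring refill via a bulk allocator. */
    static unsigned int rx_ring_refill(struct rx_ring *ring, unsigned int need)
    {
            struct page *pages[64];
            unsigned int got, i;

            got = alloc_pages_bulk(GFP_ATOMIC, min(need, 64U), pages); /* assumed prototype */
            for (i = 0; i < got; i++)
                    rx_ring_store(ring, pages[i]);                     /* hypothetical helper */

            return got;
    }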

Re: [PATCH net-next V2 05/11] net/mlx5e: Support RX multi-packet WQE (Striding RQ)

2016-04-19 Thread Mel Gorman
On Tue, Apr 19, 2016 at 06:25:32PM +0200, Jesper Dangaard Brouer wrote: > On Mon, 18 Apr 2016 07:17:13 -0700 > Eric Dumazet wrote: > > > On Mon, 2016-04-18 at 16:05 +0300, Saeed Mahameed wrote: > > > On Mon, Apr 18, 2016 at 3:48 PM, Eric Dumazet

Re: FlameGraph of mlx4 early drop with order-0 pages

2016-04-17 Thread Mel Gorman
On Sun, Apr 17, 2016 at 07:24:32PM +0200, Jesper Dangaard Brouer wrote: > On Sun, 17 Apr 2016 14:23:57 +0100 > Mel Gorman <mgor...@techsingularity.net> wrote: > > > > Signing off, heading for the plane soon... see you at MM-summit! > > > > Indeed and we…

Re: FlameGraph of mlx4 early drop with order-0 pages

2016-04-17 Thread Mel Gorman
…MM-summit! Indeed and we'll slap some sort of plan together. If there is a slot free, we might spend 15-30 minutes on it. Failing that, we'll grab a table somewhere. We'll see how far we can get before considering a page-recycle layer that preserves cache-coherent state. -- Mel Gorman SUSE Labs

Re: [PATCH 00/28] Optimise page alloc/free fast paths v3

2016-04-15 Thread Mel Gorman
On Fri, Apr 15, 2016 at 02:44:02PM +0200, Jesper Dangaard Brouer wrote: > On Fri, 15 Apr 2016 09:58:52 +0100 > Mel Gorman <mgor...@techsingularity.net> wrote: > > > There were no further responses to the last series but I kept going and > > added a few more small

Re: [Lsf] [Lsf-pc] [LSF/MM TOPIC] Generic page-pool recycle facility?

2016-04-11 Thread Mel Gorman
…consequences for users that require high-order pages for functional reasons. I tried something like that once (http://thread.gmane.org/gmane.linux.kernel/807683) but didn't pursue it to the end, as it was a small part of the problem I was dealing with at the time. It shouldn't be ruled out, but it should be considered a last resort. -- Mel Gorman SUSE Labs

Re: [Lsf-pc] [LSF/MM TOPIC] Generic page-pool recycle facility?

2016-04-11 Thread Mel Gorman
…and locking on each individual pool, which could offset some of the performance benefit of using the pool in the first place. > I actually think we are better off providing a generic page pool > interface the drivers can use. Instead of the situation where drivers > and subsystems invent their own, which does not cooperate in OOM > situations. > If it's offsetting DMA setup/teardown then I'd be a bit happier. If it's yet-another-page allocator to bypass the core allocator then I'm less happy. -- Mel Gorman SUSE Labs
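
The "offsetting DMA setup/teardown" point is the crux: a pool that keeps pages DMA-mapped lets a driver skip the map/unmap on every packet. A rough sketch of the two fast paths (the pool helpers are hypothetical):

    /* Without recycling: map and unmap around every receive. */
    dma_addr_t dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
    /* ... post the buffer to the NIC, receive, process ... */
    dma_unmap_page(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);
    put_page(page);

    /* With a recycling pool: the mapping travels with the page, so the
     * fast path is a pop/push on the pool. */
    page = pool_get(pool);          /* already DMA-mapped, hypothetical */
    /* ... receive, process ... */
    pool_put(pool, page);           /* mapping retained for reuse */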

Re: [Lsf-pc] [LSF/MM TOPIC] Generic page-pool recycle facility?

2016-04-11 Thread Mel Gorman
…Multiple instances private to drivers or tasks will require shrinker implementations, and the complexity may get unwieldy. -- Mel Gorman SUSE Labs
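
To give a sense of that complexity: each private pool would need roughly this much shrinker boilerplate just to cooperate with reclaim (a minimal sketch using the count/scan interface of the era; the pool accessors are hypothetical):

    static unsigned long pool_shrink_count(struct shrinker *s,
                                           struct shrink_control *sc)
    {
            return pool_size(my_pool);                      /* hypothetical */
    }

    static unsigned long pool_shrink_scan(struct shrinker *s,
                                          struct shrink_control *sc)
    {
            /* Hand up to sc->nr_to_scan pages back to the page allocator. */
            return pool_reclaim(my_pool, sc->nr_to_scan);   /* hypothetical */
    }

    static struct shrinker pool_shrinker = {
            .count_objects  = pool_shrink_count,
            .scan_objects   = pool_shrink_scan,
            .seeks          = DEFAULT_SEEKS,
    };

    /* register_shrinker(&pool_shrinker) at pool creation;
     * unregister_shrinker(&pool_shrinker) at teardown. */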

Re: [PATCH 5/8] mm: memcontrol: account socket memory on unified hierarchy

2015-11-12 Thread Mel Gorman
…don't actually know, and hopefully the bug will make it possible to determine whether upstream is really affected or not. There is also a link to this bug on the upstream project, so there is some chance they are aware: https://github.com/systemd/systemd/issues/1715 Bottom line, there is legitimate confusion over whether cg…

Re: [PATCH] mm: make page pfmemalloc check more robust

2015-08-13 Thread Mel Gorman
…Gorman mgor...@suse.de -- Mel Gorman SUSE Labs

Re: 2.6.23-rc6-mm1: Build failures on ppc64_defconfig

2007-09-24 Thread Mel Gorman
…[EMAIL PROTECTED] I've confirmed that this patch fixes the build error in question. Acked-by: Mel Gorman [EMAIL PROTECTED]
---
 drivers/net/spider_net.c | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)
diff -ruNp a/drivers/net/spider_net.c b/drivers/net…

Re: 2.6.23-rc6-mm1: Build failure on ppc64 drivers/net/ehea/ehea_main.c

2007-09-18 Thread Mel Gorman
My apologies for the repost; this should have gone to netdev and Dave Miller as well. On (18/09/07 17:20), Mel Gorman didst pronounce: Hi Andrew, PPC64 failed to build with the driver drivers/net/ehea with the following error:
  CC [M] drivers/net/ehea/ehea_main.o
drivers/net/ehea…

Re: 2.6.18-mm2 boot failure on x86-64

2006-10-17 Thread Mel Gorman
…a comment explaining why all those PAGE_ALIGN()s are in there, and include it in -mm. Does it fix a patch in -mm, or is it needed in mainline? The bug in my list was reported to be present in mainline [1]. Confirmed: this bug is present in 2.6.19-rc2. -- Mel Gorman Part-time Phd Student…

Re: 2.6.18-mm2 boot failure on x86-64

2006-10-09 Thread Mel Gorman
On Fri, 6 Oct 2006, Vivek Goyal wrote: On Fri, Oct 06, 2006 at 01:03:50PM -0500, Steve Fox wrote: On Fri, 2006-10-06 at 18:11 +0100, Mel Gorman wrote: On (06/10/06 11:36), Vivek Goyal didst pronounce: Where is bss placed in physical memory? I guess bss_start and bss_stop from System.map

Re: 2.6.19-rc1: known regressions (v2) - xfrm_register_mode

2006-10-09 Thread Mel Gorman
…which fix the boot issue, but it is not clear to me if either of these is an acceptable fix. I suggest taking Vivek's. -- Mel Gorman Part-time Phd Student Linux Technology Center University of Limerick IBM Dublin Software Lab

Re: 2.6.18-mm2 boot failure on x86-64

2006-10-05 Thread Mel Gorman
…] __free_pages_ok+0x64/0x247
[807cca72] free_all_bootmem_core+0xcc/0x1a9
[807ca08b] numa_free_all_bootmem+0x3b/0x77
[807c915e] mem_init+0x44/0x186
[807bc5f0] start_kernel+0x17b/0x207
[807bc168] _sinittext+0x168/0x16c
... lots more of those ...
-- Mel Gorman