> + .align = PAGES_PER_SECTION,
> + .min_chunk = PAGES_PER_SECTION,
> + .max_threads = max_threads,
> + };
> +
> + padata_do_multithreaded(&job);
> + deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn,
> + epfn_align);
> }
> zone_empty:
> /* Sanity check that the next zone really is unpopulated */
So I am not a huge fan of using deferred_init_mem_pfn_range_in_zone
simply for the fact that we end up essentially discarding the i value
and will have to walk the list repeatedly. However, I don't think the
overhead will be that great, as I suspect there aren't going to be
systems with that many ranges. So this is probably fine.
Reviewed-by: Alexander Duyck
> The page count statistic probably has limited value,
> especially since a zone grows on demand so that the page count can vary,
> just remove it.
>
> The boot message now looks like
>
> node 0 deferred pages initialised in 97ms
>
> Signed-off-by: Daniel Jordan
> Suggested-by: Al
On Thu, May 21, 2020 at 8:37 AM Daniel Jordan
wrote:
>
> On Wed, May 20, 2020 at 06:29:32PM -0700, Alexander Duyck wrote:
> > On Wed, May 20, 2020 at 11:27 AM Daniel Jordan
> > > @@ -1814,16 +1815,44 @@ deferred_init_maxorder(u64 *i, struct zone *zone,
> >
On Wed, May 20, 2020 at 6:29 PM Alexander Duyck
wrote:
>
> On Wed, May 20, 2020 at 11:27 AM Daniel Jordan
> wrote:
> >
> > Deferred struct page init is a significant bottleneck in kernel boot.
> > Optimizing it maximizes availability for large-memory systems and a
On Wed, May 20, 2020 at 11:27 AM Daniel Jordan
wrote:
>
> Deferred struct page init is a significant bottleneck in kernel boot.
> Optimizing it maximizes availability for large-memory systems and allows
> spinning up short-lived VMs as needed without having to leave them
> running. It also
On 10/4/2018 10:39 PM, Stephen Rothwell wrote:
Hi Guenter,
On Thu, 4 Oct 2018 18:33:02 -0700 Guenter Roeck wrote:
Most of the boot failures are hopefully fixed with
https://lore.kernel.org/patchwork/patch/995254/
I have added that commit to linux-next today.
After getting over that I
On Thu, Oct 4, 2018 at 4:25 AM Robin Murphy wrote:
>
> On 04/10/18 00:48, Alexander Duyck wrote:
> > It appears that in commit 9d7a224b463e ("dma-direct: always allow dma mask
> > <= physiscal memory size") the logic of the test was changed from a "<"
OMMU.
Fixes: 9d7a224b463e ("dma-direct: always allow dma mask <= physiscal memory size")
Signed-off-by: Alexander Duyck
---
kernel/dma/direct.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 5a0806b5351b.
On Thu, Sep 27, 2018 at 3:38 PM Christoph Hellwig wrote:
>
> This way an architecture with less than 4G of RAM can support dma_mask
> smaller than 32-bit without a ZONE_DMA. Apparently that is a common
> case on powerpc.
>
> Signed-off-by: Christoph Hellwig
> Reviewed-by: Robin Murphy
> ---
>
On Wed, Sep 26, 2018 at 11:32 AM Mike Rapoport wrote:
>
> On Wed, Sep 26, 2018 at 09:58:41AM -0700, Alexander Duyck wrote:
> > On Fri, Sep 14, 2018 at 5:11 AM Mike Rapoport
> > wrote:
> > >
> > > All architectures use memblock for early memory
On Fri, Sep 14, 2018 at 5:11 AM Mike Rapoport wrote:
>
> All architectures use memblock for early memory management. There is no need
> for the CONFIG_HAVE_MEMBLOCK configuration option.
>
> Signed-off-by: Mike Rapoport
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index
On Tue, Mar 27, 2018 at 2:35 PM, Benjamin Herrenschmidt
wrote:
> On Tue, 2018-03-27 at 10:46 -0400, Sinan Kaya wrote:
>> combined buffers.
>>
>> Alex:
>> "Don't bother. I can tell you right now that for x86 you have to have a
>> wmb() before the writel().
>
> No, this
gt;>>> I was always puzzled by this: The intention of _relaxed() on ARM
>> >>>> (where it originates) was to skip the barrier that serializes DMA
>> >>>> with MMIO, not to skip the serialization between MMIO and locks.
>> >>>
>> >
On Fri, Jun 16, 2017 at 11:10 AM, Christoph Hellwig wrote:
> DMA_ERROR_CODE is not a public API and will go away. Instead properly
> unwind based on the loop counter.
>
> Signed-off-by: Christoph Hellwig
> Acked-by: Dave Jiang
> Acked-By: Vinod
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Alexander Duyck <alexander.h.du...@intel.com>
---
arch/powerpc/kernel/dma.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/ke