Hi,
We use genalloc for managing certain pools of physical memory. genalloc
currently uses unsigned long for virtual addresses and phys_addr_t for
physical addresses. Our ARM LPAE systems have 64-bit physical addresses
but unsigned long is still 32 bits. Using gen_pool_add breaks with
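For illustration, a minimal sketch of the failure mode (the pool and addresses here are hypothetical, not from the original report): on 32-bit ARM LPAE, casting a 64-bit phys_addr_t down to gen_pool_add's unsigned long parameter silently drops the upper bits.

	/* hypothetical LPAE pool: physical base above the 4GB boundary */
	phys_addr_t base = 0x100000000ULL;
	struct gen_pool *pool = gen_pool_create(PAGE_SHIFT, -1);

	/* gen_pool_add() takes an unsigned long address (32 bits here),
	 * so the cast truncates base to 0 and the pool tracks the
	 * wrong physical range */
	gen_pool_add(pool, (unsigned long)base, SZ_1M, -1);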
On 3/19/2013 2:54 PM, Andrew Morton wrote:
On Thu, 14 Mar 2013 16:05:27 -0700 Laura Abbott lau...@codeaurora.org wrote:
Hi,
We use genalloc for managing certain pools of physical memory. genalloc
currently uses unsigned long for virtual addresses and phys_addr_t for
physical addresses. Our
the actual start_pfn to
be used elsewhere.
Change-Id: I13e2f53f50db294f38ec86138c17c6fe29f0ee82
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
include/linux/mmzone.h | 6 ++
mm/page_alloc.c | 4 +++-
2 files changed, 9 insertions(+), 1 deletions(-)
diff --git a/include/linux
On 2/18/2013 6:46 AM, Mel Gorman wrote:
On Sat, Feb 16, 2013 at 10:26:30AM -0800, Linus Torvalds wrote:
On Fri, Feb 15, 2013 at 3:44 AM, Ingo Molnar mi...@kernel.org wrote:
c060f943d092 may be related as your config does not have
CONFIG_SPARSEMEM defined.
Right, that's the commit causing the
Hi,
On 9/14/2012 6:41 PM, Hugh Dickins wrote:
On Tue, 11 Sep 2012, Laura Abbott wrote:
When a buffer is added to the LRU list, a reference is taken which is
not dropped until the buffer is evicted from the LRU list. This is the
correct behavior; however, this LRU reference will prevent
it. There is still the possibility that the buffer
could be added back on the list, but that indicates the buffer is
still in use and would probably have other 'in use' indicators to
prevent dropping.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
fs/buffer.c | 38
Hi,
I've been observing a high rate of failures with CMA allocations on my
ARM system. I've set up a test case with a 56MB CMA region that
essentially does the following:
total_failures = 0;
loop forever:
    loop_failure = 0;
    for (i = 0; i
On 8/29/2012 6:03 PM, Laura Abbott wrote:
My quick and dirty workaround for testing is to remove the __GFP_MOVABLE
flag from find_or_create_page but this seems significantly less than
optimal. Ideally, it seems like the buffers should be evicted from the
LRU when trying to drop (expand
On 12/6/2012 2:12 AM, Mel Gorman wrote:
On Tue, Dec 04, 2012 at 02:10:01PM -0800, Laura Abbott wrote:
The current calculation in pfn_to_bitidx assumes that
(pfn - zone->zone_start_pfn) >> pageblock_order will return the
same bit for all pfn in a pageblock. If zone_start_pfn is not
aligned
that calling {get,set}_pageblock_migratetype on a single
page will not set the migratetype for the full block. Fix this by
rounding down zone_start_pfn when doing the bitidx calculation.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
mm/page_alloc.c |2 +-
1 files changed, 1 insertions(+), 1
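For reference, the rounding fix described above amounts to roughly this in the !CONFIG_SPARSEMEM branch of pfn_to_bitidx (a paraphrase of the eventual mainline change, not the posted diff itself):

static inline int pfn_to_bitidx(struct zone *zone, unsigned long pfn)
{
	/* round down so every pfn in a pageblock maps to the same bits */
	pfn = pfn - round_down(zone->zone_start_pfn, pageblock_nr_pages);
	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
}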
On 12/6/2012 12:05 PM, Laura Abbott wrote:
The current calculation in pfn_to_bitidx assumes that
(pfn - zone->zone_start_pfn) >> pageblock_order will return the
same bit for all pfn in a pageblock. If zone_start_pfn is not
aligned to pageblock_nr_pages, this may not always be correct.
Consider
that calling {get,set}_pageblock_migratetype on a single
page will not set the migratetype for the full block. Fix this by
rounding down zone_start_pfn when doing the bitidx calculation.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
mm/page_alloc.c |2 +-
1 files changed, 1 insertions(+), 1
that calling {get,set}_pageblock_migratetype on a single
page will not set the migratetype for the full block. Fix this by
rounding down zone_start_pfn when doing the bitidx calculation.
Signed-off-by: Laura Abbott lau...@codeaurora.org
Acked-by: Mel Gorman mgor...@suse.de
---
mm/page_alloc.c |2
On 1/7/2013 2:31 PM, Andrew Morton wrote:
On Sat, 5 Jan 2013 11:28:31 -0800
Laura Abbott lau...@codeaurora.org wrote:
The current calculation in pfn_to_bitidx assumes that
(pfn - zone->zone_start_pfn) >> pageblock_order will return the
same bit for all pfn in a pageblock. If zone_start_pfn
On 8/26/2013 12:56 PM, Mark Brown wrote:
On Mon, Aug 12, 2013 at 09:51:59PM -0700, Laura Abbott wrote:
On 7/30/2013 12:05 PM, Kees Cook wrote:
- RO and W^X kernel page table protections (similar to x86's
DEBUG_RODATA and DEBUG_SET_MODULE_RONX; it's not clear to me how much
LPAE and PXN
appropriately.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
fs/buffer.c | 17 +++--
1 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 4d74335..b53f863 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1561,6 +1561,9 @@ void
Cc: Hirokazu Takata tak...@linux-m32r.org,
Cc: Michal Simek mon...@monstr.eu
Cc: David Howells dhowe...@redhat.com
Cc: Koichi Yasutake yasutake.koi...@jp.panasonic.com
Cc: Chen Liqin liqin.li...@gmail.com,
Cc: Lennox Wu lennox...@gmail.com
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch
...@arm.linux.org.uk
Cc: Tony Luck tony.l...@intel.com
Cc: Fenghua Yu fenghua...@intel.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Paul Mackerras pau...@samba.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/Kbuild
On 3/5/2014 5:38 AM, Fengguang Wu wrote:
Greetings,
I got the below dmesg and the first bad commit is
commit c060f943d0929f3e429c5d9522290584f6281d6e
Author: Laura Abbott lau...@codeaurora.org
AuthorDate: Fri Jan 11 14:31:51 2013 -0800
Commit: Linus Torvalds torva...@linux
(e.g. the last page needed is a higher order page), it
is not possible to detect that the page was skipped. The fix is to
bail out of the loop immediately if we are in strict mode. There's
no benefit to continuing anyway since we need all pages to be
isolated.
Signed-off-by: Laura Abbott lau
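A minimal sketch of the described bailout, paraphrasing the scan loop of isolate_freepages_block() (not the exact posted patch):

		if (!PageBuddy(page)) {
			/* strict callers need every page in the range
			 * isolated; there is no benefit to continuing */
			if (strict)
				break;
			continue;
		}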
On 3/6/2014 2:22 AM, Vlastimil Babka wrote:
On 03/06/2014 03:26 AM, Laura Abbott wrote:
We received several reports of bad page state when freeing CMA pages
previously allocated with alloc_contig_range:
<1>[ 1258.084111] BUG: Bad page state in process Binder_A pfn:63202
<1>[ 1258.089763
. Additionally, drop the error checking based on
nr_strict_required and just check the pfn ranges. This matches what
isolate_freepages_range does.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
v2: Addressed several comments by Vlastimil
mm/compaction.c | 20 +---
1
catalin.mari...@arm.com
Acked-by: Santosh Shilimkar santosh.shilim...@ti.com
Acked-by: Kukjin Kim kgene@samsung.com
Tested-by: Marek Szyprowski m.szyprow...@samsung.com
Tested-by: Leif Lindholm leif.lindh...@linaro.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/include/asm/mach/arch.h
-foundation.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
include/linux/memblock.h | 2 ++
mm/memblock.c | 5 +
2 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 1ef6636..8a20a51 100644
--- a/include
per Grant's suggestion.
v2: Implemented full commandline support for mem@addr
Laura Abbott (2):
mm/memblock: add memblock_get_current_limit
arm: Get rid of meminfo
arch/arm/include/asm/mach/arch.h |4 +-
arch/arm/include/asm/memblock.h |3 +-
arch/arm/include/asm
On 2/10/2014 9:28 AM, Courtney Cavin wrote:
On Mon, Feb 10, 2014 at 04:25:34AM +0100, Laura Abbott wrote:
On 2/6/2014 6:09 PM, Courtney Cavin wrote:
On Wed, Feb 05, 2014 at 01:02:31AM +0100, Laura Abbott wrote:
memblock is now fully integrated into the kernel and is the preferred
method
catalin.mari...@arm.com
Acked-by: Santosh Shilimkar santosh.shilim...@ti.com
Tested-by: Leif Lindholm leif.lindh...@linaro.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/include/asm/mach/arch.h |4 +-
arch/arm/include/asm/memblock.h |3 +-
arch/arm/include/asm
Apart from setting the limit of memblock, it's also useful to be able
to get the limit to avoid recalculating it every time. Add the function
to do so.
Acked-by: Catalin Marinas catalin.mari...@arm.com
Acked-by: Santosh Shilimkar santosh.shilim...@ti.com
Signed-off-by: Laura Abbott lau
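The accessor described above is the read-side counterpart of memblock_set_current_limit(); a sketch of what it amounts to:

phys_addr_t __init_memblock memblock_get_current_limit(void)
{
	return memblock.current_limit;
}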
for mem@addr
Laura Abbott (2):
mm/memblock: add memblock_get_current_limit
arm: Get rid of meminfo
arch/arm/include/asm/mach/arch.h |4 +-
arch/arm/include/asm/memblock.h |3 +-
arch/arm/include/asm/setup.h | 23 --
arch/arm/kernel/atags_parse.c
The stack canary for ARM is currently the same across reboots
due to lack of randomness early enough. Add ARCH_WANT_OF_RANDOMNESS
to allow devices to add whatever randomness they need.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/Kconfig |3 +++
arch/arm/kernel
is present, the
function is called. Note that this must happen on the flattened
devicetree to ensure the randomness gets added to the pool
early enough to make a difference.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
drivers/of/Kconfig|7 ++
drivers/of/Makefile
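A sketch of what such a flattened-devicetree hook could look like; the property name "rng-seed" and the call site are illustrative assumptions, not taken from the posted patch:

static int __init dt_scan_randomness(unsigned long node, const char *uname,
				     int depth, void *data)
{
	int len;
	const void *prop = of_get_flat_dt_prop(node, "rng-seed", &len);

	if (prop)
		add_device_randomness(prop, len);
	return 0;	/* keep scanning */
}

/* called early from arch setup, before the stack canary is initialized:
 *	of_scan_flat_dt(dt_scan_randomness, NULL);
 */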
-by: Laura Abbott lau...@codeaurora.org
---
init/main.c |9 -
1 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/init/main.c b/init/main.c
index eb03090..63d0596 100644
--- a/init/main.c
+++ b/init/main.c
@@ -489,11 +489,6 @@ asmlinkage void __init start_kernel(void
have would be reasonable if only seeded earlier.
Thanks,
Laura
Laura Abbott (3):
of: Add early randomness hooks
arm: Add ARCH_WANT_OF_RANDOMNESS
init: Move stack canary initialization after setup_arch
arch/arm/Kconfig |3 ++
arch/arm/kernel/vmlinux.lds.S |1
On 2/12/2014 3:51 AM, Arnd Bergmann wrote:
On Wednesday 12 February 2014, Laura Abbott wrote:
This is an RFC to seed the random number pool earlier when using devicetree.
The big issue this is trying to solve is the fact that the stack canary for
ARM tends to be the same across bootups
On 2/12/2014 8:49 AM, Grant Likely wrote:
On Tue, 11 Feb 2014 17:33:24 -0800, Laura Abbott lau...@codeaurora.org wrote:
The stack canary for ARM is currently the same across reboots
due to lack of randomness early enough. Add ARCH_WANT_OF_RANDOMNESS
to allow devices to add whatever randomness
On 2/12/2014 7:09 AM, Grygorii Strashko wrote:
Hi Laura,
On 02/11/2014 11:14 PM, Laura Abbott wrote:
memblock is now fully integrated into the kernel and is the preferred
method for tracking memory. Rather than reinvent the wheel with
meminfo, migrate to using memblock directly instead
On 3/12/2014 9:08 AM, Grygorii Strashko wrote:
On 03/12/2014 03:38 PM, Russell King - ARM Linux wrote:
On Wed, Mar 12, 2014 at 03:09:53PM +0200, Grygorii Strashko wrote:
Hi Russell,
On 03/12/2014 10:54 AM, Russell King - ARM Linux wrote:
On Tue, Feb 18, 2014 at 02:15:33PM -0800, Laura Abbott
that had a similar root cause.
I also managed to independently come up with a similar solution. This
has been tested somewhat but not in wide distribution.
Thanks,
Laura
-- 8< --
From 2aa000fbd4189d967c45c4f1ac5aee812ed83082 Mon Sep 17 00:00:00 2001
From: Laura Abbott lau...@codeaurora.org
Date
: David Howells dhowe...@redhat.com
Cc: Koichi Yasutake yasutake.koi...@jp.panasonic.com
Cc: Chen Liqin liqin.li...@gmail.com,
Cc: Lennox Wu lennox...@gmail.com
Acked-by: Jesper Nilsson jesper.nils...@axis.com
Acked-by: David Howells dhowe...@redhat.com
Signed-off-by: Laura Abbott lau
we take this through your tree
as suggested by Will?
Thanks,
Laura
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-March/240435.html
Laura Abbott (2):
lib/scatterlist: Make ARCH_HAS_SG_CHAIN an actual Kconfig
Cleanup useless architecture versions of scatterlist.h
arch/alpha
fenghua...@intel.com
Cc: Tony Luck tony.l...@intel.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Paul Mackerras pau...@samba.org
Cc: Martin Schwidefsky schwidef...@de.ibm.com
Cc: Heiko Carstens heiko.carst...@de.ibm.com
Cc: Andrew Morton a...@linux-foundation.org
Signed-off-by: Laura
On 10/29/2013 8:02 AM, Zhang Mingjun wrote:
It would move the cost to the CMA paths so I would complain less. Bear
in mind as well that forcing everything to go through free_one_page()
means that every free goes through the zone lock. I doubt you have any
machine large enough
On 10/29/2013 10:40 PM, Minchan Kim wrote:
We've had a similar patch in our tree for a year and a half because
of CMA migration failures, not just for a speedup in allocation
time. I understand that CMA is not the fast case or the general use
case but the problem is that the cost of CMA failure
On 10/28/2013 4:42 AM, zhang.ming...@linaro.org wrote:
From: Mingjun Zhang troy.zhangming...@linaro.org
free_contig_range frees CMA pages one by one, and MIGRATE_CMA pages will be
used as MIGRATE_MOVABLE pages in the pcp list; this causes unnecessary
migration when these pages are reused by
On 5/7/2014 4:17 AM, Dan Carpenter wrote:
These days most people don't use git to send patches so I have added a
section about that.
Do you mean most people *do* use git to send patches? Or most people don't
use e-mail clients?
Laura
--
Qualcomm Innovation Center, Inc. is a member of Code
1c2f87c22566cd057bc8cde10c37ae9da1a1bb76
Author: Laura Abbott lau...@codeaurora.org
Date: Sun Apr 13 22:54:58 2014 +0100
ARM: 8025/1: Get rid of meminfo
memblock is now fully integrated into the kernel and is the preferred
method for tracking memory. Rather than reinvent the wheel with
meminfo
On 6/11/2014 12:19 PM, Geert Uytterhoeven wrote:
Hi Laura,
On Wed, Jun 11, 2014 at 7:32 PM, Laura Abbott lau...@codeaurora.org wrote:
On 6/11/2014 4:40 AM, Geert Uytterhoeven wrote:
With current mainline, I get an early crash on r8a7791/koelsch:
BUG: Bad page state in process swapper pfn
);
		if (pageno >= cma->count) {
-			mutex_unlock(&cma_mutex);
+			mutex_unlock(&cma->lock);
			break;
		}
		bitmap_set(cma->bitmap, pageno, count);
Best regards
Acked-by: Laura Abbott lau...@codeaurora.org
Who actually ended up picking up that patch? I sent it out
On 5/12/2014 10:04 AM, Laura Abbott wrote:
I'm going to see about running this through tests internally for comparison.
Hopefully I'll get useful results in a day or so.
Thanks,
Laura
We ran some tests internally and found that for our purposes these patches made
the benchmarks worse vs
On 5/12/2014 7:37 AM, Pintu Kumar wrote:
Hi,
Thanks for the reply.
From: a...@arndb.de
To: linux-arm-ker...@lists.infradead.org
CC: pint...@outlook.com; linux...@kvack.org; linux-kernel@vger.kernel.org;
linaro-mm-...@lists.linaro.org
Subject: Re:
Hi,
On 5/7/2014 5:32 PM, Joonsoo Kim wrote:
CMA is introduced to provide physically contiguous pages at runtime.
For this purpose, it reserves memory at boot time. Although it reserves
memory, this reserved memory can be used for movable memory allocation
requests. This use case is beneficial to
the devicetree mailing list this time around
to get more input on this.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/249180.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/249528.html
Laura Abbott (5):
lib/genalloc.c: Add power aligned algorithm
lib
After allocating an address from a particular genpool,
there is no good way to verify if that address actually
belongs to a genpool. Introduce addr_in_gen_pool, which
will return whether an address plus size falls completely
within the genpool range.
Signed-off-by: Laura Abbott lau...@codeaurora.org
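A usage sketch (atomic_pool and the helper name are assumptions, mirroring how a DMA pool might use the new API):

static bool __free_from_pool(void *cpu_addr, size_t size)
{
	if (!addr_in_gen_pool(atomic_pool, (unsigned long)cpu_addr, size))
		return false;	/* address is not from this pool */
	gen_pool_free(atomic_pool, (unsigned long)cpu_addr, size);
	return true;
}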
Neither CMA nor noncoherent allocations support atomic allocations.
Add a dedicated atomic pool to support this.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm64/Kconfig | 1 +
arch/arm64/mm/dma-mapping.c | 155 +++-
2 files
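A sketch of how atomic allocations could then be served from the pre-populated pool (helper name and structure are illustrative, not the exact patch):

static void *__alloc_from_pool(size_t size, struct page **ret_page)
{
	unsigned long val = gen_pool_alloc(atomic_pool, size);

	if (!val)
		return NULL;
	*ret_page = phys_to_page(gen_pool_virt_to_phys(atomic_pool, val));
	return (void *)val;
}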
One of the more common algorithms used for allocation
is to align the start address of the allocation to
the order of size requested. Add this as an algorithm
option for genalloc.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
include/linux/genalloc.h | 4
lib/genalloc.c
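A usage sketch, assuming the algorithm is exported as gen_pool_first_fit_order_align as in this series:

	struct gen_pool *pool = gen_pool_create(PAGE_SHIFT, -1);

	/* e.g. a 64K request now comes back 64K-aligned */
	gen_pool_set_algo(pool, gen_pool_first_fit_order_align, NULL);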
ARM currently uses a bitmap for tracking atomic allocations.
genalloc already handles this type of memory pool allocation
so switch to using that instead.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/Kconfig | 1 +
arch/arm/mm/dma-mapping.c | 144
For architectures without coherent DMA, memory for DMA may
need to be remapped with coherent attributes. Factor out
the remapping code from arm and put it in a
common location to reduce code duplication.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/mm/dma-mapping.c
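A usage sketch based on the dma_common_pages_remap() signature quoted later in this thread; the VM_USERMAP flag and writecombine attributes are illustrative choices:

	void *vaddr = dma_common_pages_remap(pages, size, VM_USERMAP,
					     pgprot_writecombine(PAGE_KERNEL),
					     __builtin_return_address(0));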
Apart from setting the limit of memblock, it's also useful to be able
to get the limit to avoid recalculating it every time. Add the function
to do so.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
include/linux/memblock.h | 2 ++
mm/memblock.c | 5 +
2 files
memblock is now fully integrated into the kernel and is the preferred
method for tracking memory. Rather than reinvent the wheel with
meminfo, migrate to using memblock directly instead of meminfo as
an intermediate.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/include/asm/mach
Hi,
This is v2 of the patch to get rid of meminfo. This should cover all the
bases. Testing would be appreciated.
Thanks,
Laura
Laura Abbott (2):
mm/memblock: add memblock_get_current_limit
arm: Get rid of meminfo
arch/arm/include/asm/mach/arch.h |4 +-
arch/arm/include/asm
Thanks for this.
Laura, did you have additional patches adding
asm-generic/dma-contiguous.h?
No, asm-generic/dma-contiguous.h was an old file which was later
removed. I missed this when rebasing from my older branch to mainline.
You can have
Acked-by: Laura Abbott lau...@codeaurora.org
Thanks
On 1/3/2014 11:31 PM, Minchan Kim wrote:
Hello,
On Fri, Jan 03, 2014 at 02:08:52PM -0800, Laura Abbott wrote:
On 1/3/2014 10:23 AM, Dave Hansen wrote:
On 01/02/2014 01:53 PM, Laura Abbott wrote:
The goal here is to allow as much lowmem to be mapped as if the block of memory
was not reserved
On 1/8/2014 11:04 PM, Joonsoo Kim wrote:
CMA pages can be allocated not only by order-0 requests but also by
high-order requests, so we should account for free CMA pages in both
places.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
On 1/9/2014 7:50 PM, Mark Salter wrote:
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 987a7f5..038fb75 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -36,6 +36,7 @@
#include <asm/cpu.h>
#include <asm/cputype.h>
#include <asm/elf.h>
+#include
On 1/17/2014 6:32 AM, Mel Gorman wrote:
Developers occasionally try and optimise PFN scanners by using page_order
but miss that in general it requires zone->lock. This has happened twice for
compaction.c and rejected both times. This patch clarifies the documentation
of page_order and adds a
is_vmalloc_addr already does the range checking against VMALLOC_START and
VMALLOC_END. Use it.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/mm/iomap.c |3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/arch/arm/mm/iomap.c b/arch/arm/mm/iomap.c
index
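In sketch form, the conversion is (not the exact diff):

-	if ((unsigned long)addr >= VMALLOC_START &&
-	    (unsigned long)addr < VMALLOC_END)
+	if (is_vmalloc_addr(addr))
		iounmap(addr);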
There's no need to use VMALLOC_START and VMALLOC_END with
__get_vm_area when get_vm_area does the exact same thing.
Convert over.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
drivers/acpi/apei/ghes.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers
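In sketch form (get_vm_area() is simply the VMALLOC_START/VMALLOC_END specialization of __get_vm_area()):

-	area = __get_vm_area(size, VM_IOREMAP, VMALLOC_START, VMALLOC_END);
+	area = get_vm_area(size, VM_IOREMAP);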
. We were debating if this is just part of finding the
correct size for vmalloc or if there is a need for vmalloc_upper=
- People who like bike shedding more than I do can suggest better
config names if there is sufficient interest in the series.
Laura Abbott (11):
mce
vmalloc already provides a macro to calculate the total vmalloc size,
VMALLOC_TOTAL. Use it.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
drivers/md/dm-bufio.c |4 ++--
drivers/md/dm-stats.c |2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/md/dm
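VMALLOC_TOTAL is defined in include/linux/vmalloc.h as (VMALLOC_END - VMALLOC_START), so the conversion has this shape (sketch):

-	mem = (VMALLOC_END - VMALLOC_START) >> PAGE_SHIFT;
+	mem = VMALLOC_TOTAL >> PAGE_SHIFT;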
vmalloc already gives a useful macro to calculate the total vmalloc
size. Use it.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
mm/percpu.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index 0d10def..afbf352 100644
--- a/mm
There is no need to call __get_vm_area with VMALLOC_START and
VMALLOC_END when get_vm_area already does that. Call get_vm_area
directly.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
drivers/iommu/omap-iovmm.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git
Instead of manually checking the bounds of VMALLOC_START and
VMALLOC_END, just use is_vmalloc_addr. That's what the function
was designed for.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
.../staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c |3 +--
1 files changed, 1 insertions(+), 2
With CONFIG_INTERMIX_VMALLOC, we can no longer assume all vmalloc
is contained between VMALLOC_START and VMALLOC_END. For code that
relies on operating on the vmalloc space, use
for_each_potential_vmalloc_area to track each area separately.
Signed-off-by: Laura Abbott lau...@codeaurora.org
.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/kvm/mmu.c| 12
arch/arm/mm/ioremap.c | 12
arch/arm/mm/mmu.c |9 +++--
3 files changed, 23 insertions(+), 10 deletions(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index
consisting of vmalloc.
Signed-off-by: Laura Abbott lau...@codeaurora.org
Signed-off-by: Neeti Desai nee...@codeaurora.org
---
arch/arm/Kconfig |3 +
arch/arm/mm/init.c | 104
arch/arm/mm/mm.h |1 +
arch/arm/mm/mmu.c | 29
-by: Laura Abbott lau...@codeaurora.org
Signed-off-by: Neeti Desai nee...@codeaurora.org
---
include/linux/mm.h |6 ++
include/linux/vmalloc.h | 31
mm/Kconfig |6 ++
mm/vmalloc.c | 119 --
4 files changed
dma_contiguous_remap is only
really concerned with the remapping; introduce iotable_init_novmreserve
to allow remapping of pages without reserving the virtual address
in vmalloc space.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/include/asm/mach/map.h |2 ++
arch/arm/mm/dma
On 1/3/2014 10:23 AM, Dave Hansen wrote:
On 01/02/2014 01:53 PM, Laura Abbott wrote:
The goal here is to allow as much lowmem to be mapped as if the block of memory
was not reserved from the physical lowmem region. Previously, we had been
hacking up the direct virt->phys translation to ignore
On 7/30/2013 12:05 PM, Kees Cook wrote:
I'd like to propose the topic of catching up to x86 exploit
mitigations and security features, and potentially identifying
ARM-unique mitigations/features that could be implemented. Several
years ago, with Nicolas Pitre doing all the real work, I
On 3/13/2014 12:07 PM, Kees Cook wrote:
On Fri, Feb 21, 2014 at 2:09 PM, Kees Cook keesc...@chromium.org wrote:
On Fri, Feb 21, 2014 at 5:20 AM, Russell King - ARM Linux
li...@arm.linux.org.uk wrote:
On Fri, Feb 21, 2014 at 12:37:04PM +, Dave Martin wrote:
It would be good if someone
On 2/17/2014 4:34 AM, Dave Martin wrote:
On Fri, Feb 14, 2014 at 11:11:07AM -0800, Kees Cook wrote:
On Fri, Feb 14, 2014 at 8:22 AM, Dave Martin dave.mar...@arm.com wrote:
On Thu, Feb 13, 2014 at 05:04:10PM -0800, Kees Cook wrote:
Introduce CONFIG_DEBUG_RODATA to mostly match the x86 config,
On 5/1/2014 6:08 AM, Grant Likely wrote:
On Thu, 3 Apr 2014 10:04:58 -0700, Laura Abbott lau...@codeaurora.org
wrote:
memblock is now fully integrated into the kernel and is the preferred
method for tracking memory. Rather than reinvent the wheel with
meminfo, migrate to using memblock
...@ti.com
Acked-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
include/linux/memblock.h | 2 ++
mm/memblock.c | 5 +
2 files changed, 7 insertions(+)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 1ef6636
for mem@addr
Laura Abbott (2):
mm/memblock: add memblock_get_current_limit
arm: Get rid of meminfo
arch/arm/Kconfig | 5 --
arch/arm/boot/compressed/atags_to_fdt.c | 2 +
arch/arm/include/asm/mach/arch.h | 4 +-
arch/arm/include/asm/memblock.h
...@lakedaemon.net
Acked-by: Catalin Marinas catalin.mari...@arm.com
Acked-by: Santosh Shilimkar santosh.shilim...@ti.com
Tested-by: Leif Lindholm leif.lindh...@linaro.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm/Kconfig | 5 --
arch/arm/boot/compressed
The Kconfig for CONFIG_STRICT_DEVMEM is missing despite being
used in mmap.c. Add it.
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
arch/arm64/Kconfig.debug | 14 ++
1 file changed, 14 insertions(+)
diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index
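The missing entry would follow the shape of the option's Kconfig text on other architectures (a paraphrased sketch, not the exact posted hunk):

config STRICT_DEVMEM
	bool "Filter access to /dev/mem"
	help
	  If this option is disabled, you allow userspace (root) access to
	  all of memory, including kernel and userspace memory. Accidental
	  access to this is obviously disastrous, but specific access can
	  be used by people debugging the kernel.

	  If this option is switched on, the /dev/mem file only allows
	  userspace access to memory mapped peripherals.

	  If in doubt, say Y.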
On 4/9/2014 9:12 AM, Kees Cook wrote:
On Wed, Apr 9, 2014 at 2:02 AM, Steve Capper steve.cap...@linaro.org wrote:
Hi Kees,
On Mon, Apr 07, 2014 at 08:15:10PM -0700, Kees Cook wrote:
This introduces CONFIG_DEBUG_RODATA, making kernel text and rodata
read-only. Additionally, this splits rodata
and LPAE.
Thanks!
-Kees
You are welcome to add
Tested-by: Laura Abbott lau...@codeaurora.org
Laura
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
On 4/1/2014 3:04 AM, Alexander Holler wrote:
CONFIG_DEBUG_SET_MODULE_RONX sounds like a nice security feature, but
things might fail late (and unexpectedly) if module code is set to read-only
while CONFIG_JUMP_LABEL is enabled (e.g. modprobe bridge).
Avoid this.
Signed-off-by: Alexander
On 4/1/2014 3:34 PM, Kees Cook wrote:
On Mon, Mar 24, 2014 at 3:47 AM, Jon Medhurst (Tixy) t...@linaro.org wrote:
On Sun, 2014-03-23 at 16:21 -0600, Kees Cook wrote:
For this stage, how about I make this depends on KEXEC=n &&
KPROBES=n?
There's also ftrace (CONFIG_DYNAMIC_FTRACE I believe)
On 4/2/2014 3:10 PM, Rabin Vincent wrote:
Make ftrace work with CONFIG_DEBUG_SET_MODULE_RONX by making module text
writable around the place where ftrace does its work, like it is done on
x86 in the patch which introduced CONFIG_DEBUG_SET_MODULE_RONX,
84e1c6bb38eb (x86: Add RO/NX protection
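The referenced approach, in sketch form: open a write window around ftrace's code patching using the hooks ftrace already provides (mirroring the x86 commit named above):

int ftrace_arch_code_modify_prepare(void)
{
	set_all_modules_text_rw();
	return 0;
}

int ftrace_arch_code_modify_post_process(void)
{
	set_all_modules_text_ro();
	return 0;
}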
outside the
range of phys_addr_t. Add range checks for the base and size if
phys_addr_t is smaller than u64.
Reported-by: Geert Uytterhoeven ge...@linux-m68k.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
Geert, can you drop my other patch and give this a test to see if it fixes
your
outside the
range of phys_addr_t. Add range checks for the base and size if
phys_addr_t is smaller than u64.
Reported-by: Geert Uytterhoeven ge...@linux-m68k.org
Tested-by: Geert Uytterhoeven ge...@linux-m68k.org
Signed-off-by: Laura Abbott lau...@codeaurora.org
---
v2: Switched to sizeof
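A minimal sketch of the described check, assuming base and size arrive as u64 values parsed from the devicetree:

	if (sizeof(phys_addr_t) < sizeof(u64) &&
	    (base > ULONG_MAX || base + size > ULONG_MAX)) {
		pr_warn("Ignoring memory range 0x%llx-0x%llx outside phys_addr_t\n",
			(unsigned long long)base,
			(unsigned long long)(base + size));
		return;
	}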
it is installed
on the lru, so the bh on the lru must be freed before migrating the page.
This frees every bh on the lru. We could free only the bh of the migrating
page, but searching the lru costs more than invalidating the entire lru.
Signed-off-by: Gioh Kim gioh@lge.com
Acked-by: Laura Abbott lau
On 7/18/2014 6:53 AM, Catalin Marinas wrote:
On Wed, Jul 02, 2014 at 07:03:36PM +0100, Laura Abbott wrote:
+void *dma_common_pages_remap(struct page **pages, size_t size,
+			unsigned long vm_flags, pgprot_t prot,
+			const void *caller)
+{
+	struct
On 7/9/2014 3:33 PM, Olof Johansson wrote:
On Wed, Jul 2, 2014 at 11:03 AM, Laura Abbott lau...@codeaurora.org wrote:
After allocating an address from a particular genpool,
there is no good way to verify if that address actually
belongs to a genpool. Introduce addr_in_gen_pool which
On 7/4/2014 6:42 AM, Thierry Reding wrote:
On Wed, Jul 02, 2014 at 11:03:37AM -0700, Laura Abbott wrote:
[...]
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
[...]
index f5190ac..02a1939 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -26,6
On 7/4/2014 6:35 AM, Thierry Reding wrote:
On Wed, Jul 02, 2014 at 11:03:38AM -0700, Laura Abbott wrote:
[...]
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
[...]
+static struct gen_pool *atomic_pool;
+
+#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K
+static
On 7/18/2014 6:43 AM, Catalin Marinas wrote:
On Wed, Jul 02, 2014 at 07:03:38PM +0100, Laura Abbott wrote:
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 4164c5a..a2487f1 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
[...]
static
On 7/22/2014 2:03 PM, Catalin Marinas wrote:
On Tue, Jul 22, 2014 at 07:06:44PM +0100, Arnd Bergmann wrote:
[...]
+	if (!addr)
+		goto destroy_genpool;
+
+	memset(addr, 0, atomic_pool_size);
+	__dma_flush_range(addr, addr +