Without that, regulators are left in the mode last set by the bootloader or
by the kernel the device was rebooted from. This leads to various problems
like non-working peripherals.
Signed-off-by: Ivaylo Dimitrov
---
arch/arm/boot/dts/omap3-n900.dts | 9 +
1 file changed, 9 insertions(+)
Hi all,
We are pleased to announce another update of Intel GVT-g for KVM.
Intel GVT-g for KVM (a.k.a. KVMGT) is a full GPU virtualization solution with
mediated pass-through, starting from 4th generation Intel Core(TM) processors
with Intel Graphics processors. A virtual GPU instance is mainta
According to the TRM, the SCM CONTROL_CSIRXFE register is at offset 0x6c
Signed-off-by: Ivaylo Dimitrov
---
arch/arm/boot/dts/omap34xx.dtsi | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/boot/dts/omap34xx.dtsi b/arch/arm/boot/dts/omap34xx.dtsi
index 5cdba1f..e446562 100
On Fri, Apr 15, 2016 at 05:37:32PM -0600, Jason Gunthorpe wrote:
> On Sat, Apr 16, 2016 at 12:23:28AM +0300, Leon Romanovsky wrote:
>
> > Intel as usual decided to do it in their way and the result is presented
> > on this mailing list.
>
> Dennis was pretty clear he was going to send the patches
On Fri, Apr 15, 2016 at 07:28:01PM -0400, Ira Weiny wrote:
> On Sat, Apr 16, 2016 at 12:23:28AM +0300, Leon Romanovsky wrote:
> Do you have a technical reason that this patch series does not fix the
> write/writev issue brought up by Al?
Sure, I truly believe that we can do a common API in a few months
On Friday 15 April 2016 09:18 PM, Alexey Brodkin wrote:
> And now the question is how to force DRM subsystem or just that driver
> to use whatever predefined (say via device tree) location in memory
> for data buffer allocation.
It seems this is pretty easy to do with DT reserved-memory binding.
Hi
Il giorno 16/apr/2016, alle ore 00:45, Tejun Heo ha scritto:
> Hello, Paolo.
>
> On Sat, Apr 16, 2016 at 12:08:44AM +0200, Paolo Valente wrote:
>> Maybe the source of confusion is the fact that a simple sector-based,
>> proportional share scheduler always distributes total bandwidth
>> accor
Hi Al,
[auto build test WARNING on pm/linux-next]
[also build test WARNING on v4.6-rc3 next-20160415]
[if your patch is applied to the wrong git tree, please drop us a note to help
improving the system]
url:
https://github.com/0day-ci/linux/commits/Al-Stone/Force-cppc_cpufreq-to-report
We've calculated @len to be the bytes we need for '/..' entries from
@kn_from to the common ancestor, and calculated @nlen to be the extra
bytes we need to get from the common ancestor to @kn_to. We use them
as such at the end. But in the loop copying the actual entries, we
overwrite @nlen. Use
On Sat, 2016-04-16 at 11:55 +0900, Sergey Senozhatsky wrote:
> On (04/08/16 02:31), Sergey Senozhatsky wrote:
> >
> > Hello,
> >
> > This patch set makes printk() completely asynchronous: new messages
> > are getting appended to the kernel printk buffer, but instead of 'direct'
> > printing the ac
On Fri, Apr 15, 2016 at 03:45:34PM +0300, Andy Shevchenko wrote:
> On Wed, 2016-04-13 at 17:40 +0100, Mark Brown wrote:
> > On Wed, Apr 13, 2016 at 07:21:53PM +0300, Andy Shevchenko wrote:
> > >
> > > On Wed, 2016-04-13 at 21:47 +0530, Vinod Koul wrote:
> > > >
> > > > On Wed, Apr 13, 2016 at 07:
On Thu, Apr 14, 2016 at 08:23:26PM +0200, Robert Jarzmik wrote:
> Vinod Koul writes:
>
> > On Mon, Mar 28, 2016 at 11:32:24PM +0200, Robert Jarzmik wrote:
> >> In the current state, upon bus error the driver will spin endlessly,
> >> relaunching the last tx, which will fail again and again :
> >>
On Thu, 14 Apr 2016 10:48:29 -0600 Toshi Kani wrote:
> When CONFIG_FS_DAX_PMD is set, DAX supports mmap() using pmd page
> size. This feature relies on both mmap virtual address and FS
> block (i.e. physical address) to be aligned by the pmd page size.
> Users can use mkfs options to specify FS
On Thu, 14 Apr 2016 13:39:22 -0400 Matthew Wilcox wrote:
> On Tue, Apr 05, 2016 at 01:55:23PM -0700, Hugh Dickins wrote:
> > zap_pmd_range()'s CONFIG_DEBUG_VM !rwsem_is_locked(&mmap_sem) BUG()
> > will be invalid with huge pagecache, in whatever way it is implemented:
> > truncation of a hugely-m
Julian Calaby writes:
> Hi Kalle,
>
> On Sat, Apr 16, 2016 at 4:25 AM, Kalle Valo wrote:
>> Byeoungwook Kim writes:
>>
>>> rtl_*_delay() functions reused the same code around the addr variable.
>>> So I converted that code to use rtl_addr_delay().
>>>
>>> Signed-off-by: Byeoun
On Tue, 2016-04-12 at 23:13 +0200, Wolfram Sang wrote:
> Hi,
>
> thanks for the submission!
>
> On Tue, Mar 08, 2016 at 02:23:51AM +0800, Liguo Zhang wrote:
> > Signal complete() in the i2c irq handler after one transfer is done;
> > wait_for_completion_timeout() will then return. This procedure
On Fri, Apr 15, 2016 at 09:02:06PM -0600, Andreas Dilger wrote:
> Wouldn't it make sense to have helpers like "inode_read_lock(inode)" or
> similar,
> so that it is consistent with other parts of the code and easier to find?
> It's a bit strange to have the filesystems use "inode_lock()" and some
From: Oleg Drokin
I noticed that the logic in the fadvise64_64 syscall is incorrect
for partial pages. While the first page of the region is correctly skipped
if it is partial, the last page of the region is mistakenly discarded.
This leads to problems for applications that read data in
non-page-aligned
On Fri, Apr 15, 2016 at 09:02:02PM -0600, Andreas Dilger wrote:
> Looks very interesting, and long awaited. How do you see the parallel
> operations moving forward? Staying as lookup only, or moving on to parallel
> modifications as well?
lookup + readdir. Not even atomic_open at this point, a
On Apr 15, 2016, at 6:52 PM, Al Viro wrote:
>
> The thing appears to be working. It's in vfs.git#work.lookups; the
> last 5 commits are the infrastructure (fs/namei.c and fs/dcache.c; no changes
> in fs/*/*) + actual switch to rwsem.
>
> The missing bits: down_write_killable() (ther
On Apr 15, 2016, at 6:55 PM, Al Viro wrote:
>
> From: Al Viro
>
> ta-da!
>
> The main issue is the lack of down_write_killable(), so the places
> like readdir.c switched to plain inode_lock(); once killable
> variants of rwsem primitives appear, that'll be dealt with.
>
> lockdep side also mi
On (04/08/16 02:31), Sergey Senozhatsky wrote:
> Hello,
>
> This patch set makes printk() completely asynchronous: new messages
> are getting appended to the kernel printk buffer, but instead of 'direct'
> printing the actual print job is performed by a dedicated kthread.
> This has the advantage t
Hi Kalle,
On Sat, Apr 16, 2016 at 4:25 AM, Kalle Valo wrote:
> Byeoungwook Kim writes:
>
>> rtl_*_delay() functions reused the same code around the addr variable.
>> So I converted that code to use rtl_addr_delay().
>>
>> Signed-off-by: Byeoungwook Kim
>> Reviewed-by: Julian C
> - blk_queue_max_discard_sectors(brd->brd_queue, UINT_MAX);
> + blk_queue_max_discard_sectors(brd->brd_queue, UINT_MAX >> 9);
Shouldn't we fix the issue by capping to UINT_MAX >> 9 inside
blk_queue_max_discard_sectors? That way we'll prevent against having
issues like this in any other d
Signed-off-by: Christoph Hellwig
---
include/linux/interrupt.h | 10 +
kernel/irq/Makefile | 1 +
kernel/irq/affinity.c | 54 +++
3 files changed, 65 insertions(+)
create mode 100644 kernel/irq/affinity.c
diff --git a/include/linux/
Allow drivers to pass in the affinity mask from the generic interrupt
layer, and spread queues based on that. If the driver doesn't pass in
a mask we will create it using the genirq helper. As this helper was
modelled after the blk-mq algorithm there should be no change in behavior.
XXX: Just as
Set the affinity_mask before allocating vectors. And for now we also
need a little hack after allocation, hopefully someone smarter than me
can move this into the core code.
Signed-off-by: Christoph Hellwig
---
drivers/pci/irq.c | 16 +++-
1 file changed, 15 insertions(+), 1 deletio
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 88 +
1 file changed, 23 insertions(+), 65 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index ff3c8d7..82730bf 100644
--- a/drivers/nvme/host/pci.c
+++ b/
From: Thomas Gleixner
This allows optimized interrupt allocation and affinity settings for the
MSI-X interrupts of multiqueue devices.
If the device holds a pointer to a cpumask, then this mask is used to:
- allocate the interrupt descriptor on the proper nodes
- set the default interrupt affini
From: Thomas Gleixner
This optional cpumask will be used by the irq core code to optimize interrupt
allocation and affinity setup for multiqueue devices.
Signed-off-by: Thomas Gleixner
---
include/linux/device.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/linux/device.h b/i
Signed-off-by: Christoph Hellwig
---
include/linux/interrupt.h | 2 ++
kernel/irq/manage.c | 14 ++
2 files changed, 16 insertions(+)
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 67bc1e1f..ae345da 100644
--- a/include/linux/interrupt.h
+++ b/include
This series enhances the irq and PCI code to allow spreading around MSI and
MSI-X vectors so that they have per-cpu affinity if possible, or at least
per-node. For that it takes the algorithm from blk-mq, moves it to
a common place, and makes it available through a vastly simplified PCI
interrupt
Hide all the MSI-X vs MSI vs legacy bullshit, and provide an array of
interrupt vectors in the pci_dev structure, and ensure we get proper
interrupt affinity by default.
Signed-off-by: Christoph Hellwig
---
drivers/pci/irq.c | 89 -
drivers/p
On slow platforms with unreliable TSC, such as QEMU emulated machines,
it is possible for the kernel to request the next event in the past. In
that case, in the current implementation of xen_vcpuop_clockevent, we
simply return -ETIME. To be precise, Xen returns -ETIME and we pass
it on. However
Not sure if this is the right place to post. If it is not please direct me to
where I should go.
I am running x86_64 kernel 4.4.6 on an Intel Xeon D system. This is an SOC
system that includes dual 10G ethernet using the ixgbe driver.
I have also tested this on kernels 4.2 through 4.6rc3 with t
On 01/29, Stefan Agner wrote:
> If a clock gets enabled early during boot time, it can lead to a PLL
> startup. The wait_lock function makes sure that the PLL is really
> started up before it gets used. However, the function sleeps which
> leads to scheduling and an error:
> bad: scheduling from t
From: Al Viro
Signed-off-by: Al Viro
---
fs/orangefs/file.c| 4 ++--
fs/orangefs/orangefs-kernel.h | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/orangefs/file.c b/fs/orangefs/file.c
index ae92795..491e82c 100644
--- a/fs/orangefs/file.c
+++ b/fs/orangef
From: Al Viro
... and explain the non-obvious logics in case when lookup yields
a different dentry.
Signed-off-by: Al Viro
---
fs/exportfs/expfs.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
index c46f1a1..402c5ca 100
From: Al Viro
Signed-off-by: Al Viro
---
fs/dcache.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/fs/dcache.c b/fs/dcache.c
index e9de4d9..33cad8a 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2363,11 +2363,19 @@ EXPORT_SYMBOL(d_rehash);
static inline
From: Al Viro
Signed-off-by: Al Viro
---
fs/overlayfs/super.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
index 14cab38..4c26225 100644
--- a/fs/overlayfs/super.c
+++ b/fs/overlayfs/super.c
@@ -378,9 +378,7 @@ static inline
From: Al Viro
Signed-off-by: Al Viro
---
fs/namei.c | 23 +--
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/fs/namei.c b/fs/namei.c
index c0d551f..6fb33a7 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1603,8 +1603,15 @@ static struct dentry *lookup_slow(const
From: Al Viro
... and have it use inode_lock()
Signed-off-by: Al Viro
---
fs/reiserfs/ioctl.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/reiserfs/ioctl.c b/fs/reiserfs/ioctl.c
index 036a1fc..f49afe7 100644
--- a/fs/reiserfs/ioctl.c
+++ b/fs/reiserfs/ioctl.c
@@
From: Al Viro
ta-da!
The main issue is the lack of down_write_killable(), so the places
like readdir.c switched to plain inode_lock(); once killable
variants of rwsem primitives appear, that'll be dealt with.
lockdep side also might need more work
Signed-off-by: Al Viro
---
fs/btrfs/ioctl.c
From: Al Viro
marked as such when (would be) parallel lookup is about to pass them
to actual ->lookup(); unmarked when
* __d_add() is about to make it hashed, positive or not.
* __d_move() (from d_splice_alias(), directly or via
__d_unalias()) puts a preexisting dentry in its plac
From: Al Viro
If we *do* run into an in-lookup match, we need to wait for it to
cease being in-lookup. Fortunately, we do have unused space in
in-lookup dentries - d_lru is never looked at until it stops being
in-lookup.
So we can stash a pointer to wait_queue_head from stack frame of
the calle
From: Al Viro
We'll need to verify that there's neither a hashed nor in-lookup
dentry with desired parent/name before adding to in-lookup set.
One possible solution would be to hold the parent's ->d_lock through
both checks, but while the in-lookup set is relatively small at any
time, dcache is
From: Al Viro
We will need to be able to check if there is an in-lookup
dentry with matching parent/name. Right now it's impossible,
but as soon as we start locking directories shared, such beasts
will appear.
Add a secondary hash for locating those. Hash chains go through
the same space where d_a
From: Al Viro
Signed-off-by: Al Viro
---
fs/kernfs/mount.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
index b67dbcc..e006d30 100644
--- a/fs/kernfs/mount.c
+++ b/fs/kernfs/mount.c
@@ -120,9 +120,8 @@ struct dentry *kernfs_node_
From: Al Viro
Signed-off-by: Al Viro
---
fs/dcache.c | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/fs/dcache.c b/fs/dcache.c
index 32ceae3..e9de4d9 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1772,11 +1772,11 @@ void d_instantiate(struct dentry *entry, s
From: Al Viro
grab a reference to dentry we'd got the sucker from, and return
that dentry via *wait, rather than just returning the address of
->i_mutex.
Signed-off-by: Al Viro
---
fs/configfs/dir.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/fs/config
From: Al Viro
Signed-off-by: Al Viro
---
fs/ocfs2/aops.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 1581240..f048a33 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -2311,7 +2311,7 @@ static void ocfs2_dio_end_io_write(s
> -----Original Message-----
> From: KY Srinivasan
> Sent: Friday, April 15, 2016 9:01 AM
> To: 'Alexander Duyck'
> Cc: David Miller ; Netdev
> ; linux-kernel@vger.kernel.org;
> de...@linuxdriverproject.org; o...@aepfle.de; Robo Bot
> ; Jason Wang ;
> e...@mellanox.com; ja...@mellanox.com; yevge
The thing appears to be working. It's in vfs.git#work.lookups; the
last 5 commits are the infrastructure (fs/namei.c and fs/dcache.c; no changes
in fs/*/*) + actual switch to rwsem.
The missing bits: down_write_killable() (there had been a series
posted introducing just that; for
On 03/11, Peter Ujfalusi wrote:
> of_find_node_by_name() will call of_node_put() on the node so we need to
> get it first to avoid warnings.
> The cfg_node needs to be put after we have finished processing the
> properties.
>
> Signed-off-by: Peter Ujfalusi
> ---
Applied to clk-next
--
Qualcom
On 03/07, Franklin S Cooper Jr wrote:
> Add tbclk to the pwm nodes. This ensures that the ehrpwm driver has access
> to the time-based clk.
>
> Do not remove similar entries for ehrpwm node. Later patches will switch
> from using ehrpwm node name to pwm. But to maintain ABI compatibility we
> shou
On 04/15, Jiancheng Xue wrote:
> Hi,
>
> On 2016/3/31 16:10, Jiancheng Xue wrote:
> > From: Jiancheng Xue
> >
> > The CRG(Clock and Reset Generator) block provides clock
> > and reset signals for other modules in hi3519 soc.
> >
> > Signed-off-by: Jiancheng Xue
> > Acked-by: Rob Herring
> > A
On 03/31, Jiancheng Xue wrote:
> diff --git a/drivers/clk/hisilicon/clk-hi3519.c
> b/drivers/clk/hisilicon/clk-hi3519.c
> new file mode 100644
> index 000..ee9df82
> --- /dev/null
> +++ b/drivers/clk/hisilicon/clk-hi3519.c
> @@ -0,0 +1,129 @@
> +/*
> + * Hi3519 Clock Driver
> + *
> + * Copyrig
This is probably the last update before the mm summit. Main focus is on
khugepaged stability.
khugepaged is in more reasonable shape now. I missed quite a few corner
cases on first try. I run this version via LTP, trinity and syzkaller
without crashes so far.
The patchset is on top of v4.6-rc3 p
The idea borrowed from Peter's patch from patchset on speculative page
faults[1]:
Instead of passing around the endless list of function arguments,
replace the lot with a single structure so we can change context
without endless function signature changes.
The changes are mostly mechanical with e
On 04/04, Stefan Agner wrote:
> Similar to an earlier fix for the SAI clocks, the DCU clock hierarchy
> mixes the bus clock with the display controllers pixel clock. Tests
> have shown that the gates in CCM_CCGR3/9 registers do not control
> the DCU pixel clock, but only the register access clock (
On Fri, Apr 15, 2016 at 05:19:27PM -0700, Shi, Yang wrote:
> On 4/15/2016 5:09 PM, Paul E. McKenney wrote:
> >On Fri, Apr 15, 2016 at 04:45:32PM -0700, Shi, Yang wrote:
> >>On 4/15/2016 4:26 PM, Paul E. McKenney wrote:
> >>>On Fri, Apr 15, 2016 at 01:28:11PM -0700, Yang Shi wrote:
> When buildi
THP_FILE_ALLOC: how many times a huge page was allocated and put into
the page cache.
THP_FILE_MAPPED: how many times file huge page was mapped.
Signed-off-by: Kirill A. Shutemov
---
include/linux/vm_event_item.h | 7 +++
mm/memory.c | 1 +
mm/vmstat.c | 2 ++
3 fil
This prepares vmscan for file huge pages. We cannot write out
huge pages, so we need to split them on the way out.
Signed-off-by: Kirill A. Shutemov
---
mm/vmscan.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e9fe17c96ef8..df56f6a2dbfe 10064
The git homepage URL in the HOWTO document was updated by commit
e234ebf7881c013b654113f0a208977ac3ce1d01 ("Documentation/HOWTO: update
git home URL"), but the change was not applied to several translations.
This commit updates them.
Signed-off-by: SeongJae Park
---
Documentation/ko_KR/HOWTO | 6 +++---
Documentati
It is possible that acpi_evaluate_integer might fail, in which case value
would be left unset; correct this defect by returning 0 in case of an
error. This is also the correct thing to return because the backlight
subsystem will print the old value of brightness in this case.
Signed-off-by: Giedr
The idea (and most of code) is borrowed again: from Hugh's patchset on
huge tmpfs[1].
Instead of allocating the pte page table upfront, we postpone this until we
have the page to map in hand. This approach opens the possibility to map the
page as huge if the filesystem supports this.
Comparing to Hugh's patch I
Basic scheme is the same as for anon THP.
Main differences:
- File pages are on radix-tree, so we have head->_count offset by
HPAGE_PMD_NR. The count got distributed to small pages during split.
- mapping->tree_lock prevents non-lockless access to pages under split
over radix-tree;
File COW for THP is handled on pte level: just split the pmd.
It's not clear how beneficial allocation of huge pages on COW
faults would be. And it would require some code to make them work.
I think at some point we can consider teaching khugepaged to collapse
pages in COW mappings, but allocatin
This patch extends khugepaged to support collapse of tmpfs/shmem pages.
We share a fair amount of infrastructure with anon-THP collapse.
A few design points:
- First we look for a VMA which can be suitable for mapping a huge
page;
- If the VMA maps shmem file, the rest scan/collapse opera
Add description of THP handling into unevictable-lru.txt.
Signed-off-by: Kirill A. Shutemov
---
Documentation/vm/unevictable-lru.txt | 21 +
1 file changed, 21 insertions(+)
diff --git a/Documentation/vm/unevictable-lru.txt
b/Documentation/vm/unevictable-lru.txt
index fa3b5
Here's basic implementation of huge pages support for shmem/tmpfs.
It's all pretty straightforward:
- shmem_getpage() allocates a huge page if it can and tries to insert it
into the radix tree with shmem_add_to_page_cache();
- shmem_add_to_page_cache() puts the page onto radix-tree if there's
From: Hugh Dickins
Provide a shmem_get_unmapped_area method in file_operations, called
at mmap time to decide the mapping address. It could be conditional
on CONFIG_TRANSPARENT_HUGEPAGE, but save #ifdefs in other places by
making it unconditional.
shmem_get_unmapped_area() first calls the usual
The khugepaged implementation grew to the point where it deserves a
separate source file.
Let's move it to mm/khugepaged.c.
Signed-off-by: Kirill A. Shutemov
---
include/linux/huge_mm.h| 10 +
include/linux/khugepaged.h |6 +
mm/Makefile|2 +-
mm/huge_memory.c
Let's wire up existing madvise() hugepage hints for file mappings.
MADV_HUGEPAGE advise shmem to allocate huge page on page fault in the
VMA. It only has effect if the filesystem is mounted with huge=advise or
huge=within_size.
MADV_NOHUGEPAGE prevents hugepage from being allocated on page fault
Splitting THP PMD is simple: just unmap it as in DAX case.
Unlike DAX, we also remove the page from rmap and drop reference.
Signed-off-by: Kirill A. Shutemov
---
mm/huge_memory.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
i
With postponed page table allocation we have a chance to set up huge pages.
do_set_pte() calls do_set_pmd() if the following criteria are met:
- page is compound;
- pmd entry is pmd_none();
- vma has suitable size and alignment;
Signed-off-by: Kirill A. Shutemov
---
include/linux/huge_mm.h | 2 ++
mm/
Let's add ShmemHugePages and ShmemPmdMapped fields into meminfo and
smaps. It indicates how many times we allocate and map shmem THP.
NR_ANON_TRANSPARENT_HUGEPAGES is renamed to NR_ANON_THPS.
Signed-off-by: Kirill A. Shutemov
---
drivers/base/node.c| 13 +
fs/proc/meminfo.c
vma_adjust_trans_huge() splits the pmd if it's crossing a VMA boundary.
During split we munlock the huge page which requires rmap walk.
rmap wants to take the lock on its own.
Let's move vma_adjust_trans_huge() outside i_mmap_rwsem to fix this.
Signed-off-by: Kirill A. Shutemov
---
mm/mmap.c | 4 ++-
copy_page_range() has a check for "Don't copy ptes where a page fault
will fill them correctly." It works on VMA level. We still copy all page
table entries from private mappings, even if they map page cache.
We can simplify copy_huge_pmd() a bit by skipping file PMDs.
We don't map file private p
split_huge_pmd() for file mappings (and DAX too) is implemented by just
clearing pmd entry as we can re-fill this area from page cache on pte
level later.
This means we don't need deposit page tables when file THP is mapped.
Therefore we shouldn't try to withdraw a page table on zap_huge_pmd()
fil
Both variants of khugepaged_alloc_page() do up_read(&mm->mmap_sem)
first: no point keeping it inside the function.
Signed-off-by: Kirill A. Shutemov
---
mm/khugepaged.c | 25 ++---
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
For shmem/tmpfs we only need to tweak truncate_inode_page() and
invalidate_mapping_pages().
Signed-off-by: Kirill A. Shutemov
---
mm/truncate.c | 22 --
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index b00272810871..4f931ca933
As with anon THP, we only mlock file huge pages if we can prove that the
page is not mapped with PTE. This way we can avoid mlock leak into
non-mlocked vma on split.
We rely on PageDoubleMap() under lock_page() to check if the page
may be PTE mapped. PG_double_map is set by page_add_file_rmap(
On Fri 15 Apr 13:17 PDT 2016, John Stultz wrote:
> On Mon, Mar 28, 2016 at 8:37 PM, Bjorn Andersson
> wrote:
> > From: Bjorn Andersson
> >
> > This introduces the peripheral image loader, for loading WCNSS firmware
> > and boot the core on e.g. MSM8974. The firmware is verified and booted
> > wi
These flags are in use for file THP.
Signed-off-by: Kirill A. Shutemov
---
include/linux/page-flags.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 517707ae8cd1..9d518876dd6a 100644
--- a/include/linux/page
This patch adds a new mount option "huge=". It can have the following values:
- "always":
Attempt to allocate huge pages every time we need a new page;
- "never":
Do not allocate huge pages;
- "within_size":
Only allocate huge page if it will be fully within i_size.
Naive approach: on mapping/unmapping the page as compound we update
->_mapcount on each 4k page. That's not efficient, but it's not obvious
how we can optimize this. We can look into optimization later.
PG_double_map optimization doesn't work for file pages since lifecycle
of file pages is differe
change_huge_pmd() has an assert which is not relevant for file pages.
For a shared mapping it's perfectly fine to have the page table entry
writable, without an explicit mkwrite.
Signed-off-by: Kirill A. Shutemov
---
mm/huge_memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/hug
For now, we have HPAGE_PMD_NR entries in the radix tree for every huge
page. That's suboptimal, and it will be changed to use Matthew's
multi-order entries later.
'add' operation is not changed, because we don't need it to implement
hugetmpfs: shmem uses its own implementation.
Signed-off-by: Ki
The new helper is similar to radix_tree_maybe_preload(), but tries to
preload the number of nodes required to insert (1 << order) contiguous,
naturally-aligned elements.
This is required to push huge pages into pagecache.
Signed-off-by: Kirill A. Shutemov
---
include/linux/radix-tree.h | 1 +
lib/r
Add info about tmpfs/shmem with huge pages.
Signed-off-by: Kirill A. Shutemov
---
Documentation/vm/transhuge.txt | 130 +
1 file changed, 93 insertions(+), 37 deletions(-)
diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
index
We always have vma->vm_mm around.
Signed-off-by: Kirill A. Shutemov
---
arch/alpha/mm/fault.c | 2 +-
arch/arc/mm/fault.c | 2 +-
arch/arm/mm/fault.c | 2 +-
arch/arm64/mm/fault.c | 2 +-
arch/avr32/mm/fault.c | 2 +-
arch/cris/mm/fault.c
On 4/15/2016 5:09 PM, Paul E. McKenney wrote:
On Fri, Apr 15, 2016 at 04:45:32PM -0700, Shi, Yang wrote:
On 4/15/2016 4:26 PM, Paul E. McKenney wrote:
On Fri, Apr 15, 2016 at 01:28:11PM -0700, Yang Shi wrote:
When building locktorture test into kernel image, it keeps printing out
stats informa
Obsolete info about regression postings was removed by commit
5645a717c6ee61e67d38aa9f15cb9db074e1e99d ("Documentation: HOWTO: remove
obsolete info about regression postings") but the change was not applied
to the translations. This commit applies it to the translations.
Signed-off-by: SeongJae Park
---
Docu
On 04/13, Eric Anholt wrote:
> Signed-off-by: Eric Anholt
> ---
Acked-by: Stephen Boyd
Or can I merge this? It wasn't addressed To: me so who knows.
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
On 04/13, Eric Anholt wrote:
> In poweroff, we set the reset bit and the power down bit, but only
> managed to unset the reset bit for poweron. This meant that if HDMI
> did -EPROBE_DEFER after it had grabbed its clocks, we'd power down the
> PLLH (that had been on at boot time) and never recover.
From: Weidong Wang
Date: Thu, 14 Apr 2016 15:43:52 +0800
> When tested the PHY SGMII Loopback:
> 1.set the LOOPBACK bit,
> 2.set the autoneg to AUTONEG_DISABLE, it calls the
> genphy_setup_forced which will clear the bit.
>
> The BMCR_LOOPBACK bit should be preserved.
>
> As Florian pointed out
On Fri, Apr 15, 2016 at 04:45:32PM -0700, Shi, Yang wrote:
> On 4/15/2016 4:26 PM, Paul E. McKenney wrote:
> >On Fri, Apr 15, 2016 at 01:28:11PM -0700, Yang Shi wrote:
> >>When building locktorture test into kernel image, it keeps printing out
> >>stats information even though there is no lock type
On 04/14, Yoshinori Sato wrote:
> Some SoC use 16bit-word register. And required 16bit-word access.
> This changes add 16-bit access mode.
>
> Signed-off-by: Yoshinori Sato
Please implement a custom divider for your hardware instead of
adding this support to the core. You can call functions such
On 04/14, Masahiro Yamada wrote:
>
> OK, now I notice another problem in my code;
> if foo_clk_init() fails for reason [2],
> clk_disable() WARN's due to zero enable_count.
>
> if (WARN_ON(core->enable_count == 0))
> return;
>
>
>
> Perhaps, I got screwed up by splitting clock init st