On 06. 05. 20, 10:02, Hyunki Koo wrote:
> Support 32-bit access for the TX/RX hold registers UTXH and URXH.
>
> This is required for some newer SoCs.
>
> Signed-off-by: Hyunki Koo
> Reviewed-by: Krzysztof Kozlowski
> Tested on Odroid HC1 (Exynos5422):
> Tested-by: Krzysztof Kozlowski
> ---
>
Hi Andrew,
On Mon, May 18, 2020 at 04:06:56PM -0700, Andrew Morton wrote:
> On Mon, 18 May 2020 14:13:50 -0700 Minchan Kim wrote:
>
> > Andrew, I sent this patch without folding it into the previous syscall-introducing
> > patches because it could be arguable. If you want to fold it into each
> >
On 2020/5/19 9:51 AM, Cindy Lu wrote:
Hi, Jason
It works OK in the latest version of the qemu vdpa code, so I think the
patch is OK.
Thanks
Cindy
Thanks for the testing. (BTW, we'd better not do top posting when
discussing in the community.)
So,
Acked-by: Jason Wang
On Wed, May 13, 2020
Display BAT flags the same way as page flags: rwx and wimg
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/bats.c | 37 ++-
1 file changed, 15 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/bats.c b/arch/powerpc/mm/ptdump/bats.c
Display the size of areas mapped with BATs.
For that, the size display for pages is refactored.
Signed-off-by: Christophe Leroy
---
v2: Add missing include of linux/seq_file.h (Thanks to kbuild test robot)
---
arch/powerpc/mm/ptdump/bats.c | 4
arch/powerpc/mm/ptdump/ptdump.c | 23
The main purpose of this big series is to:
- reorganise huge page handling to avoid using mm_slices.
- use huge pages to map kernel memory on the 8xx.
The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
It uses 2 Level page tables, PGD having 1024 entries, each entry
covering 4M address space.
In order to have all flags fit on an 80-character-wide screen,
reduce the flags to 1 char (2 where ambiguous).
No cache is 'i'
User is 'ur' (Supervisor would be sr)
Shared (for 8xx) becomes 'sh' (it was previously displayed as 'user'
when not shared, but that was misleading because it's not entirely right)
Signed-off-by:
Doing KASAN page allocation in MMU_init is too early: the kernel doesn't
have access to the entire memory space yet, and memblock_alloc() fails
when the kernel is a bit big.
Do it from kasan_init() instead.
Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support")
Cc: sta...@vger.kernel.org
PPC_PIN_TLB options are dedicated to the 8xx, move them into
the 8xx Kconfig.
While we are at it, add some text to explain what it does.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig | 20 ---
arch/powerpc/platforms/8xx/Kconfig | 41
Commit 55c8fc3f4930 ("powerpc/8xx: reintroduce 16K pages with HW
assistance") redefined pte_t as a struct of 4 pte_basic_t, because
in 16K pages mode there are four identical entries in the page table.
But hugepd entries for 8M pages require only one entry of size
pte_basic_t. So there is no point
PPC64 takes 3 additional parameters compared to PPC32:
- mm
- address
- huge
These 3 parameters will be needed in order to perform different
actions depending on the page size on the 8xx.
Make pte_update() prototype identical for PPC32 and PPC64.
This allows dropping an #ifdef in
In order to allow sub-arches to allocate KASAN regions using optimised
methods (huge pages on 8xx, BATs on BOOK3S, ...), declare
kasan_init_region() weak.
Also make kasan_init_shadow_page_tables() accessible from outside,
so that it can be called from the specific kasan_init_region()
functions if
On PPC32, __ptep_test_and_clear_young() takes mm->context.id.
In preparation for standardising pte_update() parameters between PPC32 and
PPC64, __ptep_test_and_clear_young() needs mm instead of mm->context.id.
Replace the context parameter by mm.
Signed-off-by: Christophe Leroy
---
Allocate static page tables for the fixmap area. This allows
setting mappings through page tables before memblock is ready.
That's needed to use early_ioremap() early and to use standard
page mappings with fixmap.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/fixmap.h | 4
Up to now, linear and IMMR mappings are managed via huge TLB entries
through specific code directly in TLB miss handlers. This implies
some patching of the TLB miss handlers at startup, and a lot of
dedicated code.
Remove all this specific dedicated code.
For now we are back to normal handling
Setting init mem to NX shall depend on sinittext being mapped by
block, not on stext being mapped by block.
Setting text and rodata to RO shall depend on stext being mapped by
block, not on sinittext being mapped by block.
Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX")
CONFIG_8xx_COPYBACK was there to help disabling copyback cache mode
for debugging hardware. But nobody will design new boards with 8xx now.
All 8xx platforms select it, so make it the default and remove
the option.
Also remove the Mx_RESETVAL values which are pretty useless and hide
the real
Mapping RO data as ROX is not an issue since that data
cannot be modified to introduce an exploit.
PPC64 accepts to have RO data mapped ROX, as a trade off
between kernel size and strictness of protection.
On PPC32, kernel size is even more critical as amount of
memory is usually small.
Now that space has been freed next to the DTLB miss handler,
its associated DTLB perf handling can be brought back in
the same place.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git
On Thu, May 14, 2020 at 01:04:38AM +0200, Daniel Borkmann wrote:
> Aside from comments on list, the series looks reasonable to me. For BPF
> the bpf_probe_read() helper would be slightly penalized for probing user
> memory given we now test on copy_from_kernel_nofault() first and if that
> fails
Prepare ITLB handler to handle _PAGE_HUGE when CONFIG_HUGETLBFS
is enabled. This means that the L1 entry has to be kept in r11
until L2 entry is read, in order to insert _PAGE_HUGE into it.
Also move pgd_offset helpers before pte_update() as they
will be needed there in next patch.
Only 40x still uses PTE_ATOMIC_UPDATES.
40x cannot select CONFIG_PTE_64BIT.
Drop handling of PTE_ATOMIC_UPDATES:
- In nohash/64
- In nohash/32 for CONFIG_PTE_64BIT
Keep PTE_ATOMIC_UPDATES only for nohash/32 for !CONFIG_PTE_64BIT
Signed-off-by: Christophe Leroy
---
The code to setup linear and IMMR mapping via huge TLB entries is
not called anymore. Remove it.
Also remove the perf driver's handling of the removed code's exit points.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 8 +-
arch/powerpc/kernel/head_8xx.S
Similar to PPC64, accept to map RO data as ROX as a trade-off between
security and memory usage.
Having RO data executable is not a high risk as RO data can't be
modified to forge an exploit.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig | 26
pte_update() is a bit special for the 8xx. At the time
being, that's an #ifdef inside the nohash/32 pte_update().
As we are going to make it even more special in the coming
patches, create a dedicated version for pte_update() for 8xx.
Signed-off-by: Christophe Leroy
---
Pinned TLBs are 8M. Now that there is no strict boundary anymore
between text and RO data, it is possible to use 8M pinned executable
TLB that covers both text and RO data.
When PIN_TLB_DATA or PIN_TLB_TEXT is selected, enforce 8M RW data
alignment and allow STRICT_KERNEL_RWX.
Signed-off-by:
Add a function to early map kernel memory using huge pages.
For 512k pages, just use standard page table and map in using 512k
pages.
For 8M pages, create a hugepd table and populate the two PGD
entries with it.
This function can only be used to create page tables at startup. Once
the regular
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with BATs.
In order to map with BATs, also enforce data alignment. Set
it by default to 256M, which is a good compromise to keep
enough BATs available for KASAN and IMMR as well.
Signed-off-by: Christophe Leroy
---
Map the IMMR area with a single 512k huge page.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/nohash/8xx.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index 72fb75f2a5f1..f8fff1fa72e3 100644
---
At the time being, 512k huge pages are handled through hugepd page
tables. The PMD entry is flagged as a hugepd pointer, which
means that only 512k hugepages can be managed in that 4M block.
However, the hugepd table has the same size as a normal page
table, and 512k entries can therefore be
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with hugepages and pinned TLB.
In order to map with hugepages, also enforce a 512kB minimum data
alignment. That's a trade-off between size and speed, taking into
account that DEBUG_PAGEALLOC is a debug option. Anyway the
Now that linear and IMMR dedicated TLB handling is gone, kernel
boundary address comparison is similar in ITLB miss handler and
in DTLB miss handler.
Create a macro named compare_to_kernel_boundary.
When TASK_SIZE is strictly below 0x8000 and PAGE_OFFSET is
above 0x8000, it is enough to
Only early debug requires IMMR to be mapped early.
No need to set it up and pin it in assembly. Map it
through page tables at udbg init when necessary.
If CONFIG_PIN_TLB_IMMR is selected, pin it once we
don't need the 32 Mbytes of pinned RAM anymore.
Signed-off-by: Christophe Leroy
---
v2: Disable
Pinned TLBs cannot be modified when the MMU is enabled.
Create a function to rewrite the pinned TLB entries with MMU off.
To set pinned TLB entries, we have to turn off the MMU, disable pinning,
do a TLB flush (either with tlbie or tlbia), then reprogram
the TLB entries, enable pinning and turn the MMU back on.
If
When CONFIG_PTE_64BIT is set, pte_update() operates on
'unsigned long long'
When CONFIG_PTE_64BIT is not set, pte_update() operates on
'unsigned long'
In asm/page.h, we have pte_basic_t which is 'unsigned long long'
when CONFIG_PTE_64BIT is set and 'unsigned long' otherwise.
Refactor
In order to properly display information regardless of the page size,
it is necessary to take the real page size into account.
Signed-off-by: Christophe Leroy
Fixes: cabe8138b23c ("powerpc: dump as a single line areas mapping a single
physical page.")
Cc: sta...@vger.kernel.org
---
v3: Fixed sizes
Implement a kasan_init_region() dedicated to 8xx that
allocates KASAN regions using huge pages.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan/8xx.c| 74 ++
arch/powerpc/mm/kasan/Makefile | 1 +
2 files changed, 75 insertions(+)
create mode
512k pages are now standard pages, so only 8M pages
are hugepte.
No more handling of normal page tables through hugepd allocation
and freeing, and hugepte helpers can also be simplified.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 7 +++
As the 8xx now manages 512k pages in standard page tables,
it doesn't need CONFIG_PPC_MM_SLICES anymore.
Don't select it anymore and remove all related code.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 64
At startup, map 32 Mbytes of memory through 4 pages of 8M,
and pin them unconditionally. They need to be pinned because
KASAN is using page tables early and the TLBs might be
dynamically replaced otherwise.
Remove RSV4I flag after installing mappings unless
CONFIG_PIN_TLB_ is selected.
On Mon, May 18, 2020 at 06:10:58PM -0700, Hugh Dickins wrote:
> Hi Pavel,
>
> On Mon, 18 May 2020, Pavel Machek wrote:
>
> > Hi!
> >
> > > This may not risk an actual deadlock, since shmem inodes do not take
> > > part in writeback accounting, but there are several easy ways to avoid
> > > it.
Map linear memory space with 512k and 8M pages whenever
possible.
Three mappings are performed:
- One for kernel text
- One for RO data
- One for the rest
Separating the mappings is done to be able to update the
protection later when using STRICT_KERNEL_RWX.
The ITLB miss handler now needs to
Implement a kasan_init_region() dedicated to book3s/32 that
allocates KASAN regions using BATs.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/kasan.h | 1 +
arch/powerpc/mm/kasan/Makefile| 1 +
arch/powerpc/mm/kasan/book3s_32.c | 57 +++
At the time being, KASAN_SHADOW_END is 0x1, which
is 0 in 32-bit representation.
This leads to a couple of issues:
- kasan_remap_early_shadow_ro() does nothing because the comparison
k_cur < k_end is always false.
- In ptdump, address comparison for markers display fails and the
marker's
In case (k_start & PAGE_MASK) doesn't equal k_start, 'va' will never be
NULL although 'block' is NULL.
Check the return of memblock_alloc() directly instead of
the resulting address in the loop.
Fixes: 509cd3f2b473 ("powerpc/32: Simplify KASAN init")
Signed-off-by: Christophe Leroy
---
Commit 45ff3c559585 ("powerpc/kasan: Fix parallel loading of
modules.") added spinlocks to manage parallel module loading.
Since then commit 47febbeeec44 ("powerpc/32: Force KASAN_VMALLOC for
modules") converted the module loading to KASAN_VMALLOC.
The spinlocking has then become unneeded and
kasan_remap_early_shadow_ro() and kasan_unmap_early_shadow_vmalloc()
are both updating the early shadow mapping: the first one sets
the mapping read-only while the other clears the mapping.
Refactor and create kasan_update_early_region()
Signed-off-by: Christophe Leroy
---
Reorder flags in a more logical way:
- Page size (huge) first
- User
- RWX
- Present
- WIMG
- Special
- Dirty and Accessed
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/8xx.c| 30 +++---
arch/powerpc/mm/ptdump/shared.c | 30 +++---
The 8xx is about to map kernel linear space and IMMR using huge
pages.
In order to display those pages properly, ptdump needs to handle
hugepd tables at PGD level.
For the time being do it only at PGD level. Further patches may
add handling of hugepd tables at lower level for other platforms
On Mon, May 18, 2020 at 08:00:25PM -0700, Saravana Kannan wrote:
> When SYNC_STATE_ONLY support was added in commit 05ef983e0d65 ("driver
> core: Add device link support for SYNC_STATE_ONLY flag"),
> device_link_add() incorrectly skipped adding the new SYNC_STATE_ONLY
> device link to the
For platforms using shared.c (4xx, Book3e, Book3s/32),
also handle the _PAGE_COHERENT flag which corresponds to the
M bit of the WIMG flags.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/shared.c | 5 +
1 file changed, 5 insertions(+)
diff --git
This is the start of the stable review cycle for the 5.6.14 release.
There are 192 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Thu, 21 May 2020 05:45:41 +.
Anything
On Mon, May 18, 2020 at 07:35:48PM +0200, Greg Kroah-Hartman wrote:
> From: Alexandre Belloni
>
> [ Upstream commit 99f81afc139c6edd14d77a91ee91685a414a1c66 ]
I notice 99f81afc139c has been reverted in mainline with commit b43bd72835a5.
The revert commit points out that:
"It was papering
On Thu, May 14, 2020 at 10:13:18AM +0900, Masami Hiramatsu wrote:
> > + bool strict)
> > {
> > long ret;
> > mm_segment_t old_fs = get_fs();
> >
> > + if (!probe_kernel_read_allowed(dst, src, size, strict))
> > + return -EFAULT;
>
> Could you make this return
On Mon, May 18, 2020 at 10:34:34PM +, Olsak, Marek wrote:
> [AMD Official Use Only - Internal Distribution Only]
>
> Hi Greg,
>
> I disagree with this. Bumping the driver version will have implications on
> other new features, because it's like an ABI barrier exposing new
> functionality.
On Mon, May 18, 2020 at 07:10:45PM -0700, Guenter Roeck wrote:
> On 5/18/20 10:34 AM, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 5.6.14 release.
> > There are 194 patches in this series, all will be posted as a response
> > to this one. If anyone has any
From: Pierre-Louis Bossart
This is a preparatory patch before the introduction of the
sdw_master_type. The SoundWire slave support is slightly modified with
the use of a sdw_slave_type, and the uevent handling moves to
slave.c (since it's not necessary for the master).
No functionality change
We need to enable runtime_pm on master device with generic helpers,
so that a Slave-initiated wake is propagated to the bus parent.
Signed-off-by: Bard Liao
---
drivers/soundwire/master.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/soundwire/master.c
From: Pierre-Louis Bossart
In the existing SoundWire code, Master Devices are not explicitly
represented - only SoundWire Slave Devices are exposed (the use of
capital letters follows the SoundWire specification conventions).
With the existing code, the bus is handled without using a proper
From: Pierre-Louis Bossart
In preparation for future extensions, rename functions to use
sdw_bus_master prefix and add a parent and fwnode argument to
sdw_bus_master_add to help with device registration in follow-up
patches.
No functionality change, just renames and additional arguments.
The
Add a unique id for each bus.
Suggested-by: Vinod Koul
Signed-off-by: Bard Liao
---
drivers/soundwire/bus.c | 20
include/linux/soundwire/sdw.h | 2 ++
2 files changed, 22 insertions(+)
diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
index
This series adds sdw master devices support.
changes in v2:
- Allocate sdw_master_device dynamically
- Use unique bus id as master id
- Keep checking parent devices
- Enable runtime_pm on Master device
Bard Liao (2):
soundwire: bus: add unique bus id
soundwire: master: add runtime pm
The GC860 has one GPU device which has a 2d and 3d core. In this case
we want to expose perfmon information for both cores.
The driver has one array which contains all possible perfmon domains
with some meta data - doms_meta. Here we can see that for the GC860
two elements of that array are
On 5/19/2020 1:17 PM, Kishon Vijay Abraham I wrote:
Dilip,
On 5/19/2020 9:26 AM, Dilip Kota wrote:
On 5/18/2020 9:49 PM, Kishon Vijay Abraham I wrote:
Dilip,
On 5/15/2020 1:43 PM, Dilip Kota wrote:
This patch series adds Intel ComboPhy driver, respective yaml schemas
Changes on v8:
On Mon, May 18, 2020 at 11:26:29AM -0700, Luck, Tony wrote:
> Maybe it isn't pretty. But I don't see another practical solution.
>
> The VMM is doing exactly the right thing here. It should not trust
> that the guest will behave and not touch the poison location again.
> If/when the guest does
On 26. 02. 20 3:16, Guenter Roeck wrote:
> On 2/24/20 3:26 PM, Franz Forstmayr wrote:
>> Add initial support for INA260 power monitor with integrated shunt.
>> Registers are different from other INA2xx devices, that's why a small
>> translation table is used.
>>
>> Signed-off-by: Franz Forstmayr
This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.
There is some helpful background in [2]: basically, this is a
On Mon 18 May 22:06 PDT 2020, Vinod Koul wrote:
> The xhci-pci-renesas module exports symbols for xhci-pci to load the
> RAM/ROM on renesas xhci controllers. We had a dependency which works
> when both modules are built-in or both are modules.
>
> But if xhci-pci is inbuilt and xhci-pci-renesas in
Dilip,
On 5/19/2020 9:26 AM, Dilip Kota wrote:
>
> On 5/18/2020 9:49 PM, Kishon Vijay Abraham I wrote:
>> Dilip,
>>
>> On 5/15/2020 1:43 PM, Dilip Kota wrote:
>>> This patch series adds Intel ComboPhy driver, respective yaml schemas
>>>
>>> Changes on v8:
>>> As per PHY Maintainer's request
On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang
wrote:
> We can cleanup code a little by call detach_page_private here.
>
> ...
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space
> *mapping,
> if (rc !=
On Mon, 2020-05-18 at 20:44 -0700, Andrew Morton wrote:
> On Tue, 19 May 2020 11:29:46 +0800 王程刚 wrote:
>
> > The maximum length pr_notice() can print may be less than the command line
> > length, so multiple calls are needed to print it all.
> > For example, arm64 has 2048 bytes command line length, but printk
Hi all,
After merging the drm-msm tree, today's linux-next build (arm
multi_v7_defconfig) failed like this:
ERROR: modpost: "__aeabi_ldivmod" [drivers/gpu/drm/msm/msm.ko] undefined!
ERROR: modpost: "__aeabi_uldivmod" [drivers/gpu/drm/msm/msm.ko] undefined!
Caused by commit
04d9044f6c57
On Fri, 15 May 2020 at 00:12, Jeffrey Hugo wrote:
>
> Introduction:
> Qualcomm Cloud AI 100 is a PCIe adapter card which contains a dedicated
> SoC ASIC for the purpose of efficiently running Deep Learning inference
> workloads in a data center environment.
>
> The official press release can be
> From: Anson Huang
> Sent: Tuesday, May 19, 2020 11:56 AM
>
> Convert the i.MX GPT binding to DT schema format using json-schema.
>
> Signed-off-by: Anson Huang
Reviewed-by: Dong Aisheng
Regards
Aisheng
The xhci-pci-renesas module exports symbols for xhci-pci to load the
RAM/ROM on renesas xhci controllers. We had a dependency which works
when both modules are built-in or both are modules.
But if xhci-pci is inbuilt and xhci-pci-renesas in module, we get below
linker error:
drivers/usb/host/xhci-pci.o:
On Mon, May 18, 2020 at 06:55:00PM +0200, Borislav Petkov wrote:
> On Mon, May 18, 2020 at 08:36:25AM -0700, Luck, Tony wrote:
> > The VMM gets the page fault (because the unmapping of the guest
> > physical address is at the VMM EPT level). The VMM can't map a new
> > page into that guest
This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.
There is some helpful background in [2]: basically, this is a
On 19-05-20, 00:37, Anders Roxell wrote:
> On Mon, 18 May 2020 at 21:57, Vinod Koul wrote:
> >
> > Hi Anders,
>
> Hi Vinod,
>
> >
> > On 18-05-20, 19:53, Anders Roxell wrote:
> > > On Wed, 6 May 2020 at 08:01, Vinod Koul wrote:
> > > >
> > > > Some renesas controllers like uPD720201 and
Hi Rob,
On 19/5/2020 2:27 am, Rob Herring wrote:
On Thu, May 14, 2020 at 8:08 PM Ramuthevar, Vadivel MuruganX
wrote:
Hi Rob,
On 14/5/2020 8:57 pm, Rob Herring wrote:
On Wed, 13 May 2020 18:46:14 +0800, Ramuthevar,Vadivel MuruganX wrote:
From: Ramuthevar Vadivel Murugan
Add YAML file for
On 2020/05/19 12:31, Xiaoming Ni wrote:
> Some boundary (.extra1 .extra2) constants (E.g: neg_one two) in
> sysctl.c are used in multiple features. Move these variables to
> sysctl_vals to avoid adding duplicate variables when cleaning up
> sysctls table.
>
> Signed-off-by: Xiaoming Ni
>
From: Ramuthevar Vadivel Murugan
This patch adds the new NAND Flash Controller (NFC) IP support
on Intel's Lightning Mountain (LGM) SoC.
DMA is used for burst data transfer operation; the DMA HW supports
aligned 32-bit memory addresses and aligned data access by default.
DMA burst of 8
From: Ramuthevar Vadivel Murugan
Add YAML file for dt-bindings to support NAND Flash Controller
on Intel's Lightning Mountain SoC.
Signed-off-by: Ramuthevar Vadivel Murugan
---
.../devicetree/bindings/mtd/intel,lgm-nand.yaml| 91 ++
1 file changed, 91 insertions(+)
On 18-05-20, 19:53, Jassi Brar wrote:
> That is a client/protocol property and has nothing to do with the
> controller dt node.
That's what I am concerned about, i.e. different ways of passing the
doorbell number via DT.
--
viresh
This patch adds the new NAND Flash Controller (NFC) IP support
on Intel's Lightning Mountain (LGM) SoC.
DMA is used for burst data transfer operation; the DMA HW supports
aligned 32-bit memory addresses and aligned data access by default.
DMA bursts of 8 are supported. The data register is used to support the
> On May 14, 2020, at 11:48 PM, HORIGUCHI NAOYA(堀口 直也)
> wrote:
>
> I'm very sorry to be quiet for long, but I think that I agree with
> this patchset and try to see what happens if merged into mmotm,
> although we need rebasing to latest mmotm and some basic testing.
Looks like Oscar have
Convert the i.MX TPM binding to DT schema format using json-schema.
Signed-off-by: Anson Huang
Reviewed-by: Dong Aisheng
---
Changes since V1:
- remove unnecessary maxItems for clocks/clock-names.
---
.../devicetree/bindings/timer/nxp,tpm-timer.txt| 28 --
Convert the i.MX GPT binding to DT schema format using json-schema.
Signed-off-by: Anson Huang
---
Changes since V1:
- remove unnecessary compatible item descriptions;
- remove unnecessary maxItems for clocks/clock-names;
---
.../devicetree/bindings/timer/fsl,imxgpt.txt |
This patch series converts i.MX GPT, TPM and system counter timer
binding to json-schema, test build passed.
Changes compared to V1 are listed in each patch.
Anson Huang (3):
dt-bindings: timer: Convert i.MX GPT to json-schema
dt-bindings: timer: Convert i.MX TPM to json-schema
On Mon, May 18, 2020 at 10:40 PM Viresh Kumar wrote:
>
> On 18-05-20, 18:29, Bjorn Andersson wrote:
> > On Thu 14 May 22:17 PDT 2020, Viresh Kumar wrote:
> > > This stuff has been doing rounds on the mailing list since several years
> > > now with no agreed conclusion by all the parties. And here
Convert the i.MX SYSCTR binding to DT schema format using json-schema.
Signed-off-by: Anson Huang
Reviewed-by: Dong Aisheng
---
No changes.
---
.../devicetree/bindings/timer/nxp,sysctr-timer.txt | 25 --
.../bindings/timer/nxp,sysctr-timer.yaml | 54 ++
2
On Tue, May 19, 2020 at 1:30 AM Gregory Farnum wrote:
>
> Maybe we resolved this conversation; I can't quite tell...
I think v2 patch wraps it up...
[...]
> > >
> > > Questions:
> > > 1. Does sync() result in fully purging inodes on MDS?
> >
> > I don't think so, but again, that code is not
On 5/18/2020 9:49 PM, Kishon Vijay Abraham I wrote:
Dilip,
On 5/15/2020 1:43 PM, Dilip Kota wrote:
This patch series adds Intel ComboPhy driver, respective yaml schemas
Changes on v8:
As per PHY Maintainer's request add description in comments for doing
register access through
Convert the i.MX reset binding to DT schema format using json-schema.
Signed-off-by: Anson Huang
Reviewed-by: Dong Aisheng
---
Changes since V2:
- remove unnecessary compatible item descriptions.
---
.../devicetree/bindings/reset/fsl,imx-src.txt | 49 -
tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
master
head: 642b151f45dd54809ea00ecd3976a56c1ec9b53d
commit: 295bcca84916cb5079140a89fccb472bb8d1f6e2 linux/bits.h: add compile time
sanity check of GENMASK inputs
date: 6 weeks ago
config: arm-defconfig (attached as
On Wed, May 13, 2020 at 4:28 AM Vitaly Wool wrote:
>
>
>
> On Wed, May 13, 2020, 2:36 AM Qian Cai wrote:
>>
>> Put zswap z3fold pages into the memory and then offline those memory would
>> trigger an infinite loop here in
>>
>> __offline_pages() --> do_migrate_range() because there is no error
On Mon, May 18, 2020 at 8:47 PM Luis Henriques wrote:
>
> Similarly to commit 03f219041fdb ("ceph: check i_nlink while converting
> a file handle to dentry"), this fixes another corner case with
> name_to_handle_at/open_by_handle_at. The issue has been detected by
> xfstest generic/467, when
On Tue, 19 May 2020 11:29:46 +0800 王程刚 wrote:
> The maximum length pr_notice() can print may be less than the command line
> length, so multiple calls are needed to print it all.
> For example, arm64 has 2048 bytes command line length, but printk maximum
> length is only 1024 bytes.
I can see why that might be a
On 18-05-20, 20:37, Bjorn Andersson wrote:
> On Mon 18 May 20:31 PDT 2020, Viresh Kumar wrote:
>
> > On 18-05-20, 11:40, Bjorn Andersson wrote:
> > > It most certainly does.
> > >
> > > With INTERCONNECT as a bool we can handle its absence with stub
> > > functions - like every other framework
Like commit b1b3f49 ("ARM: config: sort select statements alphanumerically"),
sort all our select statements alphanumerically using the perl
script from that commit.
As suggested by Andrew Morton:
This is a pet peeve of mine. Any time there's a long list of items
(header file