Print the afs_operation debug_id when logging an unexpected change in the
data version. This allows the logged message to be matched against
tracelines.
Signed-off-by: David Howells
cc: linux-...@lists.infradead.org
cc: linux-cach...@redhat.com
cc: linux-fsde...@vger.kernel.org
Link:
Disable use of the fscache I/O routines by the AFS filesystem. It's about
to transition to passing iov_iters down and fscache is about to have its
I/O path converted to use iov_iter, so all of that needs to change.
Signed-off-by: David Howells
cc: linux-...@lists.infradead.org
cc: linux-cach...@redhat.com
Pass a pointer to the page being accessed into the dirty region helpers so
that the size of the page can be determined in case it's a transparent huge
page.
This also required the page to be passed into the afs_page_dirty trace
point - so there's no need to specifically pass in the index or
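The dirty-region clamping the snippet above describes can be sketched in a few lines. This is an illustrative userspace sketch only: the helper names are invented, and `psize` stands in for the page size the real helpers would now read from the passed-in page (e.g. the size of a transparent huge page), which is the reason the page pointer is needed.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: merge a newly dirtied byte range [from, to) into an
 * existing dirty region [dirty_from, dirty_to), clamping to the page's
 * actual size so the same code works for a transparent huge page. */
static size_t merged_from(size_t dirty_from, size_t from)
{
	return from < dirty_from ? from : dirty_from;
}

static size_t merged_to(size_t dirty_to, size_t to, size_t psize)
{
	if (to > psize)		/* never mark beyond the page's real size */
		to = psize;
	return to > dirty_to ? to : dirty_to;
}
```

Because the clamp depends on the page's own size, the page (not just its index) has to be visible to the helper and to the tracepoint.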
Add an alternate API by which the cache can be accessed through a kiocb,
doing async DIO, rather than using the current API that tells the cache
where all the pages are.
The new API is intended to be used in conjunction with the netfs helper
library. A filesystem must pick one or the other and
Take a reference on a page when PG_private_2 is set and drop it once the
bit is unlocked[1].
Reported-by: Linus Torvalds
Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Linus Torvalds
cc: linux...@kvack.org
cc: linux-cach...@redhat.com
cc: linux-...@lists.infradead.org
cc:
Add an interface to the netfs helper library for reading data from the
cache instead of downloading it from the server and support for writing
data just downloaded or cleared to the cache.
The API passes an iov_iter to the cache read/write routines to indicate the
data/buffer to be used. This is
On Wed, Mar 10, 2021 at 8:30 AM Pavel Machek wrote:
>
> Hi!
>
> > > > I'd like people from Intel to contact me. There's more to fix there,
> > > > and AFAICT original author went away.
> > >
> > > The following message to was
> > > undeliverable.
> >
> > > : Recipient
> > > +address rejected:
Add a helper to do the pre-reading work for the netfs write_begin address
space op.
Changes
- Added flag to netfs_subreq_terminated() to indicate that the caller may
have been running async and stuff that might sleep needs punting to a
workqueue (can't use in_softirq()[1]).
Signed-off-by:
te_alloc_map(mm, pmdp, addr);
> > } else if (sz == PMD_SIZE) {
> > - if (IS_ENABLED(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) &&
> > - pud_none(READ_ONCE(*pudp)))
> > + if (want_pmd_share(vma, addr) && pud_none(READ_O
Gather statistics from the netfs interface that can be exported through a
seqfile. This is intended to be called by a later patch when viewing
/proc/fs/fscache/stats.
Signed-off-by: David Howells
Reviewed-by: Jeff Layton
cc: Matthew Wilcox
cc: linux...@kvack.org
cc: linux-cach...@redhat.com
Add three tracepoints to track the activity of the read helpers:
(1) netfs/netfs_read
This logs entry to the read helpers and also expansion of the range in
a readahead request.
(2) netfs/netfs_rreq
This logs the progress of netfs_read_request objects which track
read
Add a pair of helper functions:
(*) netfs_readahead()
(*) netfs_readpage()
to do the work of handling a readahead or a readpage, where the page(s)
that form part of the request may be split between the local cache, the
server or just require clearing, and may be single pages and transparent
Add unlock_page_fscache() as an alias of unlock_page_private_2(). This
allows a page 'locked' with PG_fscache to be unlocked.
Add wait_on_page_fscache() to wait for PG_fscache to be unlocked.
[Linus suggested putting the fscache-themed functions into the
caching-specific headers rather than
Mickaël Salaün writes:
> From: Mickaël Salaün
>
> Being able to easily change root directories eases some development
> workflows and can be used as a tool to strengthen
> unprivileged security sandboxes. chroot(2) is not an access-control
> mechanism per se, but it can be used to
Move the PG_fscache related helper funcs (such as SetPageFsCache()) to
linux/netfs.h rather than linux/fscache.h as the intention is to move to a
model where they're used by the network filesystem and the helper library,
but not by fscache/cachefiles itself.
Signed-off-by: David Howells
cc:
Make a netfs helper module to manage read request segmentation, caching
support and transparent huge page support on behalf of a network
filesystem.
Signed-off-by: David Howells
Reviewed-by: Jeff Layton
cc: Matthew Wilcox
cc: linux...@kvack.org
cc: linux-cach...@redhat.com
cc:
Add interface documentation for the netfs helper library.
Signed-off-by: David Howells
---
Documentation/filesystems/index.rst |1
Documentation/filesystems/netfs_library.rst | 526 +++
2 files changed, 527 insertions(+)
create mode 100644
Provide a function, readahead_expand(), that expands the set of pages
specified by a readahead_control object to encompass a revised area with a
proposed size and length.
The proposed area must include all of the old area and may be expanded yet
more by this function so that the edges align on
On 3/10/21 10:24 AM, Roi Dayan wrote:
>
>
> On 2021-03-08 5:11 AM, Jia-Ju Bai wrote:
>> When slave is NULL or slave_ops->ndo_neigh_setup is NULL, no error
>> return code of bond_neigh_init() is assigned.
>> To fix this bug, ret is assigned with -EINVAL in these cases.
>>
>> Fixes:
Add a function, unlock_page_private_2(), to unlock PG_private_2 analogous
to that of PG_locked. Add a kerneldoc banner to it indicating the example
usage case.
A wrapper will need to be placed in the netfs header in the patch that adds
that.
[This implements a suggestion by Linus[1] to not mix
On Wed, Mar 10, 2021 at 4:09 AM Masahiro Yamada wrote:
>
> This piece of code converts the target suffix to the dtc -O option:
>
> *.dtb -> -O dtb
> *.dt.yaml -> -O yaml
>
> Commit ce88c9c79455 ("kbuild: Add support to build overlays (%.dtbo)")
> added the third case:
>
>
On Wed, Mar 10, 2021 at 01:51:28PM +, Matthew Wilcox (Oracle) wrote:
> There's no need to give the page an address_space. Leaving the
> page->mapping as NULL will cause the VM to handle set_page_dirty()
> the same way that it's set now, and that was the only reason to
> set the address_space
Add an iterator, ITER_XARRAY, that walks through a set of pages attached to
an xarray, starting at a given page and offset and walking for the
specified amount of bytes. The iterator supports transparent huge pages.
The iterate_xarray() macro calls the helper function with the RCU read
lock held.
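The per-page stepping such an iterator performs can be sketched with plain arithmetic. A hypothetical userspace illustration (fixed 4 KiB pages assumed; the real iterator also handles transparent huge pages):

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096u

/* Count how many page-sized segments cover 'bytes' of data starting at
 * byte 'offset' within the first page - the walk an xarray-backed
 * iterator conceptually makes from page to page. */
static unsigned int pages_touched(size_t offset, size_t bytes)
{
	unsigned int n = 0;

	while (bytes) {
		size_t seg = SKETCH_PAGE_SIZE - (offset % SKETCH_PAGE_SIZE);

		if (seg > bytes)
			seg = bytes;
		bytes -= seg;
		offset += seg;
		n++;
	}
	return n;
}
```

A range that starts mid-page straddles one more page than a page-aligned range of the same length, which is exactly the partial-first-segment case the iterator must handle.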
Here's a set of patches to do two things:
(1) Add a helper library to handle the new VM readahead interface. This
is intended to be used unconditionally by the filesystem (whether or
not caching is enabled) and provides a common framework for doing
caching, transparent huge
On Wed, Mar 10, 2021 at 08:37:37AM -0800, kan.li...@linux.intel.com wrote:
> From: Ricardo Neri
>
> Add feature enumeration to identify a processor with Intel Hybrid
> Technology: one in which CPUs of more than one type are in the same package.
> On a hybrid processor, all CPUs support the same
On Wed, Mar 10, 2021 at 10:41:18AM -0600, Pierre-Louis Bossart wrote:
> would this work?
> if (!IS_ENABLED(CONFIG_DMI))
> return 0;
Build time dependencies aren't going to help anything, arm64 (and to my
understanding some future x86 systems, LynxPoint IIRC) supports both DT
and ACPI and so
Convert the NXP FlexSPI binding to DT schema format using json-schema.
Signed-off-by: Kuldeep Singh
---
.../bindings/spi/nxp,spi-nxp-fspi.yaml| 85 +++
.../devicetree/bindings/spi/spi-nxp-fspi.txt | 43 --
MAINTAINERS | 2 +-
3
On Wed, Mar 10, 2021 at 05:37:25PM +0100, Takashi Iwai wrote:
> Mark Brown wrote:
> > > did you mean if (!IS_ENABLED(CONFIG_ACPI)) ?
> > Is there a runtime check?
> Well, basically both DMI and ACPI are completely different things, so
> I don't think it's right to check the availability of ACPI
On Wed, Mar 10, 2021 at 5:08 PM Wolfram Sang wrote:
>
> On Wed, Mar 10, 2021 at 03:47:10PM +0100, Rafael J. Wysocki wrote:
> > On Fri, Mar 5, 2021 at 7:29 PM Rafael J. Wysocki wrote:
> > >
> > > From: Rafael J. Wysocki
> > >
> > > The ACPI_MODULE_NAME() definition is only used by the message
>
On 3/10/21 8:37 AM, kan.li...@linux.intel.com wrote:
> - err = perf_pmu_register(, "cpu", PERF_TYPE_RAW);
> - if (err)
> - goto out2;
> + if (!is_hybrid()) {
> + err = perf_pmu_register(, "cpu", PERF_TYPE_RAW);
> + if (err)
> +
On Wed 10 Mar 01:37 CST 2021, Rakesh Pillai wrote:
> Add the WPSS remoteproc node in dts for
> PIL loading.
>
> Signed-off-by: Rakesh Pillai
> ---
> - This change is dependent on the below patch series
> 1) https://lore.kernel.org/patchwork/project/lkml/list/?series=487403
> 2)
On Wed, 10 Mar 2021 17:03:40 +0800
Tony Lu wrote:
> On Tue, Mar 09, 2021 at 12:40:11PM -0500, Steven Rostedt wrote:
> > The above shows 10 bytes wasted for this event.
> >
>
> I use pahole to read vmlinux.o directly with defconfig and
> CONFIG_DEBUG_INFO enabled, the result shows 22
On Wed, Mar 10, 2021 at 9:38 AM Krzysztof Kozlowski
wrote:
> --- a/drivers/clk/socfpga/Kconfig
> +++ b/drivers/clk/socfpga/Kconfig
> @@ -1,6 +1,17 @@
> # SPDX-License-Identifier: GPL-2.0
> +config COMMON_CLK_SOCFPGA
> + bool "Intel SoCFPGA family clock support" if COMPILE_TEST &&
>
Add the DCC(Data Capture and Compare) device tree node entry along with
the addresses for register regions.
Signed-off-by: Souradeep Chowdhury
---
arch/arm64/boot/dts/qcom/sm8150.dtsi | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi
Added the entries for all the files added as a part of driver support for
DCC(Data Capture and Compare).
Signed-off-by: Souradeep Chowdhury
---
MAINTAINERS | 8
1 file changed, 8 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index d92f85c..fb28218 100644
--- a/MAINTAINERS
+++
Add the sysfs variables to expose the user-space functionality:
DCC enable, disable, address configuration and software triggers.
Also add the necessary methods implementing them.
Signed-off-by: Souradeep Chowdhury
---
drivers/soc/qcom/dcc.c | 1179
Documentation for Data Capture and Compare(DCC) device tree bindings
in yaml format.
Signed-off-by: Souradeep Chowdhury
---
.../devicetree/bindings/arm/msm/qcom,dcc.yaml | 49 ++
1 file changed, 49 insertions(+)
create mode 100644
The DCC is a DMA engine designed to store register values either in
case of a system crash or in case of software triggers manually done
by the user. Using DCC hardware and the sysfs interface of the driver
the user can exploit various functionalities of DCC. The user can specify
the register
The DCC is a DMA Engine designed to capture and store data
during a system crash or on software triggers. The DCC operates
based on linked-list entries which provide it with data and
addresses and the function it needs to perform. These
functions are read, write and loop. Added the basic driver
in this
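The linked-list programming model described above can be sketched as a tiny interpreter over an entry array. Everything here is invented for illustration (structures, names, and the mock register file are not the driver's; the loop operation is omitted): a write programs a register, a read captures its current value into an output buffer, as the DCC would on a trigger.

```c
#include <assert.h>
#include <stddef.h>

enum dcc_op { DCC_READ, DCC_WRITE };

struct dcc_entry {
	enum dcc_op op;
	unsigned int addr;	/* index into the mock register file */
	unsigned int val;	/* value for DCC_WRITE, unused for reads */
};

/* Walk the entry list; captured reads land in 'out'.  Returns the
 * number of values captured. */
static size_t dcc_run(const struct dcc_entry *list, size_t n,
		      unsigned int *regs, unsigned int *out)
{
	size_t i, captured = 0;

	for (i = 0; i < n; i++) {
		if (list[i].op == DCC_WRITE)
			regs[list[i].addr] = list[i].val;
		else
			out[captured++] = regs[list[i].addr];
	}
	return captured;
}

/* Program register 1, then capture registers 1 and 0. */
static int dcc_demo(void)
{
	unsigned int regs[4] = { 0 }, out[4];
	const struct dcc_entry prog[] = {
		{ DCC_WRITE, 1, 0xab },
		{ DCC_READ,  1, 0 },
		{ DCC_READ,  0, 0 },
	};

	return dcc_run(prog, 3, regs, out) == 2 &&
	       out[0] == 0xab && out[1] == 0;
}
```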
DCC(Data Capture and Compare) is a DMA engine designed for debugging purposes.
In case of a system
crash or manual software triggers by the user the DCC hardware stores the value
at the register
addresses which can be used for debugging purposes. The DCC driver provides the
user with sysfs
On Wed 10 Mar 01:28 CST 2021, Rakesh Pillai wrote:
> Add WPSS PIL loading support for SC7280 SoCs.
>
Acked-by: Bjorn Andersson
But can you please follow up with a patch that converts this to yaml?
Regards,
Bjorn
> Signed-off-by: Rakesh Pillai
> ---
>
On 10 Mar 2021, at 11:23, Michal Hocko wrote:
> On Mon 08-03-21 16:18:52, Mike Kravetz wrote:
> [...]
>> Converting larger to smaller hugetlb pages can be accomplished today by
>> first freeing the larger page to the buddy allocator and then allocating
>> the smaller pages. However, there are
On 09/03/2021 17:57, Marc Zyngier wrote:
On Mon, 01 Mar 2021 14:23:14 +,
Steven Price wrote:
The VMM may not wish to have its own mapping of guest memory mapped
with PROT_MTE because this causes problems if the VMM has tag checking
enabled (the guest controls the tags in physical RAM and
On Wed 10-03-21 08:05:36, Minchan Kim wrote:
> On Wed, Mar 10, 2021 at 02:07:05PM +0100, Michal Hocko wrote:
[...]
> > The is a lot of churn indeed. Have you considered adding $FOO_lglvl
> > variants for those so that you can use them for your particular case
> > without affecting most of existing
Hello,
syzbot found the following issue on:
HEAD commit:0d7588ab riscv: process: Fix no prototype for arch_dup_tas..
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git
fixes
console output: https://syzkaller.appspot.com/x/log.txt?x=1212c6e6d0
kernel config:
qianli zhao writes:
> Hi, Oleg
>
> Thanks for your reply.
>
>> To be honest, I don't understand the changelog. It seems that you want
>> to uglify the kernel to simplify the debugging of buggy init? Or what?
>
> My patch is for the following purpose:
> 1. I hope to fix the occurrence of
On Wed 10 Mar 01:28 CST 2021, Rakesh Pillai wrote:
> Add support for PIL loading of the WPSS processor for SC7280.
> WPSS boot will be requested by the wifi driver, hence
> auto-boot is disabled for WPSS. Also add a separate shutdown
> sequence handler for WPSS.
>
> Signed-off-by: Rakesh Pillai
> ---
From: Zhang Rui
Alder Lake RAPL support is the same as previous Sky Lake.
Add Alder Lake model for RAPL.
Reviewed-by: Andi Kleen
Signed-off-by: Zhang Rui
---
arch/x86/events/rapl.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
index
From: Kan Liang
Compared with the Rocket Lake, the CORE C1 Residency Counter is added
for Alder Lake, but the CORE C3 Residency Counter is removed. Other
counters are the same.
Create a new adl_cstates for Alder Lake. Update the comments
accordingly.
The External Design Specification (EDS) is
From: Kan Liang
The uncore subsystem for Alder Lake is similar to the previous Tiger
Lake.
The difference includes:
- New MSR addresses for global control, fixed counters, CBOX and ARB.
Add a new adl_uncore_msr_ops for uncore operations.
- Add a new threshold field for CBOX.
- New PCIIDs for
From: Kan Liang
Alder Lake Hybrid system has two different types of core, Golden Cove
core and Gracemont core. The Golden Cove core is registered to
"cpu_core" PMU. The Gracemont core is registered to "cpu_atom" PMU.
The difference between the two PMUs include:
- Number of GP and fixed counters
From: Kan Liang
Implement the filter_match callback for X86, which checks whether an event
is schedulable on the current CPU.
Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
arch/x86/events/core.c | 10 ++
arch/x86/events/perf_event.h | 1 +
2 files changed, 11 insertions(+)
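The check that callback performs can be sketched in isolation. This is an illustrative userspace sketch under invented types and names (not the kernel's): an event created against one hybrid PMU type is only schedulable on a CPU of that same type.

```c
#include <assert.h>

/* Hypothetical hybrid CPU types and a stripped-down event. */
enum cpu_type { CPU_CORE, CPU_ATOM };

struct hyb_event {
	enum cpu_type pmu_type;	/* PMU the event was created on */
};

/* The filter_match idea: schedulable iff the event's PMU type matches
 * the type of the CPU we are about to schedule it on. */
static int hyb_filter_match(const struct hyb_event *ev, enum cpu_type cur)
{
	return ev->pmu_type == cur;
}

static int hyb_demo(void)
{
	struct hyb_event ev = { CPU_CORE };

	return hyb_filter_match(&ev, CPU_CORE) &&
	       !hyb_filter_match(&ev, CPU_ATOM);
}
```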
From: Kan Liang
Current Hardware events and Hardware cache events have special perf
types, PERF_TYPE_HARDWARE and PERF_TYPE_HW_CACHE. The two types don't
pass the PMU type in the user interface. For a hybrid system, the perf
subsystem doesn't know which PMU the events belong to. The first
From: Kan Liang
PPERF and SMI_COUNT MSRs are also supported on Alder Lake.
The External Design Specification (EDS) is not published yet. It comes
from an authoritative internal source.
The patch has been tested on real hardware.
Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
From: Kan Liang
The attribute_group for Hybrid PMUs should be different from the previous
cpu PMU. For example, cpumask is required for a Hybrid PMU. The PMU type
should be included in the event and format attribute.
Add hybrid_attr_update for the Hybrid PMU.
Check the PMU type in is_visible()
From: Kan Liang
Hybrid PMUs have different events and formats. In theory, Hybrid PMU
specific attributes should be maintained in the dedicated struct
x86_hybrid_pmu, but it wastes space because the events and formats are
similar among Hybrid PMUs.
To reduce duplication, all hybrid PMUs will
From: Kan Liang
Different hybrid PMUs have different PMU capabilities and events. Perf
should register a dedicated PMU for each of them.
To check the X86 event, perf has to go through all possible hybrid pmus.
Only the PMU for the boot CPU is registered in init_hw_perf_events()
because the
From: Kan Liang
Each Hybrid PMU has to check its own number of counters and mask fixed
counters before registration.
The intel_pmu_check_num_counters will be reused later when registering a
dedicated hybrid PMU.
Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
From: Kan Liang
Different hybrid PMUs may have different extra registers, e.g. Core PMU
may have offcore registers, frontend register and ldlat register. Atom
core may only have offcore registers and ldlat register. Each hybrid PMU
should use its own extra_regs.
An Intel Hybrid system should
From: Kan Liang
Each Hybrid PMU has to check and update its own extra registers before
registration.
The intel_pmu_check_extra_regs will be reused later when registering a
dedicated hybrid PMU.
Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
arch/x86/events/intel/core.c | 37
From: Kan Liang
The temporary pmu assignment in event_init is unnecessary.
The assignment was introduced by commit 8113070d6639 ("perf_events:
Add fast-path to the rescheduling code"). At that time, event->pmu is
not assigned yet when initializing an event. The assignment is required.
However,
From: Kan Liang
The PMU capabilities are different among hybrid PMUs. Perf should dump
the PMU capabilities information for each hybrid PMU.
Factor out x86_pmu_show_pmu_cap() which shows the PMU capabilities
information. The function will be reused later when registering a
dedicated hybrid PMU.
From: Kan Liang
Each Hybrid PMU has to check and update its own event constraints before
registration.
The intel_pmu_check_event_constraints will be reused later when
registering a dedicated hybrid PMU.
Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
arch/x86/events/intel/core.c | 82
From: Kan Liang
The events are different among hybrid PMUs. Each hybrid PMU should use
its own event constraints.
Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
arch/x86/events/core.c | 3 ++-
arch/x86/events/intel/core.c | 5 +++--
arch/x86/events/intel/ds.c | 5 +++--
From: Kan Liang
The number of GP and fixed counters differs among hybrid PMUs.
Each hybrid PMU should use its own counter related information.
When handling a certain hybrid PMU, apply the number of counters from
the corresponding hybrid PMU.
When reserving the counters in the
From: Kan Liang
The unconstrained value depends on the number of GP and fixed counters.
Each hybrid PMU should use its own unconstrained.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Kan Liang
---
arch/x86/events/intel/core.c | 5 -
arch/x86/events/perf_event.h | 1 +
2 files
From: Kan Liang
The hardware cache events are different among hybrid PMUs. Each hybrid
PMU should have its own hw cache event table.
Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
arch/x86/events/core.c | 11 +--
arch/x86/events/perf_event.h | 9 +
2 files
From: Kan Liang
The intel_ctrl is the counter mask of a PMU. The PMU counter information
may be different among hybrid PMUs, each hybrid PMU should use its own
intel_ctrl to check and access the counters.
When handling a certain hybrid PMU, apply the intel_ctrl from the
corresponding hybrid
From: Kan Liang
Some platforms, e.g. Alder Lake, have hybrid architecture. In the same
package, there may be more than one type of CPU. The PMU capabilities
are different among different types of CPU. Perf will register a
dedicated PMU for each type of CPU.
Add a 'pmu' variable in the struct
From: Kan Liang
Some platforms, e.g. Alder Lake, have hybrid architecture. Although most
PMU capabilities are the same, there are still some unique PMU
capabilities for different hybrid PMUs. Perf should register a dedicated
pmu for each hybrid PMU.
Add a new struct x86_hybrid_pmu, which saves
From: Ricardo Neri
Add feature enumeration to identify a processor with Intel Hybrid
Technology: one in which CPUs of more than one type are in the same package.
On a hybrid processor, all CPUs support the same homogeneous (i.e.,
symmetric) instruction set. All CPUs enumerate the same features in
From: Ricardo Neri
On processors with Intel Hybrid Technology (i.e., one having more than one
type of CPU in the same package), all CPUs support the same instruction
set and enumerate the same features on CPUID. Thus, all software can run
on any CPU without restrictions. However, there may be
From: Kan Liang
Changes since V1:
- Drop all user space patches, which will be reviewed later separately.
- Don't save the CPU type in struct cpuinfo_x86. Instead, provide helper
functions to get parameters of hybrid CPUs. (Boris)
- Rework the perf kernel patches according to Peter's
On Wed, Mar 10, 2021 at 4:54 PM Krzysztof Kozlowski
wrote:
> On 10/03/2021 16:47, Krzysztof Kozlowski wrote:
> > This edac Altera driver is very weird... it uses the same compatible
> > differently depending on whether this is 32-bit or 64-bit (e.g. Stratix
> > 10)! On ARMv7 the compatible means for
Hi Atish,
On Thu, Nov 19, 2020 at 1:40 AM Atish Patra wrote:
> Currently, we perform some memory init functions in paging init. But,
> that will be an issue for NUMA support where DT needs to be flattened
> before numa initialization and memblock_present can only be called
> after numa
On 3/10/21 10:37 AM, Takashi Iwai wrote:
On Wed, 10 Mar 2021 17:18:14 +0100,
Mark Brown wrote:
On Wed, Mar 10, 2021 at 09:44:07AM -0600, Pierre-Louis Bossart wrote:
On 3/10/21 7:35 AM, Mark Brown wrote:
Just change it to a system level check for ACPI, checking for OF would
leave
On 3/10/2021 8:27 AM, Alan Stern wrote:
On Tue, Mar 09, 2021 at 08:04:53PM -0800, Asutosh Das (asd) wrote:
On 3/9/2021 7:14 PM, Alan Stern wrote:
On Tue, Mar 09, 2021 at 07:04:34PM -0800, Asutosh Das (asd) wrote:
Hello
I & Can (thanks CanG) debugged this further:
Looks like this issue can
On Tue, Mar 09, 2021 at 04:53:43PM +0100, Christoph Hellwig wrote:
> Just use the generic anon_inode file system.
Are you changing the lifetime rules for that module?
On Wed, 10 Mar 2021, Matti Vaittinen wrote:
>
> On Wed, 2021-03-10 at 13:31 +, Lee Jones wrote:
> > On Wed, 10 Mar 2021, Matti Vaittinen wrote:
> >
> > > On Wed, 2021-03-10 at 11:17 +, Lee Jones wrote:
> > > > On Wed, 10 Mar 2021, Vaittinen, Matti wrote:
> > > >
> > > > > Hello Lee,
>
> -Original Message-
> From: Andrew Lunn
> Sent: Wednesday, March 10, 2021 5:51 PM
> To: Stefan Chulski
> Cc: net...@vger.kernel.org; thomas.petazz...@bootlin.com;
> da...@davemloft.net; Nadav Haklai ; Yan
> Markman ; linux-kernel@vger.kernel.org;
> k...@kernel.org;
On Wed, 10 Mar 2021 17:18:14 +0100,
Mark Brown wrote:
>
> On Wed, Mar 10, 2021 at 09:44:07AM -0600, Pierre-Louis Bossart wrote:
> > On 3/10/21 7:35 AM, Mark Brown wrote:
>
> > > Just change it to a system level check for ACPI, checking for OF would
> > > leave problems for board files or any
On Tue, Mar 09, 2021 at 04:53:42PM +0100, Christoph Hellwig wrote:
> Just use the generic anon_inode file system.
Umm... The only problem I see here is the lifetime rules for
that module, and that's not something introduced in this patchset.
Said that, looks like the logic around that place is
Disabling GFXOFF via the quirk list fixes a hardware lockup in
Ryzen V1605B, RAVEN 0x1002:0x15DD rev 0x83.
Signed-off-by: Daniel Gomez
---
This patch is a continuation of the work here:
https://lkml.org/lkml/2021/2/3/122 where a hardware lockup was discussed and
a dma_fence deadlock was provoked
On Mar 5, 2021, at 02:43, Borislav Petkov wrote:
> On Sat, Feb 27, 2021 at 08:59:08AM -0800, Chang S. Bae wrote:
>> Historically, signal.h defines MINSIGSTKSZ (2KB) and SIGSTKSZ (8KB), for
>> use by all architectures with sigaltstack(2). Over time, the hardware state
>> size grew, but these
On Wed, 10 Mar 2021, Xu Yilun wrote:
> This patchset is some improvements for intel-m10-bmc and its subdevs.
>
> Main changes from v1:
> - Add a patch (#2) to simplify the definition of the legacy version reg.
> - Add a patch (#4), add entry in MAINTAINERS for intel-m10-bmc mfd driver
> and
> @@ -9298,10 +9291,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem
> *mmio_base, unsigned int irq)
> /* Get UFS version supported by the controller */
> hba->ufs_version = ufshcd_get_ufs_version(hba);
>
> - if ((hba->ufs_version != UFSHCI_VERSION_10) &&
> -
Tested on the OnePlus 7 Pro (including DMA).
Signed-off-by: Caleb Connolly
---
arch/arm64/boot/dts/qcom/sm8150.dtsi | 521 +++
1 file changed, 521 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi
b/arch/arm64/boot/dts/qcom/sm8150.dtsi
index
Hook up the SMMU for doing DMA over i2c. Some peripherals like
touchscreens easily exceed 32 bytes per transfer, causing errors and
lockups without this.
Signed-off-by: Caleb Connolly
---
Fixes i2c on the OnePlus 7, without this touching the screen with more
than 4 fingers causes the device to
Add the first and third qupv3 nodes used to hook
up peripherals on some devices.
Signed-off-by: Caleb Connolly
---
arch/arm64/boot/dts/qcom/sm8150.dtsi | 25 +
1 file changed, 25 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi
On Wed, 10 Mar 2021 17:03:40 +0800
Tony Lu wrote:
> I use pahole to read vmlinux.o directly with defconfig and
> CONFIG_DEBUG_INFO enabled, the result shows 22 structs prefixed with
> trace_event_raw_ that have at least one hole.
I was thinking of pahole too ;-)
But the information can also be
On Mar 1, 2021, at 11:09, Borislav Petkov wrote:
> On Sat, Feb 27, 2021 at 08:59:06AM -0800, Chang S. Bae wrote:
>>
>> diff --git a/include/uapi/linux/auxvec.h b/include/uapi/linux/auxvec.h
>> index abe5f2b6581b..15be98c75174 100644
>> --- a/include/uapi/linux/auxvec.h
>> +++
On Wed, Mar 10, 2021 at 11:47 PM Viresh Kumar wrote:
>
> On 10-03-21, 20:24, Masahiro Yamada wrote:
> > On Wed, Mar 10, 2021 at 2:35 PM Viresh Kumar
> > wrote:
> > > diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
> > > index bc045a54a34e..59e86f67f9e0 100644
> > > ---
On Tue, Mar 09, 2021 at 08:04:53PM -0800, Asutosh Das (asd) wrote:
> On 3/9/2021 7:14 PM, Alan Stern wrote:
> > On Tue, Mar 09, 2021 at 07:04:34PM -0800, Asutosh Das (asd) wrote:
> > > Hello
> > > I & Can (thanks CanG) debugged this further:
> > >
> > > Looks like this issue can occur if the sd
Adjust the rss_stat tracepoint to print the name of the resident page type
that got updated (e.g. MM_ANONPAGES/MM_FILEPAGES), rather than the numeric
index corresponding to it (the __entry->member value):
Before this patch:
--
rss_stat: mm_id=1216113068 curr=0 member=1 size=28672B
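The index-to-name mapping that change relies on (done with __print_symbolic() in the tracepoint) can be sketched as a plain lookup. The enum order below is an assumption made for the example, matching the member=1 == MM_ANONPAGES shown above.

```c
#include <assert.h>
#include <string.h>

/* Assumed MM_* counter order for this sketch. */
enum { MM_FILEPAGES, MM_ANONPAGES, MM_SWAPENTS, MM_SHMEMPAGES };

/* Map the numeric member index to a printable name, as the reworked
 * tracepoint output does. */
static const char *rss_member_name(int member)
{
	switch (member) {
	case MM_FILEPAGES:	return "MM_FILEPAGES";
	case MM_ANONPAGES:	return "MM_ANONPAGES";
	case MM_SWAPENTS:	return "MM_SWAPENTS";
	case MM_SHMEMPAGES:	return "MM_SHMEMPAGES";
	default:		return "UNKNOWN";
	}
}
```

With this, `member=1` in the example line renders as `member=MM_ANONPAGES`, which is readable without consulting the enum.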
On Wed, Mar 10, 2021 at 04:07:00AM +0900, Masahiro Yamada wrote:
> On Wed, Mar 10, 2021 at 12:10 AM Michal Suchánek wrote:
> >
> > On Tue, Mar 09, 2021 at 11:53:21PM +0900, Masahiro Yamada wrote:
> > > On Tue, Mar 9, 2021 at 10:35 PM Michal Suchánek wrote:
> > > >
> > > > On Tue, Mar 09, 2021 at
On Mon 08-03-21 16:18:52, Mike Kravetz wrote:
[...]
> Converting larger to smaller hugetlb pages can be accomplished today by
> first freeing the larger page to the buddy allocator and then allocating
> the smaller pages. However, there are two issues with this approach:
> 1) This process can
The following speed modes are now supported in J7200 SoC,
- HS200 and HS400 modes at 1.8 V card voltage, in MMCSD0 subsystem [1].
- UHS-I speed modes in MMCSD1 subsystem [1].
Add support for UHS-I modes by adding voltage regulator device tree nodes
and corresponding pinmux details, to power cycle
On 3/9/2021 10:51 AM, liuqi (BA) wrote:
Hi Alexander,
On 2021/2/3 21:58, Alexander Antonov wrote:
This functionality is based on recently introduced sysfs attributes
for Intel® Xeon® Scalable processor family (code name Skylake-SP):
Commit bb42b3d39781 ("perf/x86/intel/uncore: Expose an
From: Faiz Abbas
There are 6 gpio instances inside the SoC with 2 groups as shown below:
Group one: wkup_gpio0, wkup_gpio1
Group two: main_gpio0, main_gpio2, main_gpio4, main_gpio6
Only one instance from each group can be used at a time. So use main_gpio0
and wkup_gpio0 in current linux
From: Faiz Abbas
There are 4 instances of gpio modules in main domain:
gpio0, gpio2, gpio4 and gpio6
Groups are created to provide protection between different processor
virtual worlds. Each of these modules' I/O pins are muxed within the
group. Exactly one module can be selected to