Re: [PATCH] drm/amd/display: Clear dm_state for fast updates
On Monday, July 27, 2020 1:40 AM, Mazin Rezk wrote: > This patch fixes a race condition that causes a use-after-free during > amdgpu_dm_atomic_commit_tail. This can occur when 2 non-blocking commits > are requested and the second one finishes before the first. Essentially, > this bug occurs when the following sequence of events happens: > > 1. Non-blocking commit #1 is requested w/ a new dm_state #1 and is > deferred to the workqueue. > > 2. Non-blocking commit #2 is requested w/ a new dm_state #2 and is > deferred to the workqueue. > > 3. Commit #2 starts before commit #1, dm_state #1 is used in the > commit_tail and commit #2 completes, freeing dm_state #1. > > 4. Commit #1 starts after commit #2 completes, uses the freed dm_state > 1 and dereferences a freelist pointer while setting the context. > > Since this bug has only been spotted with fast commits, this patch fixes > the bug by clearing the dm_state instead of using the old dc_state for > fast updates. In addition, since dm_state is only used for its dc_state > and amdgpu_dm_atomic_commit_tail will retain the dc_state if none is found, > removing the dm_state should not have any consequences in fast updates. > > This use-after-free bug has existed for a while now, but only caused a > noticeable issue starting from 5.7-rc1 due to 3202fa62f ("slub: relocate > freelist pointer to middle of object") moving the freelist pointer from > dm_state->base (which was unused) to dm_state->context (which is > dereferenced). 
> > Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=207383 > Fixes: bd200d190f45 ("drm/amd/display: Don't replace the dc_state for fast > updates") > Reported-by: Duncan <1i5t5.dun...@cox.net> > Signed-off-by: Mazin Rezk > --- > .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 36 ++- > 1 file changed, 27 insertions(+), 9 deletions(-) > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c > index 86ffa0c2880f..710edc70e37e 100644 > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c > @@ -8717,20 +8717,38 @@ static int amdgpu_dm_atomic_check(struct drm_device > *dev, >* the same resource. If we have a new DC context as part of >* the DM atomic state from validation we need to free it and >* retain the existing one instead. > + * > + * Furthermore, since the DM atomic state only contains the DC > + * context and can safely be annulled, we can free the state > + * and clear the associated private object now to free > + * some memory and avoid a possible use-after-free later. >*/ > - struct dm_atomic_state *new_dm_state, *old_dm_state; > > - new_dm_state = dm_atomic_get_new_state(state); > - old_dm_state = dm_atomic_get_old_state(state); > + for (i = 0; i < state->num_private_objs; i++) { > + struct drm_private_obj *obj = > state->private_objs[i].ptr; > > - if (new_dm_state && old_dm_state) { > - if (new_dm_state->context) > - dc_release_state(new_dm_state->context); > + if (obj->funcs == adev->dm.atomic_obj.funcs) { > + int j = state->num_private_objs-1; > > - new_dm_state->context = old_dm_state->context; > + dm_atomic_destroy_state(obj, > + state->private_objs[i].state); > + > + /* If i is not at the end of the array then the > + * last element needs to be moved to where i was > + * before the array can safely be truncated. 
> + */ > + if (i != j) > + state->private_objs[i] = > + state->private_objs[j]; > > - if (old_dm_state->context) > - dc_retain_state(old_dm_state->context); > + state->private_objs[j].ptr = NULL; > + state->private_objs[j].state = NULL; > + state->private_objs[j].old_state = NULL; > + state->private_objs[j].new_state = NULL; > + > + state->num_private_objs = j; > + break; > + } > } > } > > -- > 2.27.0 > I have tested this on 5.8.0-rc6 w/ an RX 480 for 8 hours and I have not had the crash described in the Bugzilla thread. I will also be running this patch on my kernel for the next couple of days to further confirm that this is working. In addition, I will ask the users in the Bugzilla thread to test this
Re: [PATCH v2 03/18] gpiolib: make cdev a build option
On Mon, Jul 27, 2020 at 09:46:01AM +0800, Kent Gibson wrote: > On Mon, Jul 27, 2020 at 12:25:53AM +0200, Linus Walleij wrote: > > On Sat, Jul 25, 2020 at 6:21 AM Kent Gibson wrote: > > > > > +config GPIO_CDEV > > > + bool "/dev/gpiochipN (character device interface)" > > > + default y > > > > I don't want to make it too easy to do this, as I see it as a standard > > kernel feature. > > > > Can we add: > > > > depends on EXPERT > > > > as with other standard kernel features? > > > > Fair enough. > > But what of the GPIO_CDEV_V1 option to disable uAPI V1 added in patch 04, > and that depends on GPIO_CDEV? > That is equivalent to GPIO_SYSFS, which is not dependent on EXPERT, > so I'll need to restructure the dependencies so it doesn't > inherit the EXPERT dependency. > Unless you also want it to be dependent on EXPERT. > I've gone with this: +config GPIO_CDEV + bool + prompt "Character device (/dev/gpiochipN) support" if EXPERT + default y so the entry is always present in menuconfig, and GPIO_CDEV_V1 can still depend on it, but GPIO_CDEV can only be disabled if EXPERT is set. > Hmmm, and maybe patch 04 should be later in the series - after V2 is > fully implemented and V1 is deprecated - around patch 11. > Just ignore me - the earlier code patches need the define else the V1 will be compiled out. Cheers, Kent.
Re: [PATCH 4/4] x86/cpu: Use SERIALIZE in sync_core() when available
On July 26, 2020 9:31:32 PM PDT, Ricardo Neri wrote: >The SERIALIZE instruction gives software a way to force the processor >to >complete all modifications to flags, registers and memory from previous >instructions and drain all buffered writes to memory before the next >instruction is fetched and executed. Thus, it serves the purpose of >sync_core(). Use it when available. > >Use boot_cpu_has() and not static_cpu_has(); the most critical paths >(returning to user mode and from interrupt and NMI) will not reach >sync_core(). > >Cc: Andy Lutomirski >Cc: Cathy Zhang >Cc: Dave Hansen >Cc: Fenghua Yu >Cc: "H. Peter Anvin" >Cc: Kyung Min Park >Cc: Peter Zijlstra >Cc: "Ravi V. Shankar" >Cc: Sean Christopherson >Cc: linux-e...@vger.kernel.org >Cc: linux-kernel@vger.kernel.org >Reviwed-by: Tony Luck >Suggested-by: Andy Lutomirski >Signed-off-by: Ricardo Neri >--- >--- > arch/x86/include/asm/special_insns.h | 5 + > arch/x86/include/asm/sync_core.h | 10 +- > 2 files changed, 14 insertions(+), 1 deletion(-) > >diff --git a/arch/x86/include/asm/special_insns.h >b/arch/x86/include/asm/special_insns.h >index 59a3e13204c3..0a2a60bba282 100644 >--- a/arch/x86/include/asm/special_insns.h >+++ b/arch/x86/include/asm/special_insns.h >@@ -234,6 +234,11 @@ static inline void clwb(volatile void *__p) > > #define nop() asm volatile ("nop") > >+static inline void serialize(void) >+{ >+ asm volatile(".byte 0xf, 0x1, 0xe8"); >+} >+ > #endif /* __KERNEL__ */ > > #endif /* _ASM_X86_SPECIAL_INSNS_H */ >diff --git a/arch/x86/include/asm/sync_core.h >b/arch/x86/include/asm/sync_core.h >index fdb5b356e59b..bf132c09d61b 100644 >--- a/arch/x86/include/asm/sync_core.h >+++ b/arch/x86/include/asm/sync_core.h >@@ -5,6 +5,7 @@ > #include > #include > #include >+#include > > #ifdef CONFIG_X86_32 > static inline void iret_to_self(void) >@@ -54,7 +55,8 @@ static inline void iret_to_self(void) > static inline void sync_core(void) > { > /* >- * There are quite a few ways to do this. 
IRET-to-self is nice
>+ * Hardware can do this for us if SERIALIZE is available. Otherwise,
>+ * there are quite a few ways to do this. IRET-to-self is nice
>* because it works on every CPU, at any CPL (so it's compatible
>* with paravirtualization), and it never exits to a hypervisor.
>* The only down sides are that it's a bit slow (it seems to be
>@@ -75,6 +77,12 @@ static inline void sync_core(void)
>* Like all of Linux's memory ordering operations, this is a
>* compiler barrier as well.
>*/
>+
>+ if (boot_cpu_has(X86_FEATURE_SERIALIZE)) {
>+ serialize();
>+ return;
>+ }
>+
> iret_to_self();
> }

Any reason not to make sync_core() an inline with alternatives? For a really overengineered solution, but one which might perform unnecessarily poorly on existing hardware:

asm volatile("1: .byte 0xf, 0x1, 0xe8; 2:" _ASM_EXTABLE(1b,2b));

-- Sent from my Android device with K-9 Mail. Please excuse my brevity.
[PATCH] checkpatch: disable commit log length check warning for signature tag
Disable the commit log length check in the case of a signature tag. If the
commit log line has a valid signature tag such as "Reported-and-tested-by"
with more than 75 characters, suppress the long-length warning.

For instance in commit ac854131d984 ("USB: core: Fix misleading driver bug
report"), the corresponding patch contains a "Reported-and-tested-by" tag
line which exceeds 75 chars, and there is no valid way to shorten it.

Signed-off-by: Nachiket Naganure
---
 scripts/checkpatch.pl | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 197436b20288..46237e9e0550 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -2806,6 +2806,8 @@ sub process {
 # filename then :
 $line =~ /^\s*(?:Fixes:|Link:)/i ||
 # A Fixes: or Link: line
+ $line =~ /$signature_tags/ ||
+ # Check for signature_tags
 $commit_log_possible_stack_dump)) {
 WARN("COMMIT_LOG_LONG_LINE",
 "Possible unwrapped commit description (prefer a maximum 75 chars per line)\n" . $herecurr);
--
2.25.1
Re: [PATCH v4 4/5] arm64: dts: sdm845: Add OPP tables and power-domains for venus
On 7/24/2020 7:39 PM, Stanimir Varbanov wrote: Hi, On 7/23/20 9:06 PM, Stanimir Varbanov wrote: Hi Rajendra, After applying 2,3 and 4/5 patches on linaro-integration v5.8-rc2 I see below messages on db845: qcom-venus aa0.video-codec: dev_pm_opp_set_rate: failed to find current OPP for freq 53397 (-34) ^^^ This one is new. qcom_rpmh TCS Busy, retrying RPMH message send: addr=0x3 ^^^ and this message is annoying, can we make it pr_debug in rpmh? On 7/23/20 2:26 PM, Rajendra Nayak wrote: Add the OPP tables in order to be able to vote on the performance state of a power-domain. Signed-off-by: Rajendra Nayak --- arch/arm64/boot/dts/qcom/sdm845.dtsi | 40 ++-- 1 file changed, 38 insertions(+), 2 deletions(-) diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi index e506793..5ca2265 100644 --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi @@ -3631,8 +3631,10 @@ interrupts = ; power-domains = < VENUS_GDSC>, < VCODEC0_GDSC>, - < VCODEC1_GDSC>; - power-domain-names = "venus", "vcodec0", "vcodec1"; + < VCODEC1_GDSC>, + < SDM845_CX>; + power-domain-names = "venus", "vcodec0", "vcodec1", "cx"; + operating-points-v2 = <_opp_table>; clocks = < VIDEO_CC_VENUS_CTL_CORE_CLK>, < VIDEO_CC_VENUS_AHB_CLK>, < VIDEO_CC_VENUS_CTL_AXI_CLK>, @@ -3654,6 +3656,40 @@ video-core1 { compatible = "venus-encoder"; }; + + venus_opp_table: venus-opp-table { + compatible = "operating-points-v2"; + + opp-1 { + opp-hz = /bits/ 64 <1>; + required-opps = <_opp_min_svs>; + }; + + opp-2 { + opp-hz = /bits/ 64 <2>; + required-opps = <_opp_low_svs>; + }; + + opp-32000 { + opp-hz = /bits/ 64 <32000>; + required-opps = <_opp_svs>; + }; + + opp-38000 { + opp-hz = /bits/ 64 <38000>; + required-opps = <_opp_svs_l1>; + }; + + opp-44400 { + opp-hz = /bits/ 64 <44400>; + required-opps = <_opp_nom>; + }; + + opp-53300 { + opp-hz = /bits/ 64 <53300>; Actually it comes from videocc, where ftbl_video_cc_venus_clk_src defines 53300 but the real 
calculated freq is 53397. I still don't quite understand why the videocc driver returns this frequency despite this not being in the freq table. I would expect a clk_round_rate() when called with 53397 to return a 53300. Taniya, Do you know why? If I change to opp-hz = /bits/ 64 <53397> the error disappear. I guess we have to revisit m/n and/or pre-divider for this freq when the source pll is P_VIDEO_PLL0_OUT_MAIN PLL? + required-opps = <_opp_turbo>; + }; + }; }; videocc: clock-controller@ab0 { -- QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation
Re: linux-next: Fixes tag needs some work in the devfreq tree
Hi, On 7/27/20 1:17 PM, Stephen Rothwell wrote: > Hi, > > On Mon, 27 Jul 2020 13:24:28 +1000 Stephen Rothwell > wrote: >> >> Hi all, >> >> In commit >> >> 332c5b522b7c ("PM / devfrq: Fix indentaion of devfreq_summary debugfs >> node") > > This is now commit 470fa173646f > >> Fixes tag >> >> Fixes: commit 66d0e797bf09 ("Revert "PM / devfreq: Modify the device name >> as devfreq(X) for sysfs"") >> >> has these problem(s): >> >> - leading word 'commit' unexpected > Thanks for pointing out. I fixed it. Thanks. -- Best Regards, Chanwoo Choi Samsung Electronics
Re: [PATCH] rtlwifi: core: use eth_broadcast_addr() to assign broadcast
On Mon, 2020-07-27 at 02:16 +, Xu Wang wrote:
> This patch is to use eth_broadcast_addr() to assign the broadcast address
> instead of memcpy().
>
> Signed-off-by: Xu Wang
> ---
> drivers/net/wireless/realtek/rtlwifi/core.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/net/wireless/realtek/rtlwifi/core.c
> b/drivers/net/wireless/realtek/rtlwifi/core.c
> index 4dd82c6052f0..8bb49b77b5c8 100644
> --- a/drivers/net/wireless/realtek/rtlwifi/core.c
> +++ b/drivers/net/wireless/realtek/rtlwifi/core.c
> @@ -1512,7 +1512,6 @@ static int rtl_op_set_key(struct ieee80211_hw *hw, enum
> set_key_cmd cmd,
> bool wep_only = false;
> int err = 0;
> u8 mac_addr[ETH_ALEN];
> - u8 bcast_addr[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
>
> rtlpriv->btcoexist.btc_info.in_4way = false;
>

'bcast_addr' is also used by the debug path:

RT_TRACE(rtlpriv, COMP_SEC, DBG_DMESG,
"%s hardware based encryption for keyidx: %d, mac: %pM\n",
cmd == SET_KEY ? "Using" : "Disabling", key->keyidx,
sta ? sta->addr : bcast_addr);

If you turn on CONFIG_RTLWIFI_DEBUG, the compiler will report an error. So, NACK.

> @@ -1634,7 +1633,7 @@ static int rtl_op_set_key(struct ieee80211_hw *hw, enum
> set_key_cmd cmd,
> memcpy(rtlpriv->sec.key_buf[key_idx],
> key->key, key->keylen);
> rtlpriv->sec.key_len[key_idx] = key->keylen;
> - memcpy(mac_addr, bcast_addr, ETH_ALEN);
> + eth_broadcast_addr(mac_addr);
> } else {/* pairwise key */
> RT_TRACE(rtlpriv, COMP_SEC, DBG_DMESG,
> "set pairwise key\n");
Re: [PATCH 1/2] fsi/sbefifo: Clean up correct FIFO when receiving reset request from SBE
On Fri, 2020-07-24 at 16:45 +0930, Joel Stanley wrote:
> From: Joachim Fenkes
>
> When the SBE requests a reset via the down FIFO, that is also the
> FIFO we should go and reset ;)

Is it? I no longer work for IBM and don't have access to any of the documentation here, but I had vague memories that we would get a reset request in the down FIFO in order to reset the up one, since we control the up one and the host controls the down one, no?

Cheers,
Ben.

> Fixes: 9f4a8a2d7f9d ("fsi/sbefifo: Add driver for the SBE FIFO")
> Signed-off-by: Joachim Fenkes
> Signed-off-by: Joel Stanley
> ---
> drivers/fsi/fsi-sbefifo.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/fsi/fsi-sbefifo.c b/drivers/fsi/fsi-sbefifo.c
> index f54df9ebc8b3..655b45c1f6ba 100644
> --- a/drivers/fsi/fsi-sbefifo.c
> +++ b/drivers/fsi/fsi-sbefifo.c
> @@ -400,7 +400,7 @@ static int sbefifo_cleanup_hw(struct sbefifo
> *sbefifo)
> /* The FIFO already contains a reset request from the SBE ? */
> if (down_status & SBEFIFO_STS_RESET_REQ) {
> dev_info(dev, "Cleanup: FIFO reset request set,
> resetting\n");
> - rc = sbefifo_regw(sbefifo, SBEFIFO_UP,
> SBEFIFO_PERFORM_RESET);
> + rc = sbefifo_regw(sbefifo, SBEFIFO_DOWN,
> SBEFIFO_PERFORM_RESET);
> if (rc) {
> sbefifo->broken = true;
> dev_err(dev, "Cleanup: Reset reg write failed,
> rc=%d\n", rc);
Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
Hi, Change looks good to me. Reviewed-by: Maulik Shah Thanks, Maulik On 7/25/2020 2:47 AM, Stephen Boyd wrote: From: Stephen Boyd The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread is an irqthread and the tcs_tx_done() irq is also an irqthread. That's because kernel irqthreads are SCHED_FIFO and thus need to voluntarily give up priority by calling into the scheduler so that other threads can run. I see RCU stalls when I boot with irqthreads on the kernel commandline because the modem remoteproc driver is trying to send an rpmh async message from an irqthread that needs to give up the CPU for the rpmh irqthread to run and clear out tcs slots. rcu: INFO: rcu_preempt self-detected stall on CPU rcu: 0-: (1 GPs behind) idle=402/1/0x4002 softirq=2108/2109 fqs=4920 (t=21016 jiffies g=2933 q=590) Task dump for CPU 0: irq/11-smp2pR running task0 148 2 0x0028 Call trace: dump_backtrace+0x0/0x154 show_stack+0x20/0x2c sched_show_task+0xfc/0x108 dump_cpu_task+0x44/0x50 rcu_dump_cpu_stacks+0xa4/0xf8 rcu_sched_clock_irq+0x7dc/0xaa8 update_process_times+0x30/0x54 tick_sched_handle+0x50/0x64 tick_sched_timer+0x4c/0x8c __hrtimer_run_queues+0x21c/0x36c hrtimer_interrupt+0xf0/0x22c arch_timer_handler_phys+0x40/0x50 handle_percpu_devid_irq+0x114/0x25c __handle_domain_irq+0x84/0xc4 gic_handle_irq+0xd0/0x178 el1_irq+0xbc/0x180 save_return_addr+0x18/0x28 return_address+0x54/0x88 preempt_count_sub+0x40/0x88 _raw_spin_unlock_irqrestore+0x4c/0x6c ___ratelimit+0xd0/0x128 rpmh_rsc_send_data+0x24c/0x378 __rpmh_write+0x1b0/0x208 rpmh_write_async+0x90/0xbc rpmhpd_send_corner+0x60/0x8c rpmhpd_aggregate_corner+0x8c/0x124 rpmhpd_set_performance_state+0x8c/0xbc _genpd_set_performance_state+0xdc/0x1b8 dev_pm_genpd_set_performance_state+0xb8/0xf8 q6v5_pds_disable+0x34/0x60 [qcom_q6v5_mss] qcom_msa_handover+0x38/0x44 [qcom_q6v5_mss] 
q6v5_handover_interrupt+0x24/0x3c [qcom_q6v5] handle_nested_irq+0xd0/0x138 qcom_smp2p_intr+0x188/0x200 irq_thread_fn+0x2c/0x70 irq_thread+0xfc/0x14c kthread+0x11c/0x12c ret_from_fork+0x10/0x18 This busy loop naturally lends itself to using a wait queue so that each thread that tries to send a message will sleep waiting on the waitqueue and only be woken up when a free slot is available. This should make things more predictable too because the scheduler will be able to sleep tasks that are waiting on a free tcs instead of the busy loop we currently have today. Cc: Douglas Anderson Cc: Maulik Shah Cc: Lina Iyer Signed-off-by: Stephen Boyd --- Changes in v2: * Document tcs_wait * Move wake_up() outside of the spinlock * Document claim_tcs_for_req() drivers/soc/qcom/rpmh-internal.h | 4 ++ drivers/soc/qcom/rpmh-rsc.c | 115 +++ 2 files changed, 58 insertions(+), 61 deletions(-) diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h index ef60e790a750..344ba687c13b 100644 --- a/drivers/soc/qcom/rpmh-internal.h +++ b/drivers/soc/qcom/rpmh-internal.h @@ -8,6 +8,7 @@ #define __RPM_INTERNAL_H__ #include +#include #include #define TCS_TYPE_NR 4 @@ -106,6 +107,8 @@ struct rpmh_ctrlr { * @lock: Synchronize state of the controller. If RPMH's cache * lock will also be held, the order is: drv->lock then * cache_lock. + * @tcs_wait: Wait queue used to wait for @tcs_in_use to free up a + * slot * @client: Handle to the DRV's client. 
*/ struct rsc_drv { @@ -118,6 +121,7 @@ struct rsc_drv { struct tcs_group tcs[TCS_TYPE_NR]; DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR); spinlock_t lock; + wait_queue_head_t tcs_wait; struct rpmh_ctrlr client; }; diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c index 076fd27f3081..84a27b884af0 100644 --- a/drivers/soc/qcom/rpmh-rsc.c +++ b/drivers/soc/qcom/rpmh-rsc.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include @@ -445,6 +446,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p) if (!drv->tcs[ACTIVE_TCS].num_tcs) enable_tcs_irq(drv, i, false); spin_unlock(>lock); + wake_up(>tcs_wait); if (req) rpmh_tx_done(req, err); } @@ -563,73 +565,34 @@ static int find_free_tcs(struct tcs_group *tcs) } /** - * tcs_write() - Store messages into a TCS right now, or return -EBUSY. + * claim_tcs_for_req() - Claim a tcs in the given tcs_group; only for active. * @drv: The controller. + * @tcs: The tcs_group used for ACTIVE_ONLY
Re: [PATCH 18/23] init: open code setting up stdin/stdout/stderr
On Mon, Jul 27, 2020 at 04:05:34AM +0100, Al Viro wrote: > On Tue, Jul 14, 2020 at 09:04:22PM +0200, Christoph Hellwig wrote: > > Don't rely on the implicit set_fs(KERNEL_DS) for ksys_open to work, but > > instead open a struct file for /dev/console and then install it as FD > > 0/1/2 manually. > > I really hate that one. Every time we exposed the internal details to > the fucking early init code, we paid for that afterwards. And this > goes over the top wrt the level of details being exposed. > > _IF_ you want to keep that thing, move it to fs/file.c, with dire comment > re that being very special shite for init and likely cause of subsequent > trouble whenever anything gets changed, a gnat farts somewhere, etc. Err, while I'm all for keeping internals internal, fd_install and get_unused_fd_flags are exported routines with tons of users of this pattern all over.
Re: [RESEND PATCH] usb: common: usb-conn-gpio: Register optional charger
On Sun, 2020-07-26 at 12:27 +0200, Paul Cercueil wrote: > > Le dim. 26 juil. 2020 à 13:14, Andy Shevchenko > a écrit : > > On Mon, Jun 22, 2020 at 1:51 AM Paul Cercueil > > wrote: > >> > >> Register a power supply charger, if the Kconfig option > >> USB_CONN_GPIO_CHARGER is set, whose online state depends on whether > >> the USB role is set to device or not. > >> > >> This is useful when the USB role is the only way to know if the > >> device > >> is charging from USB. The API is the standard power supply charger > >> API, > >> you get a /sys/class/power_supply/xxx/online node which tells you > >> the > >> state of the charger. > >> > >> The sole purpose of this is to give userspace applications a way to > >> know whether or not the charger is plugged. > > > > I'm not sure I understand the purpose of this (third?) way to detect > > USB charger and notify user space about. > > Why is extcon not good enough? > > We can't have extcon and USB role detection at the same time. > > -Paul > > >> Signed-off-by: Paul Cercueil > >> --- > >> drivers/usb/common/Kconfig | 11 +++ > >> drivers/usb/common/usb-conn-gpio.c | 47 > >> ++ > >> 2 files changed, 58 insertions(+) > >> > >> diff --git a/drivers/usb/common/Kconfig b/drivers/usb/common/Kconfig > >> index d611477aae41..5405ae96c68f 100644 > >> --- a/drivers/usb/common/Kconfig > >> +++ b/drivers/usb/common/Kconfig > >> @@ -49,3 +49,14 @@ config USB_CONN_GPIO > >> > >>To compile the driver as a module, choose M here: the > >> module will > >>be called usb-conn-gpio.ko > >> + > >> +if USB_CONN_GPIO > >> + > >> +config USB_CONN_GPIO_CHARGER > >> + bool "USB charger support" > >> + select POWER_SUPPLY > >> + help > >> + Register a charger with the power supply subsystem. This > >> will allow > >> + userspace to know whether or not the device is charging > >> from USB. 
> >> +
> >> +endif
> >> diff --git a/drivers/usb/common/usb-conn-gpio.c
> >> b/drivers/usb/common/usb-conn-gpio.c
> >> index ed204cbb63ea..129d48db280b 100644
> >> --- a/drivers/usb/common/usb-conn-gpio.c
> >> +++ b/drivers/usb/common/usb-conn-gpio.c
> >> @@ -17,6 +17,7 @@
> >> #include
> >> #include
> >> #include
> >> +#include
> >> #include
> >> #include
> >>
> >> @@ -38,6 +39,9 @@ struct usb_conn_info {
> >> struct gpio_desc *vbus_gpiod;
> >> int id_irq;
> >> int vbus_irq;
> >> +
> >> + struct power_supply_desc desc;
> >> + struct power_supply *charger;
> >> };
> >>
> >> /**
> >> @@ -98,6 +102,8 @@ static void usb_conn_detect_cable(struct
> >> work_struct *work)
> >> ret = regulator_enable(info->vbus);
> >> if (ret)
> >> dev_err(info->dev, "enable vbus regulator
> >> failed\n");
> >> + } else if (IS_ENABLED(CONFIG_USB_CONN_GPIO_CHARGER)) {
> >> + power_supply_changed(info->charger);
> >> }
> >>
> >> info->last_role = role;
> >> @@ -121,10 +127,35 @@ static irqreturn_t usb_conn_isr(int irq, void
> >> *dev_id)
> >> return IRQ_HANDLED;
> >> }
> >>
> >> +static enum power_supply_property usb_charger_properties[] = {
> >> + POWER_SUPPLY_PROP_ONLINE,
> >> +};
> >> +
> >> +static int usb_charger_get_property(struct power_supply *psy,
> >> + enum power_supply_property psp,
> >> + union power_supply_propval *val)
> >> +{
> >> + struct usb_conn_info *info = power_supply_get_drvdata(psy);
> >> +
> >> + switch (psp) {
> >> + case POWER_SUPPLY_PROP_ONLINE:
> >> + val->intval = info->last_role == USB_ROLE_DEVICE;

What will happen if info->last_role is not changed here? I prefer that it only be changed by usb_conn_isr(); if it is changed by other drivers, for example through power_supply_get_property(), a role switch may be skipped.
> >> + break; > >> + default: > >> + return -EINVAL; > >> + } > >> + > >> + return 0; > >> +} > >> + > >> static int usb_conn_probe(struct platform_device *pdev) > >> { > >> struct device *dev = >dev; > >> + struct power_supply_desc *desc; > >> struct usb_conn_info *info; > >> + struct power_supply_config cfg = { > >> + .of_node = dev->of_node, > >> + }; > >> int ret = 0; > >> > >> info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); > >> @@ -203,6 +234,22 @@ static int usb_conn_probe(struct > >> platform_device *pdev) > >> } > >> } > >> > >> + if (IS_ENABLED(CONFIG_USB_CONN_GPIO_CHARGER)) { > >> + desc = >desc; > >> + desc->name = "usb-charger"; > >> + desc->properties = usb_charger_properties; > >> + desc->num_properties = >
drivers/net/ppp/pppox.c:84:21: sparse: sparse: incorrect type in initializer (different address spaces)
tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master head: 92ed301919932f13b9172e525674157e983d commit: 670d0a4b10704667765f7d18f7592993d02783aa sparse: use identifiers to define address spaces date: 6 weeks ago config: openrisc-randconfig-s032-20200727 (attached as .config) compiler: or1k-linux-gcc (GCC) 9.3.0 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # apt-get install sparse # sparse version: v0.6.2-94-geb6779f6-dirty git checkout 670d0a4b10704667765f7d18f7592993d02783aa # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=openrisc If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot sparse warnings: (new ones prefixed by >>) >> drivers/net/ppp/pppox.c:84:21: sparse: sparse: incorrect type in initializer >> (different address spaces) @@ expected int *__pu_addr @@ got int >> [noderef] __user * @@ drivers/net/ppp/pppox.c:84:21: sparse: expected int *__pu_addr >> drivers/net/ppp/pppox.c:84:21: sparse: got int [noderef] __user * -- >> drivers/net/ppp/pppoe.c:751:21: sparse: sparse: incorrect type in >> initializer (different address spaces) @@ expected int *__pu_addr @@ >> got int [noderef] __user * @@ drivers/net/ppp/pppoe.c:751:21: sparse: expected int *__pu_addr >> drivers/net/ppp/pppoe.c:751:21: sparse: got int [noderef] __user * >> drivers/net/ppp/pppoe.c:765:21: sparse: sparse: incorrect type in >> initializer (different address spaces) @@ expected int const *__gu_addr >> @@ got int [noderef] __user * @@ drivers/net/ppp/pppoe.c:765:21: sparse: expected int const *__gu_addr drivers/net/ppp/pppoe.c:765:21: sparse: got int [noderef] __user * drivers/net/ppp/pppoe.c:778:21: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected int const *__gu_addr @@ got int [noderef] 
__user * @@ drivers/net/ppp/pppoe.c:778:21: sparse: expected int const *__gu_addr drivers/net/ppp/pppoe.c:778:21: sparse: got int [noderef] __user * -- >> drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1828:9: sparse: sparse: >> incorrect type in initializer (different address spaces) @@ expected >> unsigned int const *__gu_addr @@ got unsigned int [noderef] [usertype] >> __user * @@ drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1828:9: sparse: expected unsigned int const *__gu_addr >> drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1828:9: sparse: got >> unsigned int [noderef] [usertype] __user * >> drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1830:9: sparse: sparse: >> incorrect type in initializer (different address spaces) @@ expected >> unsigned int *__pu_addr @@ got unsigned int [noderef] [usertype] __user >> * @@ drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1830:9: sparse: expected unsigned int *__pu_addr drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1830:9: sparse: got unsigned int [noderef] [usertype] __user * drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1833:9: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected unsigned int const *__gu_addr @@ got unsigned int [noderef] [usertype] __user * @@ drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1833:9: sparse: expected unsigned int const *__gu_addr drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1833:9: sparse: got unsigned int [noderef] [usertype] __user * drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1845:9: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected unsigned int const *__gu_addr @@ got unsigned int [noderef] [usertype] __user * @@ drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1845:9: sparse: expected unsigned int const *__gu_addr drivers/staging/rtl8712/rtl871x_ioctl_linux.c:1845:9: sparse: got unsigned int [noderef] [usertype] __user * drivers/staging/rtl8712/rtl871x_ioctl_linux.c: note: in included file (through 
include/linux/sched/task.h, include/linux/sched/signal.h, drivers/staging/rtl8712/osdep_service.h): include/linux/uaccess.h:131:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected void *to @@ got void [noderef] __user *to @@ include/linux/uaccess.h:131:38: sparse: expected void *to include/linux/uaccess.h:131:38: sparse: got void [noderef] __user *to include/linux/uaccess.h:131:42: sparse: sparse: incorrect type in argument 2 (different address spaces) @@ expected void const [noderef] __user *from @@ got void const *from @@ include/linux/uaccess.h:131:42: sparse: expected void const [noderef] __user *from include/linux/uaccess.h:131:42:
Re: [PATCH v17 00/21] per memcg lru lock
A standard for new page isolation steps like the following:

1. get_page();           # pin the page so it cannot be freed
2. TestClearPageLRU();   # serialize against other isolation, and against memcg changes
3. spin_lock on lru_lock; # serialize lru list access

Step 2 could be optimized/replaced in scenarios where the page is unlikely to be accessed by others.

On 2020/7/25 8:59 PM, Alex Shi wrote:
> The new version is based on v5.8-rc6. It includes Hugh Dickins' fix in
> mm/swap.c and the mm/mlock.c fix which Alexander Duyck pointed out, and
> removes 'mm/mlock: reorder isolation sequence during munlock'
>
> Hi Johannes & Hugh & Alexander & Willy,
>
> Could you give a Reviewed-by, since you addressed many of the issues and
> gave lots of suggestions? Many thanks!
>
> Currently there is one lru_lock per node, pgdat->lru_lock, guarding the
> lru lists, but the lru lists were moved into memcg a long time ago. Still
> using a per-node lru_lock is clearly unscalable: pages from different
> memcgs have to compete with each other for one whole lru_lock. This
> patchset tries to use a per-lruvec/memcg lru_lock in place of the per-node
> lru lock to guard the lru lists, making it scalable for memcgs and gaining
> performance.
>
> Currently lru_lock still guards both the lru list and the page's lru bit;
> that is fine. But if we want to use a specific lruvec lock for the page,
> we need to pin down the page's lruvec/memcg during locking. Just taking
> the lruvec lock first may be undermined by the page's memcg
> charge/migration. To fix this problem, we could take the page's lru bit
> clearing and use it as a pin-down action to block the memcg changes.
> That's the reason for the new atomic function TestClearPageLRU. So now
> isolating a page needs both actions: TestClearPageLRU and holding the
> lru_lock.
>
> The typical usage of this is isolate_migratepages_block() in compaction.c:
> we have to take the lru bit before the lru lock; that serializes page
> isolation against memcg page charge/migration, which will change the
> page's lruvec and the new lru_lock in it.
> > The above solution was suggested by Johannes Weiner and is based on his new memcg > charge path; this patchset follows from that. (Hugh Dickins tested and contributed much > code, from the compaction fix to general code polish, thanks a lot!) > > The patchset includes 3 parts: > 1. some code cleanup and minimal optimization as preparation; > 2. use TestClearPageLRU as the page isolation precondition; > 3. replace the per-node lru_lock with a per-memcg, per-node lru_lock. > > Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in 104 > containers on a 2s * 26-core * HT box with a modified case: > https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice > With this patchset, readtwice performance increased about 80% > in concurrent containers. > > Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought up this > idea 8 years ago, and to the others who gave comments as well: Daniel Jordan, > Mel Gorman, Shakeel Butt, Matthew Wilcox etc. > > Thanks for testing support from Intel 0day and Rong Chen, Fengguang Wu, > and Yun Wang. Hugh Dickins also shared his kbuild-swap case. Thanks! 
> > > Alex Shi (19): > mm/vmscan: remove unnecessary lruvec adding > mm/page_idle: no unlikely double check for idle page counting > mm/compaction: correct the comments of compact_defer_shift > mm/compaction: rename compact_deferred as compact_should_defer > mm/thp: move lru_add_page_tail func to huge_memory.c > mm/thp: clean up lru_add_page_tail > mm/thp: remove code path which never got into > mm/thp: narrow lru locking > mm/memcg: add debug checking in lock_page_memcg > mm/swap: fold vm event PGROTATED into pagevec_move_tail_fn > mm/lru: move lru_lock holding in func lru_note_cost_page > mm/lru: move lock into lru_note_cost > mm/lru: introduce TestClearPageLRU > mm/compaction: do page isolation first in compaction > mm/thp: add tail pages into lru anyway in split_huge_page() > mm/swap: serialize memcg changes in pagevec_lru_move_fn > mm/lru: replace pgdat lru_lock with lruvec lock > mm/lru: introduce the relock_page_lruvec function > mm/pgdat: remove pgdat lru_lock > > Hugh Dickins (2): > mm/vmscan: use relock for move_pages_to_lru > mm/lru: revise the comments of lru_lock > > Documentation/admin-guide/cgroup-v1/memcg_test.rst | 15 +- > Documentation/admin-guide/cgroup-v1/memory.rst | 21 +-- > Documentation/trace/events-kmem.rst| 2 +- > Documentation/vm/unevictable-lru.rst | 22 +-- > include/linux/compaction.h | 4 +- > include/linux/memcontrol.h | 98 ++ > include/linux/mm_types.h | 2 +- > include/linux/mmzone.h | 6 +- > include/linux/page-flags.h | 1 + > include/linux/swap.h | 4 +- > include/trace/events/compaction.h | 2 +- > mm/compaction.c
linux-next: manual merge of the tip tree with the drm-msm tree
Hi all, Today's linux-next merge of the tip tree got a conflict in: drivers/gpu/drm/msm/msm_drv.c between commit: 00be2abf1413 ("drm/msm: use kthread_create_worker instead of kthread_run") from the drm-msm tree and commits: 64419ca67622 ("sched,msm: Convert to sched_set_fifo*()") 8b700983de82 ("sched: Remove sched_set_*() return value") from the tip tree. I fixed it up (see below) and can carry the fix as necessary. This is now fixed as far as linux-next is concerned, but any non-trivial conflicts should be mentioned to your upstream maintainer when your tree is submitted for merging. You may also want to consider cooperating with the maintainer of the conflicting tree to minimise any particularly complex conflicts. -- Cheers, Stephen Rothwell diff --cc drivers/gpu/drm/msm/msm_drv.c index 36d98d4116ca,556cca38487c.. --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@@ -524,11 -508,8 +517,7 @@@ static int msm_drm_init(struct device * goto err_msm_uninit; } - ret = sched_setscheduler(priv->event_thread[i].worker->task, -SCHED_FIFO, &param); - if (ret) - dev_warn(dev, "event_thread set priority failed:%d\n", -ret); - sched_set_fifo(priv->event_thread[i].thread); ++ sched_set_fifo(priv->event_thread[i].worker->task); } ret = drm_vblank_init(ddev, priv->num_crtcs);
[PATCH] drm/amd/display: Clear dm_state for fast updates
This patch fixes a race condition that causes a use-after-free during amdgpu_dm_atomic_commit_tail. This can occur when 2 non-blocking commits are requested and the second one finishes before the first. Essentially, this bug occurs when the following sequence of events happens: 1. Non-blocking commit #1 is requested w/ a new dm_state #1 and is deferred to the workqueue. 2. Non-blocking commit #2 is requested w/ a new dm_state #2 and is deferred to the workqueue. 3. Commit #2 starts before commit #1, dm_state #1 is used in the commit_tail and commit #2 completes, freeing dm_state #1. 4. Commit #1 starts after commit #2 completes, uses the freed dm_state #1 and dereferences a freelist pointer while setting the context. Since this bug has only been spotted with fast commits, this patch fixes the bug by clearing the dm_state instead of using the old dc_state for fast updates. In addition, since dm_state is only used for its dc_state and amdgpu_dm_atomic_commit_tail will retain the dc_state if none is found, removing the dm_state should not have any consequences in fast updates. This use-after-free bug has existed for a while now, but only caused a noticeable issue starting from 5.7-rc1 due to 3202fa62f ("slub: relocate freelist pointer to middle of object") moving the freelist pointer from dm_state->base (which was unused) to dm_state->context (which is dereferenced). 
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=207383 Fixes: bd200d190f45 ("drm/amd/display: Don't replace the dc_state for fast updates") Reported-by: Duncan <1i5t5.dun...@cox.net> Signed-off-by: Mazin Rezk --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 36 ++- 1 file changed, 27 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 86ffa0c2880f..710edc70e37e 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -8717,20 +8717,38 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, * the same resource. If we have a new DC context as part of * the DM atomic state from validation we need to free it and * retain the existing one instead. +* +* Furthermore, since the DM atomic state only contains the DC +* context and can safely be annulled, we can free the state +* and clear the associated private object now to free +* some memory and avoid a possible use-after-free later. */ - struct dm_atomic_state *new_dm_state, *old_dm_state; - new_dm_state = dm_atomic_get_new_state(state); - old_dm_state = dm_atomic_get_old_state(state); + for (i = 0; i < state->num_private_objs; i++) { + struct drm_private_obj *obj = state->private_objs[i].ptr; - if (new_dm_state && old_dm_state) { - if (new_dm_state->context) - dc_release_state(new_dm_state->context); + if (obj->funcs == adev->dm.atomic_obj.funcs) { + int j = state->num_private_objs-1; - new_dm_state->context = old_dm_state->context; + dm_atomic_destroy_state(obj, + state->private_objs[i].state); + + /* If i is not at the end of the array then the +* last element needs to be moved to where i was +* before the array can safely be truncated. 
+*/ + if (i != j) + state->private_objs[i] = + state->private_objs[j]; - if (old_dm_state->context) - dc_retain_state(old_dm_state->context); + state->private_objs[j].ptr = NULL; + state->private_objs[j].state = NULL; + state->private_objs[j].old_state = NULL; + state->private_objs[j].new_state = NULL; + + state->num_private_objs = j; + break; + } } } -- 2.27.0
[PATCH v4 10/10] powerpc/smp: Implement cpu_to_coregroup_id
Look up the coregroup id from the associativity array. If unable to detect the coregroup id, fall back on the core id. This way, ensure the sched_domain degenerates and an extra sched domain is not created. Ideally this function should have been implemented in arch/powerpc/kernel/smp.c. However, if it's implemented in mm/numa.c, we don't need to find the primary domain again. If the device-tree mentions more than one coregroup, then the kernel implements only the last or the smallest coregroup, which currently corresponds to the penultimate domain in the device-tree. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Move coregroup_enabled before getting associativity (Gautham) arch/powerpc/mm/numa.c | 20 1 file changed, 20 insertions(+) diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index 0d57779e7942..8b3b3ec7fcc4 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -1218,6 +1218,26 @@ int find_and_online_cpu_nid(int cpu) int cpu_to_coregroup_id(int cpu) { + __be32 associativity[VPHN_ASSOC_BUFSIZE] = {0}; + int index; + + if (cpu < 0 || cpu > nr_cpu_ids) + return -1; + + if (!coregroup_enabled) + goto out; + + if (!firmware_has_feature(FW_FEATURE_VPHN)) + goto out; + + if (vphn_get_associativity(cpu, associativity)) + goto out; + + index = of_read_number(associativity, 1); + if (index > min_common_depth + 1) + return of_read_number(&associativity[index - 1], 1); + +out: return cpu_to_core_id(cpu); } -- 2.17.1
linux-next: manual merge of the tip tree with the vfs tree
Hi all, Today's linux-next merge of the tip tree got a conflict in: arch/x86/include/asm/fpu/xstate.h between commit: c196049cc732 ("x86: switch to ->regset_get()") from the vfs tree and commit: ce711ea3cab9 ("perf/x86/intel/lbr: Support XSAVES/XRSTORS for LBR context switch") from the tip tree. I fixed it up (see below) and can carry the fix as necessary. This is now fixed as far as linux-next is concerned, but any non-trivial conflicts should be mentioned to your upstream maintainer when your tree is submitted for merging. You may also want to consider cooperating with the maintainer of the conflicting tree to minimise any particularly complex conflicts. -- Cheers, Stephen Rothwell diff --cc arch/x86/include/asm/fpu/xstate.h index f691ea1bc086,1559554af931.. --- a/arch/x86/include/asm/fpu/xstate.h +++ b/arch/x86/include/asm/fpu/xstate.h @@@ -71,8 -103,9 +103,9 @@@ extern void __init update_regset_xstate void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr); const void *get_xsave_field_ptr(int xfeature_nr); int using_compacted_format(void); + int xfeature_size(int xfeature_nr); -int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int offset, unsigned int size); -int copy_xstate_to_user(void __user *ubuf, struct xregs_state *xsave, unsigned int offset, unsigned int size); +struct membuf; +void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave); int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf); int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf); void copy_supervisor_to_kernel(struct xregs_state *xsave);
[PATCH v4 07/10] Powerpc/numa: Detect support for coregroup
Add support for grouping cores based on the device-tree classification. - The last domain in the associativity domains always refers to the core. - If the primary reference domain happens to be the penultimate domain in the associativity domains device-tree property, then there are no coregroups. However, if it is not the penultimate domain, then there are coregroups. There can be more than one coregroup. For now we would be interested in the last or the smallest coregroups. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Explained Coregroup in commit msg (Michael Ellerman) arch/powerpc/include/asm/smp.h | 1 + arch/powerpc/kernel/smp.c | 1 + arch/powerpc/mm/numa.c | 34 +- 3 files changed, 23 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h index 49a25e2400f2..5bdc17a7049f 100644 --- a/arch/powerpc/include/asm/smp.h +++ b/arch/powerpc/include/asm/smp.h @@ -28,6 +28,7 @@ extern int boot_cpuid; extern int spinning_secondaries; extern u32 *cpu_to_phys_id; +extern bool coregroup_enabled; extern void cpu_die(void); extern int cpu_to_chip_id(int cpu); diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 3c5ccf6d2b1c..698000c7f76f 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -74,6 +74,7 @@ static DEFINE_PER_CPU(int, cpu_state) = { 0 }; struct task_struct *secondary_current; bool has_big_cores; +bool coregroup_enabled; DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map); DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map); diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index 2298899a0f0a..51cb672f113b 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -886,7 +886,9 @@ static 
void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn) static void __init find_possible_nodes(void) { struct device_node *rtas; - u32 numnodes, i; + const __be32 *domains; + int prop_length, max_nodes; + u32 i; if (!numa_enabled) return; @@ -895,25 +897,31 @@ static void __init find_possible_nodes(void) if (!rtas) return; - if (of_property_read_u32_index(rtas, "ibm,current-associativity-domains", - min_common_depth, &numnodes)) { - /* -* ibm,current-associativity-domains is a fairly recent -* property. If it doesn't exist, then fallback on -* ibm,max-associativity-domains. Current denotes what the -* platform can support compared to max which denotes what the -* Hypervisor can support. -*/ - if (of_property_read_u32_index(rtas, "ibm,max-associativity-domains", - min_common_depth, &numnodes)) + /* +* ibm,current-associativity-domains is a fairly recent property. If +* it doesn't exist, then fallback on ibm,max-associativity-domains. +* Current denotes what the platform can support compared to max +* which denotes what the Hypervisor can support. +*/ + domains = of_get_property(rtas, "ibm,current-associativity-domains", + &prop_length); + if (!domains) { + domains = of_get_property(rtas, "ibm,max-associativity-domains", + &prop_length); + if (!domains) goto out; } - for (i = 0; i < numnodes; i++) { + max_nodes = of_read_number(&domains[min_common_depth], 1); + for (i = 0; i < max_nodes; i++) { if (!node_possible(i)) node_set(i, node_possible_map); } + prop_length /= sizeof(int); + if (prop_length > min_common_depth + 2) + coregroup_enabled = 1; + out: of_node_put(rtas); } -- 2.17.1
[PATCH v4 08/10] powerpc/smp: Allocate cpumask only after searching thread group
If allocated earlier and the search fails, then the cpumask needs to be freed. However, cpu_l1_cache_map can be allocated after we search the thread group. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- arch/powerpc/kernel/smp.c | 7 +++ 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 698000c7f76f..dab96a1203ec 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -797,10 +797,6 @@ static int init_cpu_l1_cache_map(int cpu) if (err) goto out; - zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu), - GFP_KERNEL, - cpu_to_node(cpu)); - cpu_group_start = get_cpu_thread_group_start(cpu, &tg); if (unlikely(cpu_group_start == -1)) { @@ -809,6 +805,9 @@ static int init_cpu_l1_cache_map(int cpu) goto out; } + zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu), + GFP_KERNEL, cpu_to_node(cpu)); + for (i = first_thread; i < first_thread + threads_per_core; i++) { int i_group_start = get_cpu_thread_group_start(i, &tg); -- 2.17.1
[PATCH v4 09/10] Powerpc/smp: Create coregroup domain
Add percpu coregroup maps and masks to create coregroup domain. If a coregroup doesn't exist, the coregroup domain will be degenerated in favour of SMT/CACHE domain. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Signed-off-by: Srikar Dronamraju --- Changelog v3 ->v4: if coregroup_support doesn't exist, update MC mask to the next smaller domain mask. Changelog v2 -> v3: Add optimization for mask updation under coregroup_support Changelog v1 -> v2: Moved coregroup topology fixup to fixup_topology (Gautham) arch/powerpc/include/asm/topology.h | 10 +++ arch/powerpc/kernel/smp.c | 44 + arch/powerpc/mm/numa.c | 5 3 files changed, 59 insertions(+) diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h index f0b6300e7dd3..6609174918ab 100644 --- a/arch/powerpc/include/asm/topology.h +++ b/arch/powerpc/include/asm/topology.h @@ -88,12 +88,22 @@ static inline int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc) #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR) extern int find_and_online_cpu_nid(int cpu); +extern int cpu_to_coregroup_id(int cpu); #else static inline int find_and_online_cpu_nid(int cpu) { return 0; } +static inline int cpu_to_coregroup_id(int cpu) +{ +#ifdef CONFIG_SMP + return cpu_to_core_id(cpu); +#else + return 0; +#endif +} + #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */ #include diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index dab96a1203ec..95f0bf72e283 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -80,6 +80,7 @@ DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map); DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map); DEFINE_PER_CPU(cpumask_var_t, cpu_l2_cache_map); DEFINE_PER_CPU(cpumask_var_t, cpu_core_map); +DEFINE_PER_CPU(cpumask_var_t, cpu_coregroup_map); 
EXPORT_PER_CPU_SYMBOL(cpu_sibling_map); EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map); @@ -91,6 +92,7 @@ enum { smt_idx, #endif bigcore_idx, + mc_idx, die_idx, }; @@ -869,6 +871,21 @@ static const struct cpumask *smallcore_smt_mask(int cpu) } #endif +static struct cpumask *cpu_coregroup_mask(int cpu) +{ + return per_cpu(cpu_coregroup_map, cpu); +} + +static bool has_coregroup_support(void) +{ + return coregroup_enabled; +} + +static const struct cpumask *cpu_mc_mask(int cpu) +{ + return cpu_coregroup_mask(cpu); +} + static const struct cpumask *cpu_bigcore_mask(int cpu) { return per_cpu(cpu_sibling_map, cpu); @@ -879,6 +896,7 @@ static struct sched_domain_topology_level powerpc_topology[] = { { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, #endif { cpu_bigcore_mask, SD_INIT_NAME(BIGCORE) }, + { cpu_mc_mask, SD_INIT_NAME(MC) }, { cpu_cpu_mask, SD_INIT_NAME(DIE) }, { NULL, }, }; @@ -925,6 +943,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus) GFP_KERNEL, cpu_to_node(cpu)); zalloc_cpumask_var_node(&per_cpu(cpu_core_map, cpu), GFP_KERNEL, cpu_to_node(cpu)); + if (has_coregroup_support()) + zalloc_cpumask_var_node(&per_cpu(cpu_coregroup_map, cpu), + GFP_KERNEL, cpu_to_node(cpu)); + #ifdef CONFIG_NEED_MULTIPLE_NODES /* * numa_node_id() works after this. 
@@ -942,6 +964,9 @@ void __init smp_prepare_cpus(unsigned int max_cpus) cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid)); cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid)); + if (has_coregroup_support()) + cpumask_set_cpu(boot_cpuid, cpu_coregroup_mask(boot_cpuid)); + init_big_cores(); if (has_big_cores) { cpumask_set_cpu(boot_cpuid, @@ -1233,6 +1258,8 @@ static void remove_cpu_from_masks(int cpu) set_cpus_unrelated(cpu, i, cpu_sibling_mask); if (has_big_cores) set_cpus_unrelated(cpu, i, cpu_smallcore_mask); + if (has_coregroup_support()) + set_cpus_unrelated(cpu, i, cpu_coregroup_mask); } } #endif @@ -1293,6 +1320,20 @@ static void add_cpu_to_masks(int cpu) add_cpu_to_smallcore_masks(cpu); update_mask_by_l2(cpu, cpu_l2_cache_mask); + if (has_coregroup_support()) { + int coregroup_id = cpu_to_coregroup_id(cpu); + + cpumask_set_cpu(cpu, cpu_coregroup_mask(cpu)); + for_each_cpu_and(i, cpu_online_mask, cpu_cpu_mask(cpu)) { + int fcpu = cpu_first_thread_sibling(i); + +
[PATCH v4 06/10] powerpc/smp: Generalize 2nd sched domain
Currently "CACHE" domain happens to be the 2nd sched domain as per powerpc_topology. This domain will collapse if cpumask of l2-cache is same as SMT domain. However we could generalize this domain such that it could mean either be a "CACHE" domain or a "BIGCORE" domain. While setting up the "CACHE" domain, check if shared_cache is already set. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Moved shared_cache topology fixup to fixup_topology (Gautham) arch/powerpc/kernel/smp.c | 48 +++ 1 file changed, 34 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index d997c7411664..3c5ccf6d2b1c 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -85,6 +85,14 @@ EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map); EXPORT_PER_CPU_SYMBOL(cpu_core_map); EXPORT_SYMBOL_GPL(has_big_cores); +enum { +#ifdef CONFIG_SCHED_SMT + smt_idx, +#endif + bigcore_idx, + die_idx, +}; + #define MAX_THREAD_LIST_SIZE 8 #define THREAD_GROUP_SHARE_L1 1 struct thread_groups { @@ -851,13 +859,7 @@ static int powerpc_shared_cache_flags(void) */ static const struct cpumask *shared_cache_mask(int cpu) { - if (shared_caches) - return cpu_l2_cache_mask(cpu); - - if (has_big_cores) - return cpu_smallcore_mask(cpu); - - return per_cpu(cpu_sibling_map, cpu); + return per_cpu(cpu_l2_cache_map, cpu); } #ifdef CONFIG_SCHED_SMT @@ -867,11 +869,16 @@ static const struct cpumask *smallcore_smt_mask(int cpu) } #endif +static const struct cpumask *cpu_bigcore_mask(int cpu) +{ + return per_cpu(cpu_sibling_map, cpu); +} + static struct sched_domain_topology_level powerpc_topology[] = { #ifdef CONFIG_SCHED_SMT { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, #endif - { shared_cache_mask, 
powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, + { cpu_bigcore_mask, SD_INIT_NAME(BIGCORE) }, { cpu_cpu_mask, SD_INIT_NAME(DIE) }, { NULL, }, }; @@ -1311,7 +1318,6 @@ static void add_cpu_to_masks(int cpu) void start_secondary(void *unused) { unsigned int cpu = smp_processor_id(); - struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask; mmgrab(_mm); current->active_mm = _mm; @@ -1337,14 +1343,20 @@ void start_secondary(void *unused) /* Update topology CPU masks */ add_cpu_to_masks(cpu); - if (has_big_cores) - sibling_mask = cpu_smallcore_mask; /* * Check for any shared caches. Note that this must be done on a * per-core basis because one core in the pair might be disabled. */ - if (!cpumask_equal(cpu_l2_cache_mask(cpu), sibling_mask(cpu))) - shared_caches = true; + if (!shared_caches) { + struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask; + struct cpumask *mask = cpu_l2_cache_mask(cpu); + + if (has_big_cores) + sibling_mask = cpu_smallcore_mask; + + if (cpumask_weight(mask) > cpumask_weight(sibling_mask(cpu))) + shared_caches = true; + } set_numa_node(numa_cpu_lookup_table[cpu]); set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu])); @@ -1375,9 +1387,17 @@ static void fixup_topology(void) #ifdef CONFIG_SCHED_SMT if (has_big_cores) { pr_info("Big cores detected but using small core scheduling\n"); - powerpc_topology[0].mask = smallcore_smt_mask; + powerpc_topology[smt_idx].mask = smallcore_smt_mask; } #endif + if (shared_caches) { + pr_info("Using shared cache scheduler topology\n"); + powerpc_topology[bigcore_idx].mask = shared_cache_mask; + powerpc_topology[bigcore_idx].sd_flags = powerpc_shared_cache_flags; +#ifdef CONFIG_SCHED_DEBUG + powerpc_topology[bigcore_idx].name = "CACHE"; +#endif + } } void __init smp_cpus_done(unsigned int max_cpus) -- 2.17.1
[PATCH v4 00/10] Coregroup support on Powerpc
Changelog v3 ->v4: v3: https://lore.kernel.org/lkml/20200723085116.4731-1-sri...@linux.vnet.ibm.com/t/#u powerpc/smp: Create coregroup domain if coregroup_support doesn't exist, update MC mask to the next smaller domain mask. Changelog v2 -> v3: v2: https://lore.kernel.org/linuxppc-dev/20200721113814.32284-1-sri...@linux.vnet.ibm.com/t/#u powerpc/smp: Cache node for reuse Removed node caching part. Rewrote the Commit msg (Michael Ellerman) Renamed to powerpc/smp: Fix a warning under !NEED_MULTIPLE_NODES powerpc/smp: Enable small core scheduling sooner Rewrote changelog (Gautham) Renamed to powerpc/smp: Move topology fixups into a new function powerpc/smp: Create coregroup domain Add optimization for mask updation under coregroup_support Changelog v1 -> v2: v1: https://lore.kernel.org/linuxppc-dev/20200714043624.5648-1-sri...@linux.vnet.ibm.com/t/#u powerpc/smp: Merge Power9 topology with Power topology Replaced a reference to cpu_smt_mask with per_cpu(cpu_sibling_map, cpu) since cpu_smt_mask is only defined under CONFIG_SCHED_SMT powerpc/smp: Enable small core scheduling sooner Restored the previous info msg (Jordan) Moved big core topology fixup to fixup_topology (Gautham) powerpc/smp: Dont assume l2-cache to be superset of sibling Set cpumask after verifying l2-cache. (Gautham) powerpc/smp: Generalize 2nd sched domain Moved shared_cache topology fixup to fixup_topology (Gautham) Powerpc/numa: Detect support for coregroup Explained Coregroup in commit msg (Michael Ellerman) Powerpc/smp: Create coregroup domain Moved coregroup topology fixup to fixup_topology (Gautham) powerpc/smp: Implement cpu_to_coregroup_id Move coregroup_enabled before getting associativity (Gautham) powerpc/smp: Provide an ability to disable coregroup Patch dropped (Michael Ellerman) Cleanup of existing powerpc topologies and add coregroup support on Powerpc. Coregroup is a group of (subset of) cores of a DIE that share a resource. 
Patch 7 of this patch series: "Powerpc/numa: Detect support for coregroup" depends on https://lore.kernel.org/linuxppc-dev/20200707140644.7241-1-sri...@linux.vnet.ibm.com/t/#u However it should be easy to rebase the patch without the above patch. This patch series is based on top of current powerpc/next tree + the above patch. On Power 8 Systems -- $ tail /proc/cpuinfo processor : 255 cpu : POWER8 (architected), altivec supported clock : 3724.00MHz revision: 2.1 (pvr 004b 0201) timebase: 51200 platform: pSeries model : IBM,8408-E8E machine : CHRP IBM,8408-E8E MMU : Hash Before the patchset --- $ cat /proc/sys/kernel/sched_domain/cpu0/domain*/name SMT DIE NUMA NUMA $ head /proc/schedstat version 15 timestamp 4295534931 cpu0 0 0 0 0 0 0 41389823338 17682779896 14117 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain2 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain3 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 cpu1 0 0 0 0 0 0 27087859050 152273672 10396 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 After the patchset -- $ cat /proc/sys/kernel/sched_domain/cpu0/domain*/name SMT DIE NUMA NUMA $ head /proc/schedstat version 15 timestamp 4295534931 cpu0 0 0 0 0 0 0 41389823338 17682779896 14117 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain2 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain3 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 cpu1 0 0 0 0 0 0 27087859050 152273672 10396 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1
[PATCH v4 05/10] powerpc/smp: Dont assume l2-cache to be superset of sibling
Current code assumes that the cpumask of cpus sharing an l2-cache will always be a superset of cpu_sibling_mask. Let's stop making that assumption. cpu_l2_cache_mask is a superset of cpu_sibling_mask if and only if shared_caches is set. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Set cpumask after verifying l2-cache. (Gautham) arch/powerpc/kernel/smp.c | 28 +++- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index da27f6909be1..d997c7411664 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -1194,6 +1194,7 @@ static bool update_mask_by_l2(int cpu, struct cpumask *(*mask_fn)(int)) if (!l2_cache) return false; + cpumask_set_cpu(cpu, mask_fn(cpu)); for_each_cpu(i, cpu_online_mask) { /* * when updating the marks the current CPU has not been marked @@ -1276,29 +1277,30 @@ static void add_cpu_to_masks(int cpu) * add it to it's own thread sibling mask. */ cpumask_set_cpu(cpu, cpu_sibling_mask(cpu)); + cpumask_set_cpu(cpu, cpu_core_mask(cpu)); for (i = first_thread; i < first_thread + threads_per_core; i++) if (cpu_online(i)) set_cpus_related(i, cpu, cpu_sibling_mask); add_cpu_to_smallcore_masks(cpu); - /* -* Copy the thread sibling mask into the cache sibling mask -* and mark any CPUs that share an L2 with this CPU. -*/ - for_each_cpu(i, cpu_sibling_mask(cpu)) - set_cpus_related(cpu, i, cpu_l2_cache_mask); update_mask_by_l2(cpu, cpu_l2_cache_mask); - /* -* Copy the cache sibling mask into core sibling mask and mark -* any CPUs on the same chip as this CPU. 
-*/ - for_each_cpu(i, cpu_l2_cache_mask(cpu)) - set_cpus_related(cpu, i, cpu_core_mask); + if (pkg_id == -1) { + struct cpumask *(*mask)(int) = cpu_sibling_mask; + + /* +* Copy the sibling mask into core sibling mask and +* mark any CPUs on the same chip as this CPU. +*/ + if (shared_caches) + mask = cpu_l2_cache_mask; + + for_each_cpu(i, mask(cpu)) + set_cpus_related(cpu, i, cpu_core_mask); - if (pkg_id == -1) return; + } for_each_cpu(i, cpu_online_mask) if (get_physical_package_id(i) == pkg_id) -- 2.17.1
[PATCH v4 02/10] powerpc/smp: Merge Power9 topology with Power topology
A new sched_domain_topology_level was added just for Power9. However, the same can be achieved by merging powerpc_topology with power9_topology, which makes the code simpler, especially when adding a new sched domain. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Replaced a reference to cpu_smt_mask with per_cpu(cpu_sibling_map, cpu) since cpu_smt_mask is only defined under CONFIG_SCHED_SMT arch/powerpc/kernel/smp.c | 33 ++--- 1 file changed, 10 insertions(+), 23 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index edf94ca64eea..283a04e54f52 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -1313,7 +1313,7 @@ int setup_profiling_timer(unsigned int multiplier) } #ifdef CONFIG_SCHED_SMT -/* cpumask of CPUs with asymetric SMT dependancy */ +/* cpumask of CPUs with asymmetric SMT dependency */ static int powerpc_smt_flags(void) { int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; @@ -1326,14 +1326,6 @@ static int powerpc_smt_flags(void) } #endif -static struct sched_domain_topology_level powerpc_topology[] = { -#ifdef CONFIG_SCHED_SMT - { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, -#endif - { cpu_cpu_mask, SD_INIT_NAME(DIE) }, - { NULL, }, -}; - /* * P9 has a slightly odd architecture where pairs of cores share an L2 cache. 
* This topology makes it *much* cheaper to migrate tasks between adjacent cores @@ -1351,7 +1343,13 @@ static int powerpc_shared_cache_flags(void) */ static const struct cpumask *shared_cache_mask(int cpu) { - return cpu_l2_cache_mask(cpu); + if (shared_caches) + return cpu_l2_cache_mask(cpu); + + if (has_big_cores) + return cpu_smallcore_mask(cpu); + + return per_cpu(cpu_sibling_map, cpu); } #ifdef CONFIG_SCHED_SMT @@ -1361,7 +1359,7 @@ static const struct cpumask *smallcore_smt_mask(int cpu) } #endif -static struct sched_domain_topology_level power9_topology[] = { +static struct sched_domain_topology_level powerpc_topology[] = { #ifdef CONFIG_SCHED_SMT { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, #endif @@ -1386,21 +1384,10 @@ void __init smp_cpus_done(unsigned int max_cpus) #ifdef CONFIG_SCHED_SMT if (has_big_cores) { pr_info("Big cores detected but using small core scheduling\n"); - power9_topology[0].mask = smallcore_smt_mask; powerpc_topology[0].mask = smallcore_smt_mask; } #endif - /* -* If any CPU detects that it's sharing a cache with another CPU then -* use the deeper topology that is aware of this sharing. -*/ - if (shared_caches) { - pr_info("Using shared cache scheduler topology\n"); - set_sched_topology(power9_topology); - } else { - pr_info("Using standard scheduler topology\n"); - set_sched_topology(powerpc_topology); - } + set_sched_topology(powerpc_topology); } #ifdef CONFIG_HOTPLUG_CPU -- 2.17.1
[PATCH v4 03/10] powerpc/smp: Move powerpc_topology above
Just moving the powerpc_topology description above. This will help in using functions in this file and avoid declarations. No other functional changes Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- arch/powerpc/kernel/smp.c | 116 +++--- 1 file changed, 58 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 283a04e54f52..a685915e5941 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -818,6 +818,64 @@ static int init_cpu_l1_cache_map(int cpu) return err; } +static bool shared_caches; + +#ifdef CONFIG_SCHED_SMT +/* cpumask of CPUs with asymmetric SMT dependency */ +static int powerpc_smt_flags(void) +{ + int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; + + if (cpu_has_feature(CPU_FTR_ASYM_SMT)) { + printk_once(KERN_INFO "Enabling Asymmetric SMT scheduling\n"); + flags |= SD_ASYM_PACKING; + } + return flags; +} +#endif + +/* + * P9 has a slightly odd architecture where pairs of cores share an L2 cache. + * This topology makes it *much* cheaper to migrate tasks between adjacent cores + * since the migrated task remains cache hot. We want to take advantage of this + * at the scheduler level so an extra topology level is required. + */ +static int powerpc_shared_cache_flags(void) +{ + return SD_SHARE_PKG_RESOURCES; +} + +/* + * We can't just pass cpu_l2_cache_mask() directly because + * returns a non-const pointer and the compiler barfs on that. 
+ */ +static const struct cpumask *shared_cache_mask(int cpu) +{ + if (shared_caches) + return cpu_l2_cache_mask(cpu); + + if (has_big_cores) + return cpu_smallcore_mask(cpu); + + return per_cpu(cpu_sibling_map, cpu); +} + +#ifdef CONFIG_SCHED_SMT +static const struct cpumask *smallcore_smt_mask(int cpu) +{ + return cpu_smallcore_mask(cpu); +} +#endif + +static struct sched_domain_topology_level powerpc_topology[] = { +#ifdef CONFIG_SCHED_SMT + { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, +#endif + { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, + { cpu_cpu_mask, SD_INIT_NAME(DIE) }, + { NULL, }, +}; + static int init_big_cores(void) { int cpu; @@ -1247,8 +1305,6 @@ static void add_cpu_to_masks(int cpu) set_cpus_related(cpu, i, cpu_core_mask); } -static bool shared_caches; - /* Activate a secondary processor. */ void start_secondary(void *unused) { @@ -1312,62 +1368,6 @@ int setup_profiling_timer(unsigned int multiplier) return 0; } -#ifdef CONFIG_SCHED_SMT -/* cpumask of CPUs with asymmetric SMT dependency */ -static int powerpc_smt_flags(void) -{ - int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; - - if (cpu_has_feature(CPU_FTR_ASYM_SMT)) { - printk_once(KERN_INFO "Enabling Asymmetric SMT scheduling\n"); - flags |= SD_ASYM_PACKING; - } - return flags; -} -#endif - -/* - * P9 has a slightly odd architecture where pairs of cores share an L2 cache. - * This topology makes it *much* cheaper to migrate tasks between adjacent cores - * since the migrated task remains cache hot. We want to take advantage of this - * at the scheduler level so an extra topology level is required. - */ -static int powerpc_shared_cache_flags(void) -{ - return SD_SHARE_PKG_RESOURCES; -} - -/* - * We can't just pass cpu_l2_cache_mask() directly because - * returns a non-const pointer and the compiler barfs on that. 
- */ -static const struct cpumask *shared_cache_mask(int cpu) -{ - if (shared_caches) - return cpu_l2_cache_mask(cpu); - - if (has_big_cores) - return cpu_smallcore_mask(cpu); - - return per_cpu(cpu_sibling_map, cpu); -} - -#ifdef CONFIG_SCHED_SMT -static const struct cpumask *smallcore_smt_mask(int cpu) -{ - return cpu_smallcore_mask(cpu); -} -#endif - -static struct sched_domain_topology_level powerpc_topology[] = { -#ifdef CONFIG_SCHED_SMT - { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, -#endif - { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, - { cpu_cpu_mask, SD_INIT_NAME(DIE) }, - { NULL, }, -}; - void __init smp_cpus_done(unsigned int max_cpus) { /* -- 2.17.1
[PATCH v4 04/10] powerpc/smp: Move topology fixups into a new function
Move topology fixup based on the platform attributes into its own function which is called just before set_sched_topology. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v2 -> v3: Rewrote changelog (Gautham) Renamed to powerpc/smp: Move topology fixups into a new function arch/powerpc/kernel/smp.c | 17 +++-- 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index a685915e5941..da27f6909be1 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -1368,6 +1368,16 @@ int setup_profiling_timer(unsigned int multiplier) return 0; } +static void fixup_topology(void) +{ +#ifdef CONFIG_SCHED_SMT + if (has_big_cores) { + pr_info("Big cores detected but using small core scheduling\n"); + powerpc_topology[0].mask = smallcore_smt_mask; + } +#endif +} + void __init smp_cpus_done(unsigned int max_cpus) { /* @@ -1381,12 +1391,7 @@ void __init smp_cpus_done(unsigned int max_cpus) dump_numa_cpu_topology(); -#ifdef CONFIG_SCHED_SMT - if (has_big_cores) { - pr_info("Big cores detected but using small core scheduling\n"); - powerpc_topology[0].mask = smallcore_smt_mask; - } -#endif + fixup_topology(); set_sched_topology(powerpc_topology); } -- 2.17.1
[PATCH v4 01/10] powerpc/smp: Fix a warning under !NEED_MULTIPLE_NODES
Fix a build warning in a non-CONFIG_NEED_MULTIPLE_NODES build: "error: 'numa_cpu_lookup_table' undeclared" Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v2 -> v3: Removed node caching part. Rewrote the Commit msg (Michael Ellerman) Renamed to powerpc/smp: Fix a warning under !NEED_MULTIPLE_NODES arch/powerpc/kernel/smp.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 73199470c265..edf94ca64eea 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -860,6 +860,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus) GFP_KERNEL, cpu_to_node(cpu)); zalloc_cpumask_var_node(&per_cpu(cpu_core_map, cpu), GFP_KERNEL, cpu_to_node(cpu)); +#ifdef CONFIG_NEED_MULTIPLE_NODES /* * numa_node_id() works after this. */ @@ -868,6 +869,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus) set_cpu_numa_mem(cpu, local_memory_node(numa_cpu_lookup_table[cpu])); } +#endif } /* Init the cpumasks so the boot CPU is related to itself */ -- 2.17.1
[PATCH v6 2/9] hwmon: pmbus: adm1266: Add Block process call
From: Alexandru Tachici PMBus devices support the Block Write-Block Read Process Call described in SMBus specification v2.0, with the exception that block writes and reads are permitted to have up to 255 data bytes instead of the SMBus maximum of 32 bytes. This patch adds Block WR process call support for ADM1266. Signed-off-by: Alexandru Tachici --- drivers/hwmon/pmbus/Kconfig | 1 + drivers/hwmon/pmbus/adm1266.c | 73 +++ 2 files changed, 74 insertions(+) diff --git a/drivers/hwmon/pmbus/Kconfig b/drivers/hwmon/pmbus/Kconfig index da34083e1ffd..c04068b665e6 100644 --- a/drivers/hwmon/pmbus/Kconfig +++ b/drivers/hwmon/pmbus/Kconfig @@ -28,6 +28,7 @@ config SENSORS_PMBUS config SENSORS_ADM1266 tristate "Analog Devices ADM1266 Sequencer" + select CRC8 help If you say yes here you get hardware monitoring support for Analog Devices ADM1266 Cascadable Super Sequencer. diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c index 79e8d90886b8..63975eba34ad 100644 --- a/drivers/hwmon/pmbus/adm1266.c +++ b/drivers/hwmon/pmbus/adm1266.c @@ -6,6 +6,7 @@ * Copyright 2020 Analog Devices Inc. */ +#include #include #include #include @@ -13,11 +14,80 @@ #include "pmbus.h" #include +#define ADM1266_PMBUS_BLOCK_MAX 255 + struct adm1266_data { struct pmbus_driver_info info; struct i2c_client *client; + struct mutex buf_mutex; + u8 write_buf[ADM1266_PMBUS_BLOCK_MAX + 1] ____cacheline_aligned; + u8 read_buf[ADM1266_PMBUS_BLOCK_MAX + 1] ____cacheline_aligned; }; +DECLARE_CRC8_TABLE(pmbus_crc_table); + +/* + * Different from Block Read as it sends data and waits for the slave to + * return a value dependent on that data. The protocol is simply a Write Block + * followed by a Read Block without the Read-Block command field and the + * Write-Block STOP bit. 
+ */ +static int adm1266_pmbus_block_xfer(struct adm1266_data *data, u8 cmd, u8 w_len, u8 *data_w, + u8 *data_r) +{ + struct i2c_client *client = data->client; + struct i2c_msg msgs[2] = { + { + .addr = client->addr, + .flags = I2C_M_DMA_SAFE, + .buf = data->write_buf, + .len = w_len + 2, + }, + { + .addr = client->addr, + .flags = I2C_M_RD | I2C_M_DMA_SAFE, + .buf = data->read_buf, + .len = ADM1266_PMBUS_BLOCK_MAX + 2, + } + }; + u8 addr; + u8 crc; + int ret; + + mutex_lock(>buf_mutex); + + msgs[0].buf[0] = cmd; + msgs[0].buf[1] = w_len; + memcpy([0].buf[2], data_w, w_len); + + ret = i2c_transfer(client->adapter, msgs, 2); + if (ret != 2) { + if (ret >= 0) + ret = -EPROTO; + return ret; + } + + if (client->flags & I2C_CLIENT_PEC) { + addr = i2c_8bit_addr_from_msg([0]); + crc = crc8(pmbus_crc_table, , 1, 0); + crc = crc8(pmbus_crc_table, msgs[0].buf, msgs[0].len, crc); + + addr = i2c_8bit_addr_from_msg([1]); + crc = crc8(pmbus_crc_table, , 1, crc); + crc = crc8(pmbus_crc_table, msgs[1].buf, msgs[1].buf[0] + 1, crc); + + if (crc != msgs[1].buf[msgs[1].buf[0] + 1]) + return -EBADMSG; + } + + memcpy(data_r, [1].buf[1], msgs[1].buf[0]); + + ret = msgs[1].buf[0]; + mutex_unlock(>buf_mutex); + + return ret; +} + static int adm1266_probe(struct i2c_client *client, const struct i2c_device_id *id) { struct adm1266_data *data; @@ -33,6 +103,9 @@ static int adm1266_probe(struct i2c_client *client, const struct i2c_device_id * for (i = 0; i < data->info.pages; i++) data->info.func[i] = PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT; + crc8_populate_msb(pmbus_crc_table, 0x7); + mutex_init(>buf_mutex); + return pmbus_do_probe(client, id, >info); } -- 2.20.1
[PATCH v6 8/9] hwmon: pmbus: adm1266: program configuration
From: Alexandru Tachici Writing the configuration Intel hex file to the nvmem, of an adm1266, with offset 0x3, will now trigger the configuration programming. During this process the adm1266 sequencer will be stopped and at the end will be issued a seq reset (see AN-1453 Programming the configuration). Signed-off-by: Alexandru Tachici --- drivers/hwmon/pmbus/adm1266.c | 179 +- 1 file changed, 178 insertions(+), 1 deletion(-) diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c index 376fc56abe04..2b07e38041da 100644 --- a/drivers/hwmon/pmbus/adm1266.c +++ b/drivers/hwmon/pmbus/adm1266.c @@ -40,7 +40,10 @@ #define ADM1266_BLACKBOX_INFO 0xE6 #define ADM1266_PDIO_STATUS0xE9 #define ADM1266_GPIO_STATUS0xEA +#define ADM1266_STATUS_MFR_2 0xED +#define ADM1266_REFRESH_FLASH 0xF5 #define ADM1266_MEMORY_CONFIG 0xF8 +#define ADM1266_MEMORY_CRC 0xF9 #define ADM1266_SWITCH_MEMORY 0xFA #define ADM1266_UPDATE_FW 0xFC #define ADM1266_FW_PASSWORD0xFD @@ -66,6 +69,11 @@ /* ADM1266 STATUS_MFR defines */ #define ADM1266_STATUS_PART_LOCKED(x) FIELD_GET(BIT(2), x) +#define ADM1266_RUNNING_REFRESH(x) FIELD_GET(BIT(3), x) +#define ADM1266_ALL_CRC_FAULT(x) FIELD_GET(BIT(5), x) + +/* ADM1266 STATUS_MFR_2 defines */ +#define ADM1266_MAIN_CONFIG_FAULT(x) FIELD_GET(GENMASK(9, 8), x) /* ADM1266 GO_COMMAND defines */ #define ADM1266_GO_COMMAND_STOPBIT(0) @@ -74,6 +82,8 @@ #define ADM1266_FIRMWARE_OFFSET0x0 #define ADM1266_FIRMWARE_SIZE 131072 +#define ADM1266_CONFIG_OFFSET 0x3 +#define ADM1266_CONFIG_SIZE131072 #define ADM1266_BLACKBOX_OFFSET0x7F700 #define ADM1266_BLACKBOX_SIZE 64 @@ -117,6 +127,11 @@ static const struct nvmem_cell_info adm1266_nvmem_cells[] = { .offset = ADM1266_FIRMWARE_OFFSET, .bytes = ADM1266_FIRMWARE_SIZE, }, + { + .name = "configuration", + .offset = ADM1266_CONFIG_OFFSET, + .bytes = ADM1266_CONFIG_SIZE, + }, }; DECLARE_CRC8_TABLE(pmbus_crc_table); @@ -521,6 +536,9 @@ static int adm1266_read_mem_cell(struct adm1266_data *data, const struct nvmem_c 
case ADM1266_FIRMWARE_OFFSET: /* firmware is write-only */ return 0; + case ADM1266_CONFIG_OFFSET: + /* configuration is write-only */ + return 0; default: return -EINVAL; } @@ -677,6 +695,7 @@ static int adm1266_write_hex(struct adm1266_data *data, u8 first_writes[7]; u8 byte_count; u8 reg_address; + bool to_slaves = false; int ret; int i; @@ -707,7 +726,10 @@ static int adm1266_write_hex(struct adm1266_data *data, if (ret < 0) return ret; - ret = adm1266_group_cmd(data, reg_address, write_buf, byte_count, true); + if (offset == ADM1266_FIRMWARE_OFFSET) + to_slaves = true; + + ret = adm1266_group_cmd(data, reg_address, write_buf, byte_count, to_slaves); if (ret < 0) { dev_err(>client->dev, "Firmware write error: %d.", ret); return ret; @@ -732,6 +754,87 @@ static int adm1266_write_hex(struct adm1266_data *data, return 0; } +static int adm1266_verify_memory(struct adm1266_data *data) +{ + char cmd[2]; + int ret; + int reg; + + cmd[0] = 0x1; + cmd[1] = 0x0; + ret = adm1266_group_cmd(data, ADM1266_MEMORY_CRC, cmd, + sizeof(cmd), true); + if (ret < 0) + return ret; + + /* after issuing a memory recalculate crc command, wait 1000 ms */ + msleep(1000); + + reg = pmbus_read_word_data(data->client, 0, 0xFF, ADM1266_STATUS_MFR_2); + if (reg < 0) + return reg; + + if (ADM1266_MAIN_CONFIG_FAULT(reg)) { + dev_err(>client->dev, "Main memory corrupted."); + return -EFAULT; + } + + return 0; +} + +static int adm1266_refresh_memory(struct adm1266_data *data) +{ + unsigned int timeout = 9000; + int ret; + u8 cmd[2]; + + cmd[0] = 0x2; + ret = adm1266_group_cmd(data, ADM1266_REFRESH_FLASH, cmd, 1, true); + if (ret < 0) { + dev_err(>client->dev, "Could not refresh flash."); + return ret; + } + + /* after issuing a refresh flash command, wait 9000 ms */ + msleep(9000); + + do { + msleep(1000); + timeout -= 1000; + + ret = pmbus_read_byte_data(data->client, 0, ADM1266_STATUS_MFR); + if (ret < 0) { + dev_err(>client->dev, "Could not read status."); +
[PATCH v6 7/9] hwmon: pmbus: adm1266: program firmware
From: Alexandru Tachici Writing the firmware Intel hex file to the nvmem, of the master adm1266, with offset 0, will now trigger the firmware programming of all cascaded devices simultaneously through pmbus. During this process all adm1266 sequencers will be stopped and at the end will be issued a hard reset (see AN-1453 Programming the firmware). Signed-off-by: Alexandru Tachici --- drivers/hwmon/pmbus/adm1266.c | 501 +- 1 file changed, 500 insertions(+), 1 deletion(-) diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c index 34bd4e652729..376fc56abe04 100644 --- a/drivers/hwmon/pmbus/adm1266.c +++ b/drivers/hwmon/pmbus/adm1266.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -18,18 +19,31 @@ #include #include #include "pmbus.h" +#include #include #include +#define ADM1266_STORE_USER_ALL 0x15 +#define ADM1266_STATUS_MFR 0x80 +#define ADM1266_IC_DEVICE_REV 0xAE #define ADM1266_BLACKBOX_CONFIG0xD3 #define ADM1266_PDIO_CONFIG0xD4 +#define ADM1266_SEQUENCE_CONFIG0xD6 +#define ADM1266_SYSTEM_CONFIG 0xD7 +#define ADM1266_GO_COMMAND 0xD8 #define ADM1266_READ_STATE 0xD9 #define ADM1266_READ_BLACKBOX 0xDE #define ADM1266_SET_RTC0xDF +#define ADM1266_LOGIC_CONFIG 0xE0 #define ADM1266_GPIO_CONFIG0xE1 +#define ADM1266_USER_DATA 0xE3 #define ADM1266_BLACKBOX_INFO 0xE6 #define ADM1266_PDIO_STATUS0xE9 #define ADM1266_GPIO_STATUS0xEA +#define ADM1266_MEMORY_CONFIG 0xF8 +#define ADM1266_SWITCH_MEMORY 0xFA +#define ADM1266_UPDATE_FW 0xFC +#define ADM1266_FW_PASSWORD0xFD /* ADM1266 GPIO defines */ #define ADM1266_GPIO_NR9 @@ -44,10 +58,35 @@ #define ADM1266_PDIO_GLITCH_FILT(x)FIELD_GET(GENMASK(12, 9), x) #define ADM1266_PDIO_OUT_CFG(x)FIELD_GET(GENMASK(2, 0), x) +/* ADM1266 FW_PASSWORD defines*/ +#define ADM1266_PASSWD_CMD_LEN 17 +#define ADM1266_CHANGE_PASSWORD1 +#define ADM1266_UNLOCK_DEV 2 +#define ADM1266_LOCK_DEV 3 + +/* ADM1266 STATUS_MFR defines */ +#define ADM1266_STATUS_PART_LOCKED(x) FIELD_GET(BIT(2), x) + +/* 
ADM1266 GO_COMMAND defines */ +#define ADM1266_GO_COMMAND_STOPBIT(0) +#define ADM1266_GO_COMMAND_SEQ_RES BIT(1) +#define ADM1266_GO_COMMAND_HARD_RESBIT(2) + +#define ADM1266_FIRMWARE_OFFSET0x0 +#define ADM1266_FIRMWARE_SIZE 131072 #define ADM1266_BLACKBOX_OFFSET0x7F700 #define ADM1266_BLACKBOX_SIZE 64 #define ADM1266_PMBUS_BLOCK_MAX255 +#define ADM1266_MAX_DEVICES16 + +static LIST_HEAD(registered_masters); +static DEFINE_MUTEX(registered_masters_lock); + +struct adm1266_data_ref { + struct adm1266_data *data; + struct list_head list; +}; struct adm1266_data { struct pmbus_driver_info info; @@ -57,6 +96,10 @@ struct adm1266_data { struct dentry *debugfs_dir; struct nvmem_config nvmem_config; struct nvmem_device *nvmem; + bool master_dev; + struct list_head cascaded_devices_list; + struct mutex cascaded_devices_mutex; /* lock cascaded_devices_list */ + u8 nr_devices; u8 *dev_mem; struct mutex buf_mutex; u8 write_buf[ADM1266_PMBUS_BLOCK_MAX + 1] cacheline_aligned; @@ -69,6 +112,11 @@ static const struct nvmem_cell_info adm1266_nvmem_cells[] = { .offset = ADM1266_BLACKBOX_OFFSET, .bytes = 2048, }, + { + .name = "firmware", + .offset = ADM1266_FIRMWARE_OFFSET, + .bytes = ADM1266_FIRMWARE_SIZE, + }, }; DECLARE_CRC8_TABLE(pmbus_crc_table); @@ -123,6 +171,27 @@ static int adm1266_pmbus_group_command(struct adm1266_data *data, struct i2c_cli return ret; } +static int adm1266_group_cmd(struct adm1266_data *data, u8 cmd, u8 *write_data, u8 w_len, +bool to_slaves) +{ + struct i2c_client *clients[ADM1266_MAX_DEVICES]; + struct adm1266_data_ref *slave_ref; + int i = 0; + + clients[i] = data->client; + i++; + + if (!to_slaves) + return adm1266_pmbus_group_command(data, clients, 1, cmd, w_len, write_data); + + list_for_each_entry(slave_ref, >cascaded_devices_list, list) { + clients[i] = slave_ref->data->client; + i++; + } + + return adm1266_pmbus_group_command(data, clients, i, cmd, w_len, write_data); +} + /* * Different from Block Read as it sends data and waits for the slave to 
* return a value dependent on that data. The protocol is simply a Write Block @@ -449,6 +518,9 @@ static int adm1266_read_mem_cell(struct adm1266_data *data, const struct nvmem_c if (ret) dev_err(>client->dev, "Could not read
[PATCH v6 4/9] hwmon: pmbus: adm1266: add debugfs for states
From: Alexandru Tachici Add a debugfs entry which prints the current state of the adm1266 sequencer. Signed-off-by: Alexandru Tachici --- drivers/hwmon/pmbus/adm1266.c | 42 ++- 1 file changed, 41 insertions(+), 1 deletion(-) diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c index be911de02cf6..dbffc6d12e87 100644 --- a/drivers/hwmon/pmbus/adm1266.c +++ b/drivers/hwmon/pmbus/adm1266.c @@ -19,6 +19,7 @@ #include #define ADM1266_PDIO_CONFIG0xD4 +#define ADM1266_READ_STATE 0xD9 #define ADM1266_GPIO_CONFIG0xE1 #define ADM1266_PDIO_STATUS0xE9 #define ADM1266_GPIO_STATUS0xEA @@ -43,6 +44,7 @@ struct adm1266_data { struct gpio_chip gc; const char *gpio_names[ADM1266_GPIO_NR + ADM1266_PDIO_NR]; struct i2c_client *client; + struct dentry *debugfs_dir; struct mutex buf_mutex; u8 write_buf[ADM1266_PMBUS_BLOCK_MAX + 1] cacheline_aligned; u8 read_buf[ADM1266_PMBUS_BLOCK_MAX + 1] cacheline_aligned; @@ -287,6 +289,38 @@ static int adm1266_config_gpio(struct adm1266_data *data) return ret; } +static int adm1266_state_read(struct seq_file *s, void *pdata) +{ + struct device *dev = s->private; + struct i2c_client *client = to_i2c_client(dev); + int ret; + + ret = i2c_smbus_read_word_data(client, ADM1266_READ_STATE); + if (ret < 0) + return ret; + + seq_printf(s, "%d\n", ret); + + return 0; +} + +static void adm1266_init_debugfs(struct adm1266_data *data) +{ + struct dentry *entry; + struct dentry *root; + + root = pmbus_get_debugfs_dir(data->client); + if (!root) + return; + + data->debugfs_dir = debugfs_create_dir(data->client->name, root); + if (!data->debugfs_dir) + return; + + entry = debugfs_create_devm_seqfile(>client->dev, "sequencer_state", + data->debugfs_dir, adm1266_state_read); +} + static int adm1266_probe(struct i2c_client *client, const struct i2c_device_id *id) { struct adm1266_data *data; @@ -310,7 +344,13 @@ static int adm1266_probe(struct i2c_client *client, const struct i2c_device_id * if (ret < 0) return ret; - return 
pmbus_do_probe(client, id, >info); + ret = pmbus_do_probe(client, id, >info); + if (ret) + return ret; + + adm1266_init_debugfs(data); + + return 0; } static const struct of_device_id adm1266_of_match[] = { -- 2.20.1
[PATCH v6 0/9] hwmon: pmbus: adm1266: add support
From: Alexandru Tachici Add PMBus probing driver for the adm1266 Cascadable Super Sequencer with Margin Control and Fault Recording. Driver is using the pmbus_core, creating sysfs files under hwmon for inputs: vh1->vh4 and vp1->vp13. 1. Add PMBus probing driver for inputs vh1->vh4 and vp1->vp13. 2. Add Block Write-Read Process Call command. A PMBus specific implementation was required because block write with I2C_SMBUS_PROC_CALL flag allows a maximum of 32 bytes to be received. 3. This makes adm1266 driver expose GPIOs to user-space. Currently are read only. Future developments on the firmware will allow them to be writable. 4. Allow the current sate of the seqeuncer to be read through debugfs. 5. Blackboxes are 64 bytes of chip state related data that is generated on faults. Use the nvmem kernel api to expose the blackbox chip functionality to userspace. 6. Add group command support. This will allow the driver to stop/program all cascaded adm1266 devices at once. 7. Writing the firmware hex file with offset 0 to the nvmem of the master adm1266 will trigger the firmware programming of all cascaded devices. The master adm1266 of each device is specified in the devicetree. 8. Writing the configuration hex file to 0x3 byte address of the nvmem file will trigger the programing of that device in particular. 9. dt bindings for ADM1266. 
Alexandru Tachici (9): hwmon: pmbus: adm1266: add support hwmon: pmbus: adm1266: Add Block process call hwmon: pmbus: adm1266: Add support for GPIOs hwmon: pmbus: adm1266: add debugfs for states hwmon: pmbus: adm1266: read blackbox hwmon: pmbus: adm1266: Add group command support hwmon: pmbus: adm1266: program firmware hwmon: pmbus: adm1266: program configuration dt-bindings: hwmon: Add bindings for ADM1266 .../bindings/hwmon/adi,adm1266.yaml | 56 + Documentation/hwmon/adm1266.rst | 37 + Documentation/hwmon/index.rst |1 + drivers/hwmon/pmbus/Kconfig | 10 + drivers/hwmon/pmbus/Makefile |1 + drivers/hwmon/pmbus/adm1266.c | 1273 + 6 files changed, 1378 insertions(+) create mode 100644 Documentation/devicetree/bindings/hwmon/adi,adm1266.yaml create mode 100644 Documentation/hwmon/adm1266.rst create mode 100644 drivers/hwmon/pmbus/adm1266.c -- 2.20.1
[PATCH v6 1/9] hwmon: pmbus: adm1266: add support
From: Alexandru Tachici Add pmbus probing driver for the adm1266 Cascadable Super Sequencer with Margin Control and Fault Recording. Driver is using the pmbus_core, creating sysfs files under hwmon for inputs: vh1->vh4 and vp1->vp13. Signed-off-by: Alexandru Tachici --- Documentation/hwmon/adm1266.rst | 37 +++ Documentation/hwmon/index.rst | 1 + drivers/hwmon/pmbus/Kconfig | 9 + drivers/hwmon/pmbus/Makefile| 1 + drivers/hwmon/pmbus/adm1266.c | 65 + 5 files changed, 113 insertions(+) create mode 100644 Documentation/hwmon/adm1266.rst create mode 100644 drivers/hwmon/pmbus/adm1266.c diff --git a/Documentation/hwmon/adm1266.rst b/Documentation/hwmon/adm1266.rst new file mode 100644 index ..9257f8a48650 --- /dev/null +++ b/Documentation/hwmon/adm1266.rst @@ -0,0 +1,37 @@ +.. SPDX-License-Identifier: GPL-2.0 + +Kernel driver adm1266 += + +Supported chips: + * Analog Devices ADM1266 +Prefix: 'adm1266' +Datasheet: https://www.analog.com/media/en/technical-documentation/data-sheets/ADM1266.pdf + +Author: Alexandru Tachici + + +Description +--- + +This driver supports hardware monitoring for Analog Devices ADM1266 sequencer. + +ADM1266 is a sequencer that features voltage readback from 17 channels via an +integrated 12 bit SAR ADC, accessed using a PMBus interface. + +The driver is a client driver to the core PMBus driver. Please see +Documentation/hwmon/pmbus for details on PMBus client drivers. + + +Sysfs entries +- + +The following attributes are supported. Limits are read-write, history reset +attributes are write-only, all other attributes are read-only. + +inX_label "voutx" +inX_input Measured voltage. +inX_minMinimum Voltage. +inX_maxMaximum voltage. +inX_min_alarm Voltage low alarm. +inX_max_alarm Voltage high alarm. 
diff --git a/Documentation/hwmon/index.rst b/Documentation/hwmon/index.rst index 55ff4b7c5349..056f7107d7b8 100644 --- a/Documentation/hwmon/index.rst +++ b/Documentation/hwmon/index.rst @@ -30,6 +30,7 @@ Hardware Monitoring Kernel Drivers adm1026 adm1031 adm1177 + adm1266 adm1275 adm9240 ads7828 diff --git a/drivers/hwmon/pmbus/Kconfig b/drivers/hwmon/pmbus/Kconfig index a337195b1c39..da34083e1ffd 100644 --- a/drivers/hwmon/pmbus/Kconfig +++ b/drivers/hwmon/pmbus/Kconfig @@ -26,6 +26,15 @@ config SENSORS_PMBUS This driver can also be built as a module. If so, the module will be called pmbus. +config SENSORS_ADM1266 + tristate "Analog Devices ADM1266 Sequencer" + help + If you say yes here you get hardware monitoring support for Analog + Devices ADM1266 Cascadable Super Sequencer. + + This driver can also be built as a module. If so, the module will + be called adm1266. + config SENSORS_ADM1275 tristate "Analog Devices ADM1275 and compatibles" help diff --git a/drivers/hwmon/pmbus/Makefile b/drivers/hwmon/pmbus/Makefile index c4b15db996ad..da41d22be1c9 100644 --- a/drivers/hwmon/pmbus/Makefile +++ b/drivers/hwmon/pmbus/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_PMBUS)+= pmbus_core.o obj-$(CONFIG_SENSORS_PMBUS)+= pmbus.o +obj-$(CONFIG_SENSORS_ADM1266) += adm1266.o obj-$(CONFIG_SENSORS_ADM1275) += adm1275.o obj-$(CONFIG_SENSORS_BEL_PFE) += bel-pfe.o obj-$(CONFIG_SENSORS_IBM_CFFPS)+= ibm-cffps.o diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c new file mode 100644 index ..79e8d90886b8 --- /dev/null +++ b/drivers/hwmon/pmbus/adm1266.c @@ -0,0 +1,65 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * ADM1266 - Cascadable Super Sequencer with Margin + * Control and Fault Recording + * + * Copyright 2020 Analog Devices Inc. 
+ */ + +#include +#include +#include +#include +#include "pmbus.h" +#include + +struct adm1266_data { + struct pmbus_driver_info info; + struct i2c_client *client; +}; + +static int adm1266_probe(struct i2c_client *client, const struct i2c_device_id *id) +{ + struct adm1266_data *data; + int i; + + data = devm_kzalloc(>dev, sizeof(struct adm1266_data), GFP_KERNEL); + if (!data) + return -ENOMEM; + + data->client = client; + data->info.pages = 17; + data->info.format[PSC_VOLTAGE_OUT] = linear; + for (i = 0; i < data->info.pages; i++) + data->info.func[i] = PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT; + + return pmbus_do_probe(client, id, >info); +} + +static const struct of_device_id adm1266_of_match[] = { + { .compatible = "adi,adm1266" }, + { } +}; +MODULE_DEVICE_TABLE(of, adm1266_of_match); + +static const struct i2c_device_id adm1266_id[] = { + { "adm1266", 0 }, + { } +}; +MODULE_DEVICE_TABLE(i2c, adm1266_id); + +static struct i2c_driver adm1266_driver = { +
[PATCH v6 5/9] hwmon: pmbus: adm1266: read blackbox
From: Alexandru Tachici Use the nvmem kernel api to expose the black box chip functionality to userspace. Using this feature, the device is capable of recording to nonvolatile flash memory the vital data about the system status that caused the system to perform a black box write. A blackbox is 64 bytes of data containing all the status registers, last two states of the sequencer, timestamp and counters. The mapping of this data is described in the adm1266 datasheet. On power-up the driver sets the unix time to the adm1266 using the SET_RTC command. This value is incremented by an internal clock and it is used as timestamp for the black box feature. Signed-off-by: Alexandru Tachici --- drivers/hwmon/pmbus/adm1266.c | 165 ++ 1 file changed, 165 insertions(+) diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c index dbffc6d12e87..c06465100320 100644 --- a/drivers/hwmon/pmbus/adm1266.c +++ b/drivers/hwmon/pmbus/adm1266.c @@ -15,12 +15,19 @@ #include #include #include +#include +#include #include "pmbus.h" #include +#include +#define ADM1266_BLACKBOX_CONFIG0xD3 #define ADM1266_PDIO_CONFIG0xD4 #define ADM1266_READ_STATE 0xD9 +#define ADM1266_READ_BLACKBOX 0xDE +#define ADM1266_SET_RTC0xDF #define ADM1266_GPIO_CONFIG0xE1 +#define ADM1266_BLACKBOX_INFO 0xE6 #define ADM1266_PDIO_STATUS0xE9 #define ADM1266_GPIO_STATUS0xEA @@ -37,6 +44,9 @@ #define ADM1266_PDIO_GLITCH_FILT(x)FIELD_GET(GENMASK(12, 9), x) #define ADM1266_PDIO_OUT_CFG(x)FIELD_GET(GENMASK(2, 0), x) +#define ADM1266_BLACKBOX_OFFSET0x7F700 +#define ADM1266_BLACKBOX_SIZE 64 + #define ADM1266_PMBUS_BLOCK_MAX255 struct adm1266_data { @@ -45,11 +55,22 @@ struct adm1266_data { const char *gpio_names[ADM1266_GPIO_NR + ADM1266_PDIO_NR]; struct i2c_client *client; struct dentry *debugfs_dir; + struct nvmem_config nvmem_config; + struct nvmem_device *nvmem; + u8 *dev_mem; struct mutex buf_mutex; u8 write_buf[ADM1266_PMBUS_BLOCK_MAX + 1] cacheline_aligned; u8 read_buf[ADM1266_PMBUS_BLOCK_MAX + 1] 
cacheline_aligned; }; +static const struct nvmem_cell_info adm1266_nvmem_cells[] = { + { + .name = "blackbox", + .offset = ADM1266_BLACKBOX_OFFSET, + .bytes = 2048, + }, +}; + DECLARE_CRC8_TABLE(pmbus_crc_table); /* @@ -321,6 +342,142 @@ static void adm1266_init_debugfs(struct adm1266_data *data) data->debugfs_dir, adm1266_state_read); } +#if IS_ENABLED(CONFIG_NVMEM) +static int adm1266_nvmem_read_blackbox(struct adm1266_data *data, u8 *read_buff) +{ + int record_count; + char index; + u8 buf[5]; + int ret; + + ret = i2c_smbus_read_block_data(data->client, ADM1266_BLACKBOX_INFO, buf); + if (ret < 0) + return ret; + + if (ret != 4) + return -EIO; + + record_count = buf[3]; + + for (index = 0; index < record_count; index++) { + ret = adm1266_pmbus_block_xfer(data, ADM1266_READ_BLACKBOX, 1, &index, read_buff); + if (ret < 0) + return ret; + + if (ret != ADM1266_BLACKBOX_SIZE) + return -EIO; + + read_buff += ADM1266_BLACKBOX_SIZE; + } + + return 0; +} + +static bool adm1266_cell_is_accessed(const struct nvmem_cell_info *mem_cell, unsigned int offset, +size_t bytes) +{ + unsigned int start_addr = offset; + unsigned int end_addr = offset + bytes; + unsigned int cell_start = mem_cell->offset; + unsigned int cell_end = mem_cell->offset + mem_cell->bytes; + + return start_addr <= cell_end && cell_start <= end_addr; +} + +static int adm1266_read_mem_cell(struct adm1266_data *data, const struct nvmem_cell_info *mem_cell) +{ + u8 *mem_offset; + int ret; + + switch (mem_cell->offset) { + case ADM1266_BLACKBOX_OFFSET: + mem_offset = data->dev_mem + mem_cell->offset; + + memset(mem_offset, 0, ADM1266_BLACKBOX_SIZE); + + ret = adm1266_nvmem_read_blackbox(data, mem_offset); + if (ret) + dev_err(&data->client->dev, "Could not read blackbox!"); + return ret; + default: + return -EINVAL; + } +} + +static int adm1266_nvmem_read(void *priv, unsigned int offset, void *val, + size_t bytes) +{ + const struct nvmem_cell_info *mem_cell; + struct adm1266_data *data = priv; + int ret; + int i; + + for (i = 0; i < data->nvmem_config.ncells; i++) { + mem_cell = &adm1266_nvmem_cells[i]; + if (!adm1266_cell_is_accessed(mem_cell, offset,
[PATCH v6 3/9] hwmon: pmbus: adm1266: Add support for GPIOs
From: Alexandru Tachici Adm1266 exposes 9 GPIOs and 16 PDIOs which are currently read-only. They are controlled by the internal sequencing engine. This patch makes the adm1266 driver expose GPIOs and PDIOs to user-space using the GPIO provider kernel API. Signed-off-by: Alexandru Tachici --- drivers/hwmon/pmbus/adm1266.c | 204 ++ 1 file changed, 204 insertions(+) diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c index 63975eba34ad..be911de02cf6 100644 --- a/drivers/hwmon/pmbus/adm1266.c +++ b/drivers/hwmon/pmbus/adm1266.c @@ -6,18 +6,42 @@ * Copyright 2020 Analog Devices Inc. */ +#include #include +#include +#include #include +#include #include #include #include #include "pmbus.h" #include +#define ADM1266_PDIO_CONFIG 0xD4 +#define ADM1266_GPIO_CONFIG 0xE1 +#define ADM1266_PDIO_STATUS 0xE9 +#define ADM1266_GPIO_STATUS 0xEA + +/* ADM1266 GPIO defines */ +#define ADM1266_GPIO_NR 9 +#define ADM1266_GPIO_FUNCTIONS(x) FIELD_GET(BIT(0), x) +#define ADM1266_GPIO_INPUT_EN(x) FIELD_GET(BIT(2), x) +#define ADM1266_GPIO_OUTPUT_EN(x) FIELD_GET(BIT(3), x) +#define ADM1266_GPIO_OPEN_DRAIN(x) FIELD_GET(BIT(4), x) + +/* ADM1266 PDIO defines */ +#define ADM1266_PDIO_NR 16 +#define ADM1266_PDIO_PIN_CFG(x) FIELD_GET(GENMASK(15, 13), x) +#define ADM1266_PDIO_GLITCH_FILT(x) FIELD_GET(GENMASK(12, 9), x) +#define ADM1266_PDIO_OUT_CFG(x) FIELD_GET(GENMASK(2, 0), x) + #define ADM1266_PMBUS_BLOCK_MAX 255 struct adm1266_data { struct pmbus_driver_info info; + struct gpio_chip gc; + const char *gpio_names[ADM1266_GPIO_NR + ADM1266_PDIO_NR]; struct i2c_client *client; struct mutex buf_mutex; u8 write_buf[ADM1266_PMBUS_BLOCK_MAX + 1] cacheline_aligned; @@ -88,9 +112,185 @@ static int adm1266_pmbus_block_xfer(struct adm1266_data *data, u8 cmd, u8 w_len, return ret; } +static const unsigned int adm1266_gpio_mapping[ADM1266_GPIO_NR][2] = { + {1, 0}, + {2, 1}, + {3, 2}, + {4, 8}, + {5, 9}, + {6, 10}, + {7, 11}, + {8, 6}, + {9, 7}, +}; + +static const char *adm1266_names[ADM1266_GPIO_NR + 
ADM1266_PDIO_NR] = { + "GPIO1", "GPIO2", "GPIO3", "GPIO4", "GPIO5", "GPIO6", "GPIO7", "GPIO8", + "GPIO9", "PDIO1", "PDIO2", "PDIO3", "PDIO4", "PDIO5", "PDIO6", + "PDIO7", "PDIO8", "PDIO9", "PDIO10", "PDIO11", "PDIO12", "PDIO13", + "PDIO14", "PDIO15", "PDIO16", +}; + +static int adm1266_gpio_get(struct gpio_chip *chip, unsigned int offset) +{ + struct adm1266_data *data = gpiochip_get_data(chip); + u8 read_buf[I2C_SMBUS_BLOCK_MAX + 1]; + unsigned long pins_status; + unsigned int pmbus_cmd; + int ret; + + if (offset < ADM1266_GPIO_NR) + pmbus_cmd = ADM1266_GPIO_STATUS; + else + pmbus_cmd = ADM1266_PDIO_STATUS; + + ret = i2c_smbus_read_block_data(data->client, pmbus_cmd, read_buf); + if (ret < 0) + return ret; + + pins_status = read_buf[0] + (read_buf[1] << 8); + if (offset < ADM1266_GPIO_NR) + return test_bit(adm1266_gpio_mapping[offset][1], &pins_status); + + return test_bit(offset - ADM1266_GPIO_NR, &pins_status); +} + +static int adm1266_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask, +unsigned long *bits) +{ + struct adm1266_data *data = gpiochip_get_data(chip); + u8 read_buf[ADM1266_PMBUS_BLOCK_MAX + 1]; + unsigned long status; + unsigned int gpio_nr; + int ret; + + ret = i2c_smbus_read_block_data(data->client, ADM1266_GPIO_STATUS, read_buf); + if (ret < 0) + return ret; + + status = read_buf[0] + (read_buf[1] << 8); + + *bits = 0; + for_each_set_bit(gpio_nr, mask, ADM1266_GPIO_NR) { + if (test_bit(adm1266_gpio_mapping[gpio_nr][1], &status)) + set_bit(gpio_nr, bits); + } + + ret = i2c_smbus_read_block_data(data->client, ADM1266_PDIO_STATUS, read_buf); + if (ret < 0) + return ret; + + status = read_buf[0] + (read_buf[1] << 8); + + *bits = 0; + for_each_set_bit_from(gpio_nr, mask, ADM1266_GPIO_NR + ADM1266_PDIO_NR) { + if (test_bit(gpio_nr - ADM1266_GPIO_NR, &status)) + set_bit(gpio_nr, bits); + } + + return 0; +} + +static void adm1266_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip) +{ + struct adm1266_data *data = gpiochip_get_data(chip); + u8 
read_buf[ADM1266_PMBUS_BLOCK_MAX + 1]; + unsigned long gpio_config; + unsigned long pdio_config; + unsigned long pin_cfg; + u8 write_cmd; + int ret; + int i; + + for (i = 0; i < ADM1266_GPIO_NR; i++) { + write_cmd =
[PATCH v6 9/9] dt-bindings: hwmon: Add bindings for ADM1266
From: Alexandru Tachici Add bindings for the Analog Devices ADM1266 sequencer. Signed-off-by: Alexandru Tachici --- .../bindings/hwmon/adi,adm1266.yaml | 56 +++ 1 file changed, 56 insertions(+) create mode 100644 Documentation/devicetree/bindings/hwmon/adi,adm1266.yaml diff --git a/Documentation/devicetree/bindings/hwmon/adi,adm1266.yaml b/Documentation/devicetree/bindings/hwmon/adi,adm1266.yaml new file mode 100644 index ..ad92686e2ee6 --- /dev/null +++ b/Documentation/devicetree/bindings/hwmon/adi,adm1266.yaml @@ -0,0 +1,56 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/hwmon/adi,adm1266.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Analog Devices ADM1266 Cascadable Super Sequencer with Margin + Control and Fault Recording + +maintainers: + - Alexandru Tachici + +description: | + Analog Devices ADM1266 Cascadable Super Sequencer with Margin + Control and Fault Recording. + https://www.analog.com/media/en/technical-documentation/data-sheets/ADM1266.pdf + +properties: + compatible: +enum: + - adi,adm1266 + + reg: +description: | + I2C address of slave device. +items: + minimum: 0x40 + maximum: 0x4F + + avcc-supply: +description: | + Phandle to the Avcc power supply. + + adi,master-adm1266: +description: | + Represents phandle of a master ADM1266 device cascaded through the IDB. +$ref: "/schemas/types.yaml#/definitions/phandle" + +required: + - compatible + - reg + +additionalProperties: false + +examples: + - | +i2c0 { +#address-cells = <1>; +#size-cells = <0>; + +adm1266@40 { +compatible = "adi,adm1266"; +reg = <0x40>; +}; +}; +... -- 2.20.1
[PATCH v6 6/9] hwmon: pmbus: adm1266: Add group command support
From: Alexandru Tachici The Group Command Protocol is used to send commands to more than one PMBus device. Some devices working together require that they execute some commands all at once. The commands are sent in one continuous transmission. When the devices detect the STOP condition that ends the sending of commands, they all begin executing the command they received. This patch adds support for the group command protocol. Signed-off-by: Alexandru Tachici --- drivers/hwmon/pmbus/adm1266.c | 50 +++ 1 file changed, 50 insertions(+) diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c index c06465100320..34bd4e652729 100644 --- a/drivers/hwmon/pmbus/adm1266.c +++ b/drivers/hwmon/pmbus/adm1266.c @@ -73,6 +73,56 @@ static const struct nvmem_cell_info adm1266_nvmem_cells[] = { DECLARE_CRC8_TABLE(pmbus_crc_table); +/* PMBus Group command. */ +static int adm1266_pmbus_group_command(struct adm1266_data *data, struct i2c_client **clients, + u8 nr_clients, u8 cmd, u8 w_len, u8 *data_w) +{ + struct i2c_msg *msgs; + u8 addr; + int ret; + int i; + + msgs = kcalloc(nr_clients, sizeof(struct i2c_msg), GFP_KERNEL); + if (!msgs) + return -ENOMEM; + + for (i = 0; i < nr_clients; i++) { + msgs[i].addr = clients[i]->addr; + msgs[i].len = w_len + 1; + + msgs[i].buf = kcalloc(ADM1266_PMBUS_BLOCK_MAX + 2, sizeof(u8), GFP_KERNEL); + if (!msgs[i].buf) { + ret = -ENOMEM; + goto cleanup; + } + + msgs[i].buf[0] = cmd; + memcpy(&msgs[i].buf[1], data_w, w_len); + + if (clients[i]->flags & I2C_CLIENT_PEC) { + u8 crc = 0; + + addr = i2c_8bit_addr_from_msg(&msgs[i]); + crc = crc8(pmbus_crc_table, &addr, 1, crc); + crc = crc8(pmbus_crc_table, msgs[i].buf, msgs[i].len, + crc); + + msgs[i].buf[msgs[i].len] = crc; + msgs[i].len++; + } + }; + + ret = i2c_transfer(data->client->adapter, msgs, nr_clients); + +cleanup: + for (i = i - 1; i >= 0; i--) + kfree(msgs[i].buf); + + kfree(msgs); + + return ret; +} + /* * Different from Block Read as it sends data and waits for the slave to * return 
a value dependent on that data. The protocol is simply a Write Block -- 2.20.1
Re: [PATCH v7 4/7] fs: Introduce O_MAYEXEC flag for openat2(2)
* Al Viro: > On Thu, Jul 23, 2020 at 07:12:24PM +0200, Mickaël Salaün wrote: >> When the O_MAYEXEC flag is passed, openat2(2) may be subject to >> additional restrictions depending on a security policy managed by the >> kernel through a sysctl or implemented by an LSM thanks to the >> inode_permission hook. This new flag is ignored by open(2) and >> openat(2) because of their unspecified flags handling. When used with >> openat2(2), the default behavior is only to forbid to open a directory. > > Correct me if I'm wrong, but it looks like you are introducing a magical > flag that would mean "let the Linux S&M crowd take an extra special whip > for this open()". > > Why is it done during open? If the caller is passing it deliberately, > why not have an explicit request to apply given torture device to an > already opened file? Why not sys_masochism(int fd, char *hurt_flavour), > for that matter? While I do not think this is appropriate language for a workplace, Al has a point: If the auditing event can be generated on an already-open descriptor, it would also cover scenarios like this one: perl < /path/to/script Where the process that opens the file does not (and cannot) know that it will be used for execution purposes. Thanks, Florian
[PATCH v4 03/10] powerpc/smp: Move powerpc_topology above
Just moving the powerpc_topology description above. This will help in using functions in this file and avoid declarations. No other functional changes. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- arch/powerpc/kernel/smp.c | 116 +++--- 1 file changed, 58 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 283a04e54f52..a685915e5941 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -818,6 +818,64 @@ static int init_cpu_l1_cache_map(int cpu) return err; } +static bool shared_caches; + +#ifdef CONFIG_SCHED_SMT +/* cpumask of CPUs with asymmetric SMT dependency */ +static int powerpc_smt_flags(void) +{ + int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; + + if (cpu_has_feature(CPU_FTR_ASYM_SMT)) { + printk_once(KERN_INFO "Enabling Asymmetric SMT scheduling\n"); + flags |= SD_ASYM_PACKING; + } + return flags; +} +#endif + +/* + * P9 has a slightly odd architecture where pairs of cores share an L2 cache. + * This topology makes it *much* cheaper to migrate tasks between adjacent cores + * since the migrated task remains cache hot. We want to take advantage of this + * at the scheduler level so an extra topology level is required. + */ +static int powerpc_shared_cache_flags(void) +{ + return SD_SHARE_PKG_RESOURCES; +} + +/* + * We can't just pass cpu_l2_cache_mask() directly because + * it returns a non-const pointer and the compiler barfs on that. 
+ */ +static const struct cpumask *shared_cache_mask(int cpu) +{ + if (shared_caches) + return cpu_l2_cache_mask(cpu); + + if (has_big_cores) + return cpu_smallcore_mask(cpu); + + return per_cpu(cpu_sibling_map, cpu); +} + +#ifdef CONFIG_SCHED_SMT +static const struct cpumask *smallcore_smt_mask(int cpu) +{ + return cpu_smallcore_mask(cpu); +} +#endif + +static struct sched_domain_topology_level powerpc_topology[] = { +#ifdef CONFIG_SCHED_SMT + { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, +#endif + { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, + { cpu_cpu_mask, SD_INIT_NAME(DIE) }, + { NULL, }, +}; + static int init_big_cores(void) { int cpu; @@ -1247,8 +1305,6 @@ static void add_cpu_to_masks(int cpu) set_cpus_related(cpu, i, cpu_core_mask); } -static bool shared_caches; - /* Activate a secondary processor. */ void start_secondary(void *unused) { @@ -1312,62 +1368,6 @@ int setup_profiling_timer(unsigned int multiplier) return 0; } -#ifdef CONFIG_SCHED_SMT -/* cpumask of CPUs with asymmetric SMT dependency */ -static int powerpc_smt_flags(void) -{ - int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; - - if (cpu_has_feature(CPU_FTR_ASYM_SMT)) { - printk_once(KERN_INFO "Enabling Asymmetric SMT scheduling\n"); - flags |= SD_ASYM_PACKING; - } - return flags; -} -#endif - -/* - * P9 has a slightly odd architecture where pairs of cores share an L2 cache. - * This topology makes it *much* cheaper to migrate tasks between adjacent cores - * since the migrated task remains cache hot. We want to take advantage of this - * at the scheduler level so an extra topology level is required. - */ -static int powerpc_shared_cache_flags(void) -{ - return SD_SHARE_PKG_RESOURCES; -} - -/* - * We can't just pass cpu_l2_cache_mask() directly because - * it returns a non-const pointer and the compiler barfs on that. 
- */ -static const struct cpumask *shared_cache_mask(int cpu) -{ - if (shared_caches) - return cpu_l2_cache_mask(cpu); - - if (has_big_cores) - return cpu_smallcore_mask(cpu); - - return per_cpu(cpu_sibling_map, cpu); -} - -#ifdef CONFIG_SCHED_SMT -static const struct cpumask *smallcore_smt_mask(int cpu) -{ - return cpu_smallcore_mask(cpu); -} -#endif - -static struct sched_domain_topology_level powerpc_topology[] = { -#ifdef CONFIG_SCHED_SMT - { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, -#endif - { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, - { cpu_cpu_mask, SD_INIT_NAME(DIE) }, - { NULL, }, -}; - void __init smp_cpus_done(unsigned int max_cpus) { /* -- 2.17.1
Re: [PATCH] kernel.h: Remove duplicate include of asm/div64.h
On Sun, Jul 26, 2020 at 11:48:52PM -0400, Arvind Sankar wrote: > This seems to have been added inadvertently in commit > 72deb455b5ec ("block: remove CONFIG_LBDAF") > > Fixes: 72deb455b5ec ("block: remove CONFIG_LBDAF") > Signed-off-by: Arvind Sankar > Cc: Christoph Hellwig Looks good: Reviewed-by: Christoph Hellwig
Re: linux-next: build failure after merge of the bluetooth tree
The fixup looks good to me, thanks.
Re: [PATCH] [net/ipv6] ip6_output: Add ipv6_pinfo null check
Hi Gaurav, Thank you for the patch! Perhaps something to improve: [auto build test WARNING on sparc-next/master] [also build test WARNING on ipvs/master linus/master v5.8-rc7 next-20200724] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Gaurav-Singh/ip6_output-Add-ipv6_pinfo-null-check/20200727-113949 base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next.git master config: csky-defconfig (attached as .config) compiler: csky-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=csky If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All warnings (new ones prefixed by >>): net/ipv6/ip6_output.c: In function 'ip6_autoflowlabel': >> net/ipv6/ip6_output.c:188:1: warning: control reaches end of non-void >> function [-Wreturn-type] 188 | } | ^ vim +188 net/ipv6/ip6_output.c ^1da177e4c3f41 Linus Torvalds 2005-04-16 181 e9191ffb65d8e1 Ben Hutchings 2018-01-22 182 bool ip6_autoflowlabel(struct net *net, const struct ipv6_pinfo *np) 513674b5a2c9c7 Shaohua Li 2017-12-20 183 { 5bdc1ea8a7d229 Gaurav Singh 2020-07-26 184 if (np && np->autoflowlabel_set) 513674b5a2c9c7 Shaohua Li 2017-12-20 185 return np->autoflowlabel; 5bdc1ea8a7d229 Gaurav Singh 2020-07-26 186 else 5bdc1ea8a7d229 Gaurav Singh 2020-07-26 187 ip6_default_np_autolabel(net); 513674b5a2c9c7 Shaohua Li 2017-12-20 @188 } 513674b5a2c9c7 Shaohua Li 2017-12-20 189 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org
[PATCH v4 01/10] powerpc/smp: Fix a warning under !NEED_MULTIPLE_NODES
Fix a build warning in a non CONFIG_NEED_MULTIPLE_NODES build "error: numa_cpu_lookup_table undeclared" Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v2 -> v3: Removed node caching part. Rewrote the Commit msg (Michael Ellerman) Renamed to powerpc/smp: Fix a warning under !NEED_MULTIPLE_NODES arch/powerpc/kernel/smp.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 73199470c265..edf94ca64eea 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -860,6 +860,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus) GFP_KERNEL, cpu_to_node(cpu)); zalloc_cpumask_var_node(&per_cpu(cpu_core_map, cpu), GFP_KERNEL, cpu_to_node(cpu)); +#ifdef CONFIG_NEED_MULTIPLE_NODES /* * numa_node_id() works after this. */ @@ -868,6 +869,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus) set_cpu_numa_mem(cpu, local_memory_node(numa_cpu_lookup_table[cpu])); } +#endif } /* Init the cpumasks so the boot CPU is related to itself */ -- 2.17.1
Re: [PATCH 06/10] remoteproc: imx_rproc: add load hook
On Fri, Jul 24, 2020 at 04:08:09PM +0800, Peng Fan wrote: > To i.MX8, we not able to see the correct data written into TCM when > using ioremap_wc, so use ioremap. > > However common elf loader using memset. > > To arm64, "dc zva, dst" is used in memset. > Per ARM DDI 0487A.j, chapter C5.3.8 DC ZVA, Data Cache Zero by VA, > > "If the memory region being zeroed is any type of Device memory, > this instruction can give an alignment fault which is prioritized > in the same way as other alignment faults that are determined > by the memory type." > > On i.MX platforms, when elf is loaded to onchip TCM area, the region > is ioremapped, so "dc zva, dst" will trigger abort. > > So add i.MX specific loader to address the TCM write issue. First I wanted to ask: if it is an ARM64-related issue, why do we handle it in an i.MX-specific driver? But after searching and finding this thread: https://lkml.org/lkml/2020/4/18/93 it looks to me like most of the related maintainer questions were not answered. > The change not impact i.MX6/7 function. Hm... that is an impossible assumption, unless you were able to test all firmware variants in the wild. You changed the behavior of the ELF parser in the first place. That means it is not i.MX6/7 that is affected, but the firmware used on these platforms. 
> Signed-off-by: Peng Fan > --- > drivers/remoteproc/imx_rproc.c | 76 > ++ > 1 file changed, 76 insertions(+) > > diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c > index aee790efbf7b..c23726091228 100644 > --- a/drivers/remoteproc/imx_rproc.c > +++ b/drivers/remoteproc/imx_rproc.c > @@ -4,6 +4,7 @@ > */ > > #include > +#include > #include > #include > #include > @@ -15,6 +16,9 @@ > #include > #include > > +#include "remoteproc_internal.h" > +#include "remoteproc_elf_helpers.h" > + > #define IMX7D_SRC_SCR 0x0C > #define IMX7D_ENABLE_M4 BIT(3) > #define IMX7D_SW_M4P_RST BIT(2) > @@ -247,10 +251,82 @@ static void *imx_rproc_da_to_va(struct rproc *rproc, > u64 da, size_t len) > return va; > } > > +static int imx_rproc_elf_load_segments(struct rproc *rproc, const struct > firmware *fw) > +{ > + struct device *dev = &rproc->dev; > + const void *ehdr, *phdr; > + int i, ret = 0; > + u16 phnum; > + const u8 *elf_data = fw->data; > + u8 class = fw_elf_get_class(fw); > + u32 elf_phdr_get_size = elf_size_of_phdr(class); > + > + ehdr = elf_data; > + phnum = elf_hdr_get_e_phnum(class, ehdr); > + phdr = elf_data + elf_hdr_get_e_phoff(class, ehdr); > + > + /* go through the available ELF segments */ > + for (i = 0; i < phnum; i++, phdr += elf_phdr_get_size) { > + u64 da = elf_phdr_get_p_paddr(class, phdr); > + u64 memsz = elf_phdr_get_p_memsz(class, phdr); > + u64 filesz = elf_phdr_get_p_filesz(class, phdr); > + u64 offset = elf_phdr_get_p_offset(class, phdr); > + u32 type = elf_phdr_get_p_type(class, phdr); > + void *ptr; > + > + if (type != PT_LOAD) > + continue; > + > + dev_dbg(dev, "phdr: type %d da 0x%llx memsz 0x%llx filesz > 0x%llx\n", > + type, da, memsz, filesz); > + > + if (filesz > memsz) { > + dev_err(dev, "bad phdr filesz 0x%llx memsz 0x%llx\n", > + filesz, memsz); > + ret = -EINVAL; > + break; > + } > + > + if (offset + filesz > fw->size) { > + dev_err(dev, "truncated fw: need 0x%llx avail 0x%zx\n", > + offset + filesz, fw->size); > + ret = 
-EINVAL; > + break; > + } > + > + if (!rproc_u64_fit_in_size_t(memsz)) { > + dev_err(dev, "size (%llx) does not fit in size_t > type\n", > + memsz); > + ret = -EOVERFLOW; > + break; > + } > + > + /* grab the kernel address for this device address */ > + ptr = rproc_da_to_va(rproc, da, memsz); > + if (!ptr) { > + dev_err(dev, "bad phdr da 0x%llx mem 0x%llx\n", da, > + memsz); > + ret = -EINVAL; > + break; > + } > + > + /* put the segment where the remote processor expects it */ > + if (filesz) > + memcpy_toio(ptr, elf_data + offset, filesz); > + } > + > + return ret; > +} > + > static const struct rproc_ops imx_rproc_ops = { > .start = imx_rproc_start, > .stop = imx_rproc_stop, > .da_to_va = imx_rproc_da_to_va, > + .load = imx_rproc_elf_load_segments, > + .parse_fw = rproc_elf_load_rsc_table, > +
Re: [PATCH v2] net: ipv6: fix use-after-free Read in __xfrm6_tunnel_spi_lookup
On Mon, Jul 27, 2020 at 1:37 AM Cong Wang wrote: > > On Sat, Jul 25, 2020 at 11:12 PM B K Karthik wrote: > > > > On Sun, Jul 26, 2020 at 11:05 AM Cong Wang wrote: > > > > > > On Sat, Jul 25, 2020 at 8:09 PM B K Karthik > > > wrote: > > > > @@ -103,10 +103,10 @@ static int __xfrm6_tunnel_spi_check(struct net > > > > *net, u32 spi) > > > > { > > > > struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net); > > > > struct xfrm6_tunnel_spi *x6spi; > > > > - int index = xfrm6_tunnel_spi_hash_byspi(spi); > > > > + int index = xfrm6_tunnel_spi_hash_byaddr((const xfrm_address_t > > > > *)&spi); > > > > > > > > hlist_for_each_entry(x6spi, > > > > -&xfrm6_tn->spi_byspi[index], > > > > +&xfrm6_tn->spi_byaddr[index], > > > > list_byspi) { > > > > if (x6spi->spi == spi) > > > > > > How did you convince yourself this is correct? This lookup is still > > > using spi. :) > > > > I'm sorry, but my intention behind writing this patch was not to fix > > the UAF, but to fix a slab-out-of-bound. > > Odd, your $subject is clearly UAF, so is the stack trace in your changelog. > :) > > > > If required, I can definitely change the subject line and resend the > > patch, but I figured this was correct for > > https://syzkaller.appspot.com/bug?id=058d05f470583ab2843b1d6785fa8d0658ef66ae > > . since that particular report did not have a reproducer, > > Dmitry Vyukov suggested that I test this patch on > > other reports for xfrm/spi . > > You have to change it to avoid misleading. I will do that once somebody tells me this patch is reasonable to avoid wasting people's time. > > > > Forgive me if this was the wrong way to send a patch for that > > particular report, but I guessed since the reproducer did not trigger > > the crash > > for UAF, I would leave the subject line as 'fix UAF' :) > > > > xfrm6_spi_hash_by_hash seemed more convincing because I had to prevent > > a slab-out-of-bounds because it uses ipv6_addr_hash. 
> > It would be of great help if you could help me understand how this was > > able to fix a UAF. > > Sure, you just avoid a pointer deref, which of course can fix the UAF, > but I still don't think it is correct in any aspect. I saw a function call being made to tomoyo_check_acl(). The next thing happening is a kfree(). Also, spi_hash_byspi just returns spi % XFRM6_TUNNEL_SPI_BYSPI_HSIZE. I'm a mentee, hence I would say my knowledge is very limited, please let me know if I am making a horrible mistake somewhere, but return (__force u32)(a->s6_addr32[0] ^ a->s6_addr32[1] ^ a->s6_addr32[2] ^ a->s6_addr32[3]); seems like a better choice because as David S. Miller said "It is doing a XOR on all bits of an IPv6 address, it is doing more bit shifting which the existing hash was ignoring". Please help me understand this better if I am going wrong. > > Even if it is a OOB, you still have to explain why it happened. Once > again, I can't see how it could happen either. > > > > > > > > > More importantly, can you explain how UAF happens? Apparently > > > the syzbot stack traces you quote make no sense at all. I also > > > looked at other similar reports, none of them makes sense to me. > > > > Forgive me, but I do not understand what you mean by the stack traces > > (this or other similar reports) "make no sense". > > Because the stack trace in your changelog clearly shows it is allocated > in tomoyo_init_log(), which is a buffer in struct tomoyo_query, but > none of xfrm paths uses it. Or do you see anything otherwise? Aren't there indirect inet calls and netfilter hooks? I'm sorry I do not see anything otherwise. Please help me understand. thanks, karthik
[PATCH v4 10/10] powerpc/smp: Implement cpu_to_coregroup_id
Lookup the coregroup id from the associativity array. If unable to detect the coregroup id, fall back on the core id. This way, ensure sched_domain degenerates and an extra sched domain is not created. Ideally this function should have been implemented in arch/powerpc/kernel/smp.c. However if it's implemented in mm/numa.c, we don't need to find the primary domain again. If the device-tree mentions more than one coregroup, then the kernel implements only the last or the smallest coregroup, which currently corresponds to the penultimate domain in the device-tree. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Move coregroup_enabled before getting associativity (Gautham) arch/powerpc/mm/numa.c | 20 1 file changed, 20 insertions(+) diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index 0d57779e7942..8b3b3ec7fcc4 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -1218,6 +1218,26 @@ int find_and_online_cpu_nid(int cpu) int cpu_to_coregroup_id(int cpu) { + __be32 associativity[VPHN_ASSOC_BUFSIZE] = {0}; + int index; + + if (cpu < 0 || cpu > nr_cpu_ids) + return -1; + + if (!coregroup_enabled) + goto out; + + if (!firmware_has_feature(FW_FEATURE_VPHN)) + goto out; + + if (vphn_get_associativity(cpu, associativity)) + goto out; + + index = of_read_number(associativity, 1); + if (index > min_common_depth + 1) + return of_read_number(&associativity[index - 1], 1); + +out: return cpu_to_core_id(cpu); } -- 2.17.1
[PATCH v4 02/10] powerpc/smp: Merge Power9 topology with Power topology
A new sched_domain_topology_level was added just for Power9. However the same can be achieved by merging powerpc_topology with power9_topology and makes the code more simpler especially when adding a new sched domain. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Replaced a reference to cpu_smt_mask with per_cpu(cpu_sibling_map, cpu) since cpu_smt_mask is only defined under CONFIG_SCHED_SMT arch/powerpc/kernel/smp.c | 33 ++--- 1 file changed, 10 insertions(+), 23 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index edf94ca64eea..283a04e54f52 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -1313,7 +1313,7 @@ int setup_profiling_timer(unsigned int multiplier) } #ifdef CONFIG_SCHED_SMT -/* cpumask of CPUs with asymetric SMT dependancy */ +/* cpumask of CPUs with asymmetric SMT dependency */ static int powerpc_smt_flags(void) { int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; @@ -1326,14 +1326,6 @@ static int powerpc_smt_flags(void) } #endif -static struct sched_domain_topology_level powerpc_topology[] = { -#ifdef CONFIG_SCHED_SMT - { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, -#endif - { cpu_cpu_mask, SD_INIT_NAME(DIE) }, - { NULL, }, -}; - /* * P9 has a slightly odd architecture where pairs of cores share an L2 cache. 
* This topology makes it *much* cheaper to migrate tasks between adjacent cores @@ -1351,7 +1343,13 @@ static int powerpc_shared_cache_flags(void) */ static const struct cpumask *shared_cache_mask(int cpu) { - return cpu_l2_cache_mask(cpu); + if (shared_caches) + return cpu_l2_cache_mask(cpu); + + if (has_big_cores) + return cpu_smallcore_mask(cpu); + + return per_cpu(cpu_sibling_map, cpu); } #ifdef CONFIG_SCHED_SMT @@ -1361,7 +1359,7 @@ static const struct cpumask *smallcore_smt_mask(int cpu) } #endif -static struct sched_domain_topology_level power9_topology[] = { +static struct sched_domain_topology_level powerpc_topology[] = { #ifdef CONFIG_SCHED_SMT { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, #endif @@ -1386,21 +1384,10 @@ void __init smp_cpus_done(unsigned int max_cpus) #ifdef CONFIG_SCHED_SMT if (has_big_cores) { pr_info("Big cores detected but using small core scheduling\n"); - power9_topology[0].mask = smallcore_smt_mask; powerpc_topology[0].mask = smallcore_smt_mask; } #endif - /* -* If any CPU detects that it's sharing a cache with another CPU then -* use the deeper topology that is aware of this sharing. -*/ - if (shared_caches) { - pr_info("Using shared cache scheduler topology\n"); - set_sched_topology(power9_topology); - } else { - pr_info("Using standard scheduler topology\n"); - set_sched_topology(powerpc_topology); - } + set_sched_topology(powerpc_topology); } #ifdef CONFIG_HOTPLUG_CPU -- 2.17.1
[PATCH v4 08/10] powerpc/smp: Allocate cpumask only after searching thread group
If allocated earlier and the search fails, then the cpumask needs to be freed. However, cpu_l1_cache_map can be allocated after we search the thread group. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- arch/powerpc/kernel/smp.c | 7 +++ 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 698000c7f76f..dab96a1203ec 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -797,10 +797,6 @@ static int init_cpu_l1_cache_map(int cpu) if (err) goto out; - zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu), - GFP_KERNEL, - cpu_to_node(cpu)); - cpu_group_start = get_cpu_thread_group_start(cpu, &tg); if (unlikely(cpu_group_start == -1)) { @@ -809,6 +805,9 @@ static int init_cpu_l1_cache_map(int cpu) goto out; } + zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu), + GFP_KERNEL, cpu_to_node(cpu)); + for (i = first_thread; i < first_thread + threads_per_core; i++) { int i_group_start = get_cpu_thread_group_start(i, &tg); -- 2.17.1
[PATCH v4 06/10] powerpc/smp: Generalize 2nd sched domain
Currently "CACHE" domain happens to be the 2nd sched domain as per powerpc_topology. This domain will collapse if cpumask of l2-cache is same as SMT domain. However we could generalize this domain such that it could mean either be a "CACHE" domain or a "BIGCORE" domain. While setting up the "CACHE" domain, check if shared_cache is already set. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Moved shared_cache topology fixup to fixup_topology (Gautham) arch/powerpc/kernel/smp.c | 49 --- 1 file changed, 35 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index d997c7411664..3c5ccf6d2b1c 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -85,6 +85,14 @@ EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map); EXPORT_PER_CPU_SYMBOL(cpu_core_map); EXPORT_SYMBOL_GPL(has_big_cores); +enum { +#ifdef CONFIG_SCHED_SMT + smt_idx, +#endif + bigcore_idx, + die_idx, +}; + #define MAX_THREAD_LIST_SIZE 8 #define THREAD_GROUP_SHARE_L1 1 struct thread_groups { @@ -851,13 +859,7 @@ static int powerpc_shared_cache_flags(void) */ static const struct cpumask *shared_cache_mask(int cpu) { - if (shared_caches) - return cpu_l2_cache_mask(cpu); - - if (has_big_cores) - return cpu_smallcore_mask(cpu); - - return per_cpu(cpu_sibling_map, cpu); + return per_cpu(cpu_l2_cache_map, cpu); } #ifdef CONFIG_SCHED_SMT @@ -867,11 +869,16 @@ static const struct cpumask *smallcore_smt_mask(int cpu) } #endif +static const struct cpumask *cpu_bigcore_mask(int cpu) +{ + return per_cpu(cpu_sibling_map, cpu); +} + static struct sched_domain_topology_level powerpc_topology[] = { #ifdef CONFIG_SCHED_SMT { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, #endif - { shared_cache_mask, 
powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, + { cpu_bigcore_mask, SD_INIT_NAME(BIGCORE) }, { cpu_cpu_mask, SD_INIT_NAME(DIE) }, { NULL, }, }; @@ -1311,7 +1318,6 @@ static void add_cpu_to_masks(int cpu) void start_secondary(void *unused) { unsigned int cpu = smp_processor_id(); - struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask; mmgrab(&init_mm); current->active_mm = &init_mm; @@ -1337,14 +1343,20 @@ void start_secondary(void *unused) /* Update topology CPU masks */ add_cpu_to_masks(cpu); - if (has_big_cores) - sibling_mask = cpu_smallcore_mask; /* * Check for any shared caches. Note that this must be done on a * per-core basis because one core in the pair might be disabled. */ - if (!cpumask_equal(cpu_l2_cache_mask(cpu), sibling_mask(cpu))) - shared_caches = true; + if (!shared_caches) { + struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask; + struct cpumask *mask = cpu_l2_cache_mask(cpu); + + if (has_big_cores) + sibling_mask = cpu_smallcore_mask; + + if (cpumask_weight(mask) > cpumask_weight(sibling_mask(cpu))) + shared_caches = true; + } set_numa_node(numa_cpu_lookup_table[cpu]); set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu])); @@ -1375,9 +1387,17 @@ static void fixup_topology(void) #ifdef CONFIG_SCHED_SMT if (has_big_cores) { pr_info("Big cores detected but using small core scheduling\n"); - powerpc_topology[0].mask = smallcore_smt_mask; + powerpc_topology[smt_idx].mask = smallcore_smt_mask; } #endif + if (shared_caches) { + pr_info("Using shared cache scheduler topology\n"); + powerpc_topology[bigcore_idx].mask = shared_cache_mask; + powerpc_topology[bigcore_idx].sd_flags = powerpc_shared_cache_flags; +#ifdef CONFIG_SCHED_DEBUG + powerpc_topology[bigcore_idx].name = "CACHE"; +#endif + } } void __init smp_cpus_done(unsigned int max_cpus) -- 2.17.1
[PATCH v4 09/10] powerpc/smp: Create coregroup domain
Add percpu coregroup maps and masks to create coregroup domain. If a coregroup doesn't exist, the coregroup domain will be degenerated in favour of SMT/CACHE domain. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Signed-off-by: Srikar Dronamraju --- Changelog v3 ->v4: if coregroup_support doesn't exist, update MC mask to the next smaller domain mask. Changelog v2 -> v3: Add optimization for mask updation under coregroup_support Changelog v1 -> v2: Moved coregroup topology fixup to fixup_topology (Gautham) arch/powerpc/include/asm/topology.h | 10 ++ arch/powerpc/kernel/smp.c | 48 + arch/powerpc/mm/numa.c | 5 +++ 3 files changed, 63 insertions(+) diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h index f0b6300e7dd3..6609174918ab 100644 --- a/arch/powerpc/include/asm/topology.h +++ b/arch/powerpc/include/asm/topology.h @@ -88,12 +88,22 @@ static inline int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc) #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR) extern int find_and_online_cpu_nid(int cpu); +extern int cpu_to_coregroup_id(int cpu); #else static inline int find_and_online_cpu_nid(int cpu) { return 0; } +static inline int cpu_to_coregroup_id(int cpu) +{ +#ifdef CONFIG_SMP + return cpu_to_core_id(cpu); +#else + return 0; +#endif +} + #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */ #include diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index dab96a1203ec..95f0bf72e283 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -80,6 +80,7 @@ DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map); DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map); DEFINE_PER_CPU(cpumask_var_t, cpu_l2_cache_map); DEFINE_PER_CPU(cpumask_var_t, cpu_core_map); +DEFINE_PER_CPU(cpumask_var_t, cpu_coregroup_map); 
EXPORT_PER_CPU_SYMBOL(cpu_sibling_map); EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map); @@ -91,6 +92,7 @@ enum { smt_idx, #endif bigcore_idx, + mc_idx, die_idx, }; @@ -869,6 +871,21 @@ static const struct cpumask *smallcore_smt_mask(int cpu) } #endif +static struct cpumask *cpu_coregroup_mask(int cpu) +{ + return per_cpu(cpu_coregroup_map, cpu); +} + +static bool has_coregroup_support(void) +{ + return coregroup_enabled; +} + +static const struct cpumask *cpu_mc_mask(int cpu) +{ + return cpu_coregroup_mask(cpu); +} + static const struct cpumask *cpu_bigcore_mask(int cpu) { return per_cpu(cpu_sibling_map, cpu); @@ -879,6 +896,7 @@ static struct sched_domain_topology_level powerpc_topology[] = { { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, #endif { cpu_bigcore_mask, SD_INIT_NAME(BIGCORE) }, + { cpu_mc_mask, SD_INIT_NAME(MC) }, { cpu_cpu_mask, SD_INIT_NAME(DIE) }, { NULL, }, }; @@ -925,6 +943,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus) GFP_KERNEL, cpu_to_node(cpu)); zalloc_cpumask_var_node(&per_cpu(cpu_core_map, cpu), GFP_KERNEL, cpu_to_node(cpu)); + if (has_coregroup_support()) + zalloc_cpumask_var_node(&per_cpu(cpu_coregroup_map, cpu), + GFP_KERNEL, cpu_to_node(cpu)); + #ifdef CONFIG_NEED_MULTIPLE_NODES /* * numa_node_id() works after this. 
@@ -942,6 +964,9 @@ void __init smp_prepare_cpus(unsigned int max_cpus) cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid)); cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid)); + if (has_coregroup_support()) + cpumask_set_cpu(boot_cpuid, cpu_coregroup_mask(boot_cpuid)); + init_big_cores(); if (has_big_cores) { cpumask_set_cpu(boot_cpuid, @@ -1233,6 +1258,8 @@ static void remove_cpu_from_masks(int cpu) set_cpus_unrelated(cpu, i, cpu_sibling_mask); if (has_big_cores) set_cpus_unrelated(cpu, i, cpu_smallcore_mask); + if (has_coregroup_support()) + set_cpus_unrelated(cpu, i, cpu_coregroup_mask); } } #endif @@ -1293,6 +1320,20 @@ static void add_cpu_to_masks(int cpu) add_cpu_to_smallcore_masks(cpu); update_mask_by_l2(cpu, cpu_l2_cache_mask); + if (has_coregroup_support()) { + int coregroup_id = cpu_to_coregroup_id(cpu); + + cpumask_set_cpu(cpu, cpu_coregroup_mask(cpu)); + for_each_cpu_and(i, cpu_online_mask, cpu_cpu_mask(cpu)) { + int fcpu = cpu_first_thread_sibling(i); + +
[PATCH v4 04/10] powerpc/smp: Move topology fixups into a new function
Move topology fixup based on the platform attributes into its own function which is called just before set_sched_topology. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v2 -> v3: Rewrote changelog (Gautham) Renamed to powerpc/smp: Move topology fixups into a new function arch/powerpc/kernel/smp.c | 17 +++-- 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index a685915e5941..da27f6909be1 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -1368,6 +1368,16 @@ int setup_profiling_timer(unsigned int multiplier) return 0; } +static void fixup_topology(void) +{ +#ifdef CONFIG_SCHED_SMT + if (has_big_cores) { + pr_info("Big cores detected but using small core scheduling\n"); + powerpc_topology[0].mask = smallcore_smt_mask; + } +#endif +} + void __init smp_cpus_done(unsigned int max_cpus) { /* @@ -1381,12 +1391,7 @@ void __init smp_cpus_done(unsigned int max_cpus) dump_numa_cpu_topology(); -#ifdef CONFIG_SCHED_SMT - if (has_big_cores) { - pr_info("Big cores detected but using small core scheduling\n"); - powerpc_topology[0].mask = smallcore_smt_mask; - } -#endif + fixup_topology(); set_sched_topology(powerpc_topology); } -- 2.17.1
[PATCH v4 00/10] Coregroup support on Powerpc
Changelog v3 ->v4: v3: https://lore.kernel.org/lkml/20200723085116.4731-1-sri...@linux.vnet.ibm.com/t/#u powerpc/smp: Create coregroup domain if coregroup_support doesn't exist, update MC mask to the next smaller domain mask. Changelog v2 -> v3: v2: https://lore.kernel.org/linuxppc-dev/20200721113814.32284-1-sri...@linux.vnet.ibm.com/t/#u powerpc/smp: Cache node for reuse Removed node caching part. Rewrote the Commit msg (Michael Ellerman) Renamed to powerpc/smp: Fix a warning under !NEED_MULTIPLE_NODES powerpc/smp: Enable small core scheduling sooner Rewrote changelog (Gautham) Renamed to powerpc/smp: Move topology fixups into a new function powerpc/smp: Create coregroup domain Add optimization for mask updation under coregroup_support Changelog v1 -> v2: v1: https://lore.kernel.org/linuxppc-dev/20200714043624.5648-1-sri...@linux.vnet.ibm.com/t/#u powerpc/smp: Merge Power9 topology with Power topology Replaced a reference to cpu_smt_mask with per_cpu(cpu_sibling_map, cpu) since cpu_smt_mask is only defined under CONFIG_SCHED_SMT powerpc/smp: Enable small core scheduling sooner Restored the previous info msg (Jordan) Moved big core topology fixup to fixup_topology (Gautham) powerpc/smp: Dont assume l2-cache to be superset of sibling Set cpumask after verifying l2-cache. (Gautham) powerpc/smp: Generalize 2nd sched domain Moved shared_cache topology fixup to fixup_topology (Gautham) Powerpc/numa: Detect support for coregroup Explained Coregroup in commit msg (Michael Ellerman) Powerpc/smp: Create coregroup domain Moved coregroup topology fixup to fixup_topology (Gautham) powerpc/smp: Implement cpu_to_coregroup_id Move coregroup_enabled before getting associativity (Gautham) powerpc/smp: Provide an ability to disable coregroup Patch dropped (Michael Ellerman) Cleanup of existing powerpc topologies and add coregroup support on Powerpc. Coregroup is a group of (subset of) cores of a DIE that share a resource. 
Patch 7 of this patch series: "Powerpc/numa: Detect support for coregroup" depends on https://lore.kernel.org/linuxppc-dev/20200707140644.7241-1-sri...@linux.vnet.ibm.com/t/#u However it should be easy to rebase the patch without the above patch. This patch series is based on top of current powerpc/next tree + the above patch. On Power 8 Systems -- $ tail /proc/cpuinfo processor : 255 cpu : POWER8 (architected), altivec supported clock : 3724.00MHz revision: 2.1 (pvr 004b 0201) timebase: 51200 platform: pSeries model : IBM,8408-E8E machine : CHRP IBM,8408-E8E MMU : Hash Before the patchset --- $ cat /proc/sys/kernel/sched_domain/cpu0/domain*/name SMT DIE NUMA NUMA $ head /proc/schedstat version 15 timestamp 4295534931 cpu0 0 0 0 0 0 0 41389823338 17682779896 14117 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain2 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain3 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 cpu1 0 0 0 0 0 0 27087859050 152273672 10396 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 After the patchset -- $ cat /proc/sys/kernel/sched_domain/cpu0/domain*/name SMT DIE NUMA NUMA $ head /proc/schedstat version 15 timestamp 4295534931 cpu0 0 0 0 0 0 0 41389823338 17682779896 14117 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain2 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain3 ,,,,,,, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 cpu1 0 0 0 0 0 0 27087859050 152273672 10396 domain0 ,,,,,,,00ff 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 domain1
[PATCH v4 05/10] powerpc/smp: Don't assume l2-cache to be superset of sibling
Current code assumes that cpumask of cpus sharing a l2-cache mask will always be a superset of cpu_sibling_mask. Lets stop that assumption. cpu_l2_cache_mask is a superset of cpu_sibling_mask if and only if shared_caches is set. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Set cpumask after verifying l2-cache. (Gautham) arch/powerpc/kernel/smp.c | 28 +++- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index da27f6909be1..d997c7411664 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -1194,6 +1194,7 @@ static bool update_mask_by_l2(int cpu, struct cpumask *(*mask_fn)(int)) if (!l2_cache) return false; + cpumask_set_cpu(cpu, mask_fn(cpu)); for_each_cpu(i, cpu_online_mask) { /* * when updating the marks the current CPU has not been marked @@ -1276,29 +1277,30 @@ static void add_cpu_to_masks(int cpu) * add it to it's own thread sibling mask. */ cpumask_set_cpu(cpu, cpu_sibling_mask(cpu)); + cpumask_set_cpu(cpu, cpu_core_mask(cpu)); for (i = first_thread; i < first_thread + threads_per_core; i++) if (cpu_online(i)) set_cpus_related(i, cpu, cpu_sibling_mask); add_cpu_to_smallcore_masks(cpu); - /* -* Copy the thread sibling mask into the cache sibling mask -* and mark any CPUs that share an L2 with this CPU. -*/ - for_each_cpu(i, cpu_sibling_mask(cpu)) - set_cpus_related(cpu, i, cpu_l2_cache_mask); update_mask_by_l2(cpu, cpu_l2_cache_mask); - /* -* Copy the cache sibling mask into core sibling mask and mark -* any CPUs on the same chip as this CPU. 
-*/ - for_each_cpu(i, cpu_l2_cache_mask(cpu)) - set_cpus_related(cpu, i, cpu_core_mask); + if (pkg_id == -1) { + struct cpumask *(*mask)(int) = cpu_sibling_mask; + + /* +* Copy the sibling mask into core sibling mask and +* mark any CPUs on the same chip as this CPU. +*/ + if (shared_caches) + mask = cpu_l2_cache_mask; + + for_each_cpu(i, mask(cpu)) + set_cpus_related(cpu, i, cpu_core_mask); - if (pkg_id == -1) return; + } for_each_cpu(i, cpu_online_mask) if (get_physical_package_id(i) == pkg_id) -- 2.17.1
[PATCH v4 07/10] powerpc/numa: Detect support for coregroup
Add support for grouping cores based on the device-tree classification. - The last domain in the associativity domains always refers to the core. - If primary reference domain happens to be the penultimate domain in the associativity domains device-tree property, then there are no coregroups. However if its not a penultimate domain, then there are coregroups. There can be more than one coregroup. For now we would be interested in the last or the smallest coregroups. Cc: linuxppc-dev Cc: LKML Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Anton Blanchard Cc: Oliver O'Halloran Cc: Nathan Lynch Cc: Michael Neuling Cc: Gautham R Shenoy Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Valentin Schneider Cc: Jordan Niethe Reviewed-by: Gautham R. Shenoy Signed-off-by: Srikar Dronamraju --- Changelog v1 -> v2: Explained Coregroup in commit msg (Michael Ellerman) arch/powerpc/include/asm/smp.h | 1 + arch/powerpc/kernel/smp.c | 1 + arch/powerpc/mm/numa.c | 34 +- 3 files changed, 23 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h index 49a25e2400f2..5bdc17a7049f 100644 --- a/arch/powerpc/include/asm/smp.h +++ b/arch/powerpc/include/asm/smp.h @@ -28,6 +28,7 @@ extern int boot_cpuid; extern int spinning_secondaries; extern u32 *cpu_to_phys_id; +extern bool coregroup_enabled; extern void cpu_die(void); extern int cpu_to_chip_id(int cpu); diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 3c5ccf6d2b1c..698000c7f76f 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -74,6 +74,7 @@ static DEFINE_PER_CPU(int, cpu_state) = { 0 }; struct task_struct *secondary_current; bool has_big_cores; +bool coregroup_enabled; DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map); DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map); diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index 2298899a0f0a..51cb672f113b 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -886,7 +886,9 @@ static 
void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn) static void __init find_possible_nodes(void) { struct device_node *rtas; - u32 numnodes, i; + const __be32 *domains; + int prop_length, max_nodes; + u32 i; if (!numa_enabled) return; @@ -895,25 +897,31 @@ static void __init find_possible_nodes(void) if (!rtas) return; - if (of_property_read_u32_index(rtas, "ibm,current-associativity-domains", - min_common_depth, &numnodes)) { - /* -* ibm,current-associativity-domains is a fairly recent -* property. If it doesn't exist, then fallback on -* ibm,max-associativity-domains. Current denotes what the -* platform can support compared to max which denotes what the -* Hypervisor can support. -*/ - if (of_property_read_u32_index(rtas, "ibm,max-associativity-domains", - min_common_depth, &numnodes)) + /* +* ibm,current-associativity-domains is a fairly recent property. If +* it doesn't exist, then fallback on ibm,max-associativity-domains. +* Current denotes what the platform can support compared to max +* which denotes what the Hypervisor can support. +*/ + domains = of_get_property(rtas, "ibm,current-associativity-domains", + &prop_length); + if (!domains) { + domains = of_get_property(rtas, "ibm,max-associativity-domains", + &prop_length); + if (!domains) goto out; } - for (i = 0; i < numnodes; i++) { + max_nodes = of_read_number(&domains[min_common_depth], 1); + for (i = 0; i < max_nodes; i++) { if (!node_possible(i)) node_set(i, node_possible_map); } + prop_length /= sizeof(int); + if (prop_length > min_common_depth + 2) + coregroup_enabled = 1; + out: of_node_put(rtas); } -- 2.17.1
[PATCH] ARC: perf: don't bail setup if pct irq missing in device-tree
Current code inadvertently bails out if the hardware supports sampling/overflow interrupts but the IRQ is missing from the device tree. This need not be fatal, as we can still do simple counting-based perf stat. This unbreaks perf on HSDK-4xD. Signed-off-by: Vineet Gupta --- arch/arc/kernel/perf_event.c | 14 -- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c index 661fd842ea97..79849f37e782 100644 --- a/arch/arc/kernel/perf_event.c +++ b/arch/arc/kernel/perf_event.c @@ -562,7 +562,7 @@ static int arc_pmu_device_probe(struct platform_device *pdev) { struct arc_reg_pct_build pct_bcr; struct arc_reg_cc_build cc_bcr; - int i, has_interrupts; + int i, has_interrupts, irq; int counter_size; /* in bits */ union cc_name { @@ -637,13 +637,7 @@ static int arc_pmu_device_probe(struct platform_device *pdev) .attr_groups= arc_pmu->attr_groups, }; - if (has_interrupts) { - int irq = platform_get_irq(pdev, 0); - - if (irq < 0) { - pr_err("Cannot get IRQ number for the platform\n"); - return -ENODEV; - } + if (has_interrupts && ((irq = platform_get_irq(pdev, 0)) >= 0)) { arc_pmu->irq = irq; @@ -652,9 +646,9 @@ static int arc_pmu_device_probe(struct platform_device *pdev) this_cpu_ptr(&arc_pmu_cpu)); on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1); - - } else + } else { arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT; + } /* * perf parser doesn't really like '-' symbol in events name, so let's -- 2.20.1
[RESEND 3/3] ASoC: max98390: update dsm param bin max size
MAX98390_DSM_PARAM_MAX_SIZE is increased to support extended register updates. Signed-off-by: Steve Lee --- sound/soc/codecs/max98390.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/codecs/max98390.h b/sound/soc/codecs/max98390.h index 5f444e7779b0..dff884f68e3e 100644 --- a/sound/soc/codecs/max98390.h +++ b/sound/soc/codecs/max98390.h @@ -650,7 +650,7 @@ /* DSM register offset */ #define MAX98390_DSM_PAYLOAD_OFFSET 16 -#define MAX98390_DSM_PARAM_MAX_SIZE 770 +#define MAX98390_DSM_PARAM_MAX_SIZE 1024 #define MAX98390_DSM_PARAM_MIN_SIZE 670 struct max98390_priv { -- 2.17.1
[RESEND 2/3] ASoC: max98390: Update dsm init sequence and condition.
Modify the dsm_init sequence and the dsm param bin check condition. - Move dsm_init() to after the amp init settings, to make sure DSM init is applied last. - Change the dsm param bin check condition for the extended register setting. Signed-off-by: Steve Lee --- sound/soc/codecs/max98390.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sound/soc/codecs/max98390.c b/sound/soc/codecs/max98390.c index 44ffebac15ad..ff5cc9bbec29 100644 --- a/sound/soc/codecs/max98390.c +++ b/sound/soc/codecs/max98390.c @@ -790,7 +790,7 @@ static int max98390_dsm_init(struct snd_soc_component *component) param_start_addr = (dsm_param[0] & 0xff) | (dsm_param[1] & 0xff) << 8; param_size = (dsm_param[2] & 0xff) | (dsm_param[3] & 0xff) << 8; if (param_size > MAX98390_DSM_PARAM_MAX_SIZE || - param_start_addr < DSM_STBASS_HPF_B0_BYTE0 || + param_start_addr < MAX98390_IRQ_CTRL || fw->size < param_size + MAX98390_DSM_PAYLOAD_OFFSET) { dev_err(component->dev, "param fw is invalid.\n"); @@ -864,11 +864,11 @@ static int max98390_probe(struct snd_soc_component *component) regmap_write(max98390->regmap, MAX98390_SOFTWARE_RESET, 0x01); /* Sleep reset settle time */ msleep(20); - /* Update dsm bin param */ - max98390_dsm_init(component); /* Amp init setting */ max98390_init_regs(component); + /* Update dsm bin param */ + max98390_dsm_init(component); /* Dsm Setting */ if (max98390->ref_rdc_value) { -- 2.17.1
[RESEND 1/3] ASoC: max98390: Fix dac event dapm mixer.
The global EN register must be turned off before the AMP_EN register in the amp disable sequence. - Remove AMP_EN register control from the DAPM widget; max98390_dac_event handles the enable sequencing. Signed-off-by: Steve Lee --- sound/soc/codecs/max98390.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/codecs/max98390.c b/sound/soc/codecs/max98390.c index 3e8094241645..44ffebac15ad 100644 --- a/sound/soc/codecs/max98390.c +++ b/sound/soc/codecs/max98390.c @@ -678,7 +678,7 @@ static const struct snd_kcontrol_new max98390_dai_controls = static const struct snd_soc_dapm_widget max98390_dapm_widgets[] = { SND_SOC_DAPM_DAC_E("Amp Enable", "HiFi Playback", - MAX98390_R203A_AMP_EN, 0, 0, max98390_dac_event, + SND_SOC_NOPM, 0, 0, max98390_dac_event, SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD), SND_SOC_DAPM_MUX("DAI Sel Mux", SND_SOC_NOPM, 0, 0, &max98390_dai_controls), -- 2.17.1
Re: [PATCH] Makefile.extrawarn: Move sign-compare from W=2 to W=3
On Wed, Jul 22, 2020 at 1:57 PM Joe Perches wrote: > > This -Wsign-compare compiler warning can be very noisy > and most of the suggested conversions are unnecessary. > > Make the warning W=3 so it's described under the > "can most likely be ignored" block. > > Signed-off-by: Joe Perches > --- Applied to linux-kbuild. Thanks. > On Tue, 2020-07-21 at 14:32 -0700, Joe Perches wrote: > > On Tue, 2020-07-21 at 19:06 +, Corentin Labbe wrote: > > > This patch fixes the warning: > > > warning: comparison of integer expressions of different signedness: 'int' > > > and 'long unsigned int' [-Wsign-compare] > > > > I think these do not really need conversion. > > Are these useful compiler warnings ? > > Perhaps move the warning from W=2 to W=3 so > it's described as "can most likely be ignored" > > scripts/Makefile.extrawarn | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn > index 62c275685b75..95e4cdb94fe9 100644 > --- a/scripts/Makefile.extrawarn > +++ b/scripts/Makefile.extrawarn > @@ -66,7 +66,6 @@ KBUILD_CFLAGS += -Wnested-externs > KBUILD_CFLAGS += -Wshadow > KBUILD_CFLAGS += $(call cc-option, -Wlogical-op) > KBUILD_CFLAGS += -Wmissing-field-initializers > -KBUILD_CFLAGS += -Wsign-compare > KBUILD_CFLAGS += -Wtype-limits > KBUILD_CFLAGS += $(call cc-option, -Wmaybe-uninitialized) > KBUILD_CFLAGS += $(call cc-option, -Wunused-macros) > @@ -87,6 +86,7 @@ KBUILD_CFLAGS += -Wpacked > KBUILD_CFLAGS += -Wpadded > KBUILD_CFLAGS += -Wpointer-arith > KBUILD_CFLAGS += -Wredundant-decls > +KBUILD_CFLAGS += -Wsign-compare > KBUILD_CFLAGS += -Wswitch-default > KBUILD_CFLAGS += $(call cc-option, -Wpacked-bitfield-compat) > > > -- Best Regards Masahiro Yamada
undefined reference to `start_isolate_page_range'
Hi Michal, FYI, the error/warning still remains. tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master head: 92ed301919932f13b9172e525674157e983d commit: 2602276d3d3811b1a48c48113042cd75fcbfc27d microblaze: Wire CMA allocator date: 6 months ago config: microblaze-randconfig-c022-20200727 (attached as .config) compiler: microblaze-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross git checkout 2602276d3d3811b1a48c48113042cd75fcbfc27d # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=microblaze If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All errors (new ones prefixed by >>): microblaze-linux-ld: mm/page_alloc.o: in function `alloc_contig_range': >> (.text+0xd274): undefined reference to `start_isolate_page_range' >> microblaze-linux-ld: (.text+0xd48c): undefined reference to >> `test_pages_isolated' >> microblaze-linux-ld: (.text+0xd548): undefined reference to >> `undo_isolate_page_range' --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org .config.gz Description: application/gzip
Re: [RESEND PATCH v4 2/3] usb: serial: xr_serial: Add gpiochip support
On Sun, Jul 26, 2020 at 07:34:54PM +0300, Andy Shevchenko wrote: > On Sun, Jul 26, 2020 at 6:53 PM Manivannan Sadhasivam wrote: > > On Wed, Jul 01, 2020 at 03:02:06PM +0200, Johan Hovold wrote: > > > On Sun, Jun 07, 2020 at 09:53:49PM +0530, Manivannan Sadhasivam wrote: > > ... > > > > Same here. And perhaps just ignoring the pins managed by gpiolib is > > > better (cf. gpiolib and pinctrl being orthogonal). > > > > You mean, we can just make TX,RX,CTS,RTS pins controlled only by the serial > > driver and the rest only by gpiolib? > > I'm wondering if you may use mctrl_gpio_*() API instead. How? mctrl_gpio APIs are a wrapper for accessing modem control gpio pins but here we are not accessing the pins but rather exposing the pins as a gpiochip. Am I missing something? Thanks, Mani > > -- > With Best Regards, > Andy Shevchenko
Re: [PATCH v3 09/10] powerpc/smp: Create coregroup domain
On Thu, Jul 23, 2020 at 02:21:15PM +0530, Srikar Dronamraju wrote: > Add percpu coregroup maps and masks to create coregroup domain. > If a coregroup doesn't exist, the coregroup domain will be degenerated > in favour of SMT/CACHE domain. > > Cc: linuxppc-dev > Cc: LKML > Cc: Michael Ellerman > Cc: Nicholas Piggin > Cc: Anton Blanchard > Cc: Oliver O'Halloran > Cc: Nathan Lynch > Cc: Michael Neuling > Cc: Gautham R Shenoy > Cc: Ingo Molnar > Cc: Peter Zijlstra > Cc: Valentin Schneider > Cc: Jordan Niethe > Signed-off-by: Srikar Dronamraju > --- > Changelog v2 -> v3: > Add optimization for mask updation under coregroup_support > > Changelog v1 -> v2: > Moved coregroup topology fixup to fixup_topology (Gautham) > > arch/powerpc/include/asm/topology.h | 10 +++ > arch/powerpc/kernel/smp.c | 44 + > arch/powerpc/mm/numa.c | 5 > 3 files changed, 59 insertions(+) > > diff --git a/arch/powerpc/include/asm/topology.h > b/arch/powerpc/include/asm/topology.h > index f0b6300e7dd3..6609174918ab 100644 > --- a/arch/powerpc/include/asm/topology.h > +++ b/arch/powerpc/include/asm/topology.h > @@ -88,12 +88,22 @@ static inline int cpu_distance(__be32 *cpu1_assoc, __be32 > *cpu2_assoc) > > #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR) > extern int find_and_online_cpu_nid(int cpu); > +extern int cpu_to_coregroup_id(int cpu); > #else > static inline int find_and_online_cpu_nid(int cpu) > { > return 0; > } > > +static inline int cpu_to_coregroup_id(int cpu) > +{ > +#ifdef CONFIG_SMP > + return cpu_to_core_id(cpu); > +#else > + return 0; > +#endif > +} > + > #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */ > > #include > diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c > index 7d8d44cbab11..1faedde3e406 100644 > --- a/arch/powerpc/kernel/smp.c > +++ b/arch/powerpc/kernel/smp.c > @@ -80,6 +80,7 @@ DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map); > DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map); > DEFINE_PER_CPU(cpumask_var_t, cpu_l2_cache_map); > 
DEFINE_PER_CPU(cpumask_var_t, cpu_core_map); > +DEFINE_PER_CPU(cpumask_var_t, cpu_coregroup_map); > > EXPORT_PER_CPU_SYMBOL(cpu_sibling_map); > EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map); > @@ -91,6 +92,7 @@ enum { > smt_idx, > #endif > bigcore_idx, > + mc_idx, > die_idx, > }; > > @@ -869,6 +871,21 @@ static const struct cpumask *smallcore_smt_mask(int cpu) > } > #endif > > +static struct cpumask *cpu_coregroup_mask(int cpu) > +{ > + return per_cpu(cpu_coregroup_map, cpu); > +} > + > +static bool has_coregroup_support(void) > +{ > + return coregroup_enabled; > +} > + > +static const struct cpumask *cpu_mc_mask(int cpu) > +{ > + return cpu_coregroup_mask(cpu); > +} > + > static const struct cpumask *cpu_bigcore_mask(int cpu) > { > return per_cpu(cpu_sibling_map, cpu); > @@ -879,6 +896,7 @@ static struct sched_domain_topology_level > powerpc_topology[] = { > { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, > #endif > { cpu_bigcore_mask, SD_INIT_NAME(BIGCORE) }, > + { cpu_mc_mask, SD_INIT_NAME(MC) }, > { cpu_cpu_mask, SD_INIT_NAME(DIE) }, > { NULL, }, > }; [..snip..] > @@ -1384,6 +1425,9 @@ int setup_profiling_timer(unsigned int multiplier) > > static void fixup_topology(void) > { > + if (!has_coregroup_support()) > + powerpc_topology[mc_idx].mask = cpu_bigcore_mask; > + > if (shared_caches) { > pr_info("Using shared cache scheduler topology\n"); > powerpc_topology[bigcore_idx].mask = shared_cache_mask; Suppose we consider a topology which does not have coregroup_support, but has shared_caches. In that case, we would want our coregroup domain to degenerate. 
>From the above code, after the fixup, our topology will look as follows: static struct sched_domain_topology_level powerpc_topology[] = { { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) }, { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) }, { cpu_bigcore_mask, SD_INIT_NAME(MC) }, { cpu_cpu_mask, SD_INIT_NAME(DIE) }, { NULL, }, So, in this case, the core-group domain (identified by MC) will degenerate only if cpu_bigcore_mask() and shared_cache_mask() return the same value. This may work for existing platforms, because either shared_caches don't exist, or when they do, cpu_bigcore_mask and shared_cache_mask return the same set of CPUs. But this may or may not continue to hold good in the future. Furthermore, if that is always going to be the case that in the presence of shared_caches the cpu_bigcore_mask() and shared_cache_mask() will always be the same, then why even define two separate masks and not just have only the cpu_bigcore_mask() ? The correct way would be to set the powerpc_topology[mc_idx].mask to powerpc_topology[bigcore_idx].mask *after* we have fixedup the big_core
Re: [RFC 2/2] KVM: VMX: Enable bus lock VM exit
On 7/23/2020 9:21 AM, Sean Christopherson wrote: On Wed, Jul 01, 2020 at 04:49:49PM +0200, Vitaly Kuznetsov wrote: Xiaoyao Li writes: So you want an exit to userspace for every bus lock and leave it all to userspace. Yes, it's doable. In some cases we may not even want to have a VM exit: think e.g. real-time/partitioning case when even in case of bus lock we may not want to add additional latency just to count such events. Hmm, I suspect this isn't all that useful for real-time cases because they'd probably want to prevent the split lock in the first place, e.g. would prefer to use the #AC variant in fatal mode. Of course, the availability of split lock #AC is a whole other can of worms. But anyways, I 100% agree that this needs either an off-by-default module param or an opt-in per-VM capability. Maybe on-by-default or an opt-out per-VM capability? Turning it on introduces no overhead if no bus lock happens in guest but gives KVM the capability to track every potential bus lock. If user doesn't want the extra latency due to bus lock VM exit, it's better try to fix the bus lock, which also incurs high latency. I'd suggest we make the new capability tri-state: - disabled (no vmexit, default) - stats only (what this patch does) - userspace exit But maybe this is an overkill, I'd like to hear what others think. Userspace exit would also be interesting for debug. Another throttling option would be schedule() or cond_reched(), though that's probably getting into overkill territory. We're going to leverage host's policy, i.e., calling handle_user_bus_lock(), for throttling, as proposed in https://lkml.kernel.org/r/1595021700-68460-1-git-send-email-fenghua...@intel.com
[PATCH 3/4] x86/cpu: Refactor sync_core() for readability
Instead of having #ifdef/#endif blocks inside sync_core() for X86_64 and X86_32, implement the new function iret_to_self() with two versions. In this manner, avoid having to use even more #ifdef/#endif blocks when adding support for SERIALIZE in sync_core().

Cc: Andy Lutomirski
Cc: Cathy Zhang
Cc: Dave Hansen
Cc: Fenghua Yu
Cc: "H. Peter Anvin"
Cc: Kyung Min Park
Cc: Peter Zijlstra
Cc: "Ravi V. Shankar"
Cc: Sean Christopherson
Cc: linux-e...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Co-developed-by: Tony Luck
Signed-off-by: Tony Luck
Signed-off-by: Ricardo Neri
---
---
 arch/x86/include/asm/special_insns.h |  1 -
 arch/x86/include/asm/sync_core.h     | 56
 2 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index eb8e781c4353..59a3e13204c3 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -234,7 +234,6 @@ static inline void clwb(volatile void *__p)
 
 #define nop() asm volatile ("nop")
 
-
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_SPECIAL_INSNS_H */
diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
index 9c5573f2c333..fdb5b356e59b 100644
--- a/arch/x86/include/asm/sync_core.h
+++ b/arch/x86/include/asm/sync_core.h
@@ -6,6 +6,37 @@
 #include
 #include
 
+#ifdef CONFIG_X86_32
+static inline void iret_to_self(void)
+{
+	asm volatile (
+		"pushfl\n\t"
+		"pushl %%cs\n\t"
+		"pushl $1f\n\t"
+		"iret\n\t"
+		"1:"
+		: ASM_CALL_CONSTRAINT : : "memory");
+}
+#else
+static inline void iret_to_self(void)
+{
+	unsigned int tmp;
+
+	asm volatile (
+		"mov %%ss, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq %%rsp\n\t"
+		"addq $8, (%%rsp)\n\t"
+		"pushfq\n\t"
+		"mov %%cs, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq $1f\n\t"
+		"iretq\n\t"
+		"1:"
+		: "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
+}
+#endif /* CONFIG_X86_32 */
+
 /*
  * This function forces the icache and prefetched instruction stream to
  * catch up with reality in two very specific cases:
@@ -44,30 +75,7 @@ static inline void sync_core(void)
 	 * Like all of Linux's memory ordering operations, this is a
 	 * compiler barrier as well.
 	 */
-#ifdef CONFIG_X86_32
-	asm volatile (
-		"pushfl\n\t"
-		"pushl %%cs\n\t"
-		"pushl $1f\n\t"
-		"iret\n\t"
-		"1:"
-		: ASM_CALL_CONSTRAINT : : "memory");
-#else
-	unsigned int tmp;
-
-	asm volatile (
-		"mov %%ss, %0\n\t"
-		"pushq %q0\n\t"
-		"pushq %%rsp\n\t"
-		"addq $8, (%%rsp)\n\t"
-		"pushfq\n\t"
-		"mov %%cs, %0\n\t"
-		"pushq %q0\n\t"
-		"pushq $1f\n\t"
-		"iretq\n\t"
-		"1:"
-		: "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
-#endif
+	iret_to_self();
 }
 
 /*
-- 
2.17.1
[PATCH 2/4] x86/cpu: Relocate sync_core() to sync_core.h
Having sync_core() in processor.h is problematic since it is not possible to check for hardware capabilities via the *cpu_has() family of macros. The latter needs the definitions in processor.h. It also looks more intuitive to relocate the function to sync_core.h. This changeset does not make changes in functionality. Cc: Andy Lutomirski Cc: Cathy Zhang Cc: Dave Hansen Cc: Dimitri Sivanich Cc: Fenghua Yu Cc: "H. Peter Anvin" Cc: Kyung Min Park Cc: Peter Zijlstra Cc: "Ravi V. Shankar" Cc: Sean Christopherson Cc: linux-e...@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Tony Luck Signed-off-by: Ricardo Neri --- --- arch/x86/include/asm/processor.h| 64 - arch/x86/include/asm/sync_core.h| 64 + arch/x86/kernel/alternative.c | 1 + arch/x86/kernel/cpu/mce/core.c | 1 + drivers/misc/sgi-gru/grufault.c | 1 + drivers/misc/sgi-gru/gruhandles.c | 1 + drivers/misc/sgi-gru/grukservices.c | 1 + 7 files changed, 69 insertions(+), 64 deletions(-) diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 2a1f7e1d7151..97143d87994c 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -676,70 +676,6 @@ static inline unsigned int cpuid_edx(unsigned int op) return edx; } -/* - * This function forces the icache and prefetched instruction stream to - * catch up with reality in two very specific cases: - * - * a) Text was modified using one virtual address and is about to be executed - * from the same physical page at a different virtual address. - * - * b) Text was modified on a different CPU, may subsequently be - * executed on this CPU, and you want to make sure the new version - * gets executed. This generally means you're calling this in a IPI. - * - * If you're calling this for a different reason, you're probably doing - * it wrong. - */ -static inline void sync_core(void) -{ - /* -* There are quite a few ways to do this. 
IRET-to-self is nice -* because it works on every CPU, at any CPL (so it's compatible -* with paravirtualization), and it never exits to a hypervisor. -* The only down sides are that it's a bit slow (it seems to be -* a bit more than 2x slower than the fastest options) and that -* it unmasks NMIs. The "push %cs" is needed because, in -* paravirtual environments, __KERNEL_CS may not be a valid CS -* value when we do IRET directly. -* -* In case NMI unmasking or performance ever becomes a problem, -* the next best option appears to be MOV-to-CR2 and an -* unconditional jump. That sequence also works on all CPUs, -* but it will fault at CPL3 (i.e. Xen PV). -* -* CPUID is the conventional way, but it's nasty: it doesn't -* exist on some 486-like CPUs, and it usually exits to a -* hypervisor. -* -* Like all of Linux's memory ordering operations, this is a -* compiler barrier as well. -*/ -#ifdef CONFIG_X86_32 - asm volatile ( - "pushfl\n\t" - "pushl %%cs\n\t" - "pushl $1f\n\t" - "iret\n\t" - "1:" - : ASM_CALL_CONSTRAINT : : "memory"); -#else - unsigned int tmp; - - asm volatile ( - "mov %%ss, %0\n\t" - "pushq %q0\n\t" - "pushq %%rsp\n\t" - "addq $8, (%%rsp)\n\t" - "pushfq\n\t" - "mov %%cs, %0\n\t" - "pushq %q0\n\t" - "pushq $1f\n\t" - "iretq\n\t" - "1:" - : "=" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory"); -#endif -} - extern void select_idle_routine(const struct cpuinfo_x86 *c); extern void amd_e400_c1e_apic_setup(void); diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h index c67caafd3381..9c5573f2c333 100644 --- a/arch/x86/include/asm/sync_core.h +++ b/arch/x86/include/asm/sync_core.h @@ -6,6 +6,70 @@ #include #include +/* + * This function forces the icache and prefetched instruction stream to + * catch up with reality in two very specific cases: + * + * a) Text was modified using one virtual address and is about to be executed + * from the same physical page at a different virtual address. 
+ * + * b) Text was modified on a different CPU, may subsequently be + * executed on this CPU, and you want to make sure the new version + * gets executed. This generally means you're calling this in a IPI. + * + * If you're calling this for a different reason, you're probably doing + * it wrong. + */ +static inline void sync_core(void) +{ + /* +* There are quite a few ways to do this. IRET-to-self is nice +* because it works on every CPU, at any CPL (so it's compatible +* with paravirtualization),
[PATCH 4/4] x86/cpu: Use SERIALIZE in sync_core() when available
The SERIALIZE instruction gives software a way to force the processor to complete all modifications to flags, registers and memory from previous instructions and drain all buffered writes to memory before the next instruction is fetched and executed. Thus, it serves the purpose of sync_core(). Use it when available.

Use boot_cpu_has() and not static_cpu_has(); the most critical paths (returning to user mode and from interrupt and NMI) will not reach sync_core().

Cc: Andy Lutomirski
Cc: Cathy Zhang
Cc: Dave Hansen
Cc: Fenghua Yu
Cc: "H. Peter Anvin"
Cc: Kyung Min Park
Cc: Peter Zijlstra
Cc: "Ravi V. Shankar"
Cc: Sean Christopherson
Cc: linux-e...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Tony Luck
Suggested-by: Andy Lutomirski
Signed-off-by: Ricardo Neri
---
---
 arch/x86/include/asm/special_insns.h |  5 +
 arch/x86/include/asm/sync_core.h     | 10 +-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 59a3e13204c3..0a2a60bba282 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -234,6 +234,11 @@ static inline void clwb(volatile void *__p)
 
 #define nop() asm volatile ("nop")
 
+static inline void serialize(void)
+{
+	asm volatile(".byte 0xf, 0x1, 0xe8");
+}
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_SPECIAL_INSNS_H */
diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
index fdb5b356e59b..bf132c09d61b 100644
--- a/arch/x86/include/asm/sync_core.h
+++ b/arch/x86/include/asm/sync_core.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_X86_32
 static inline void iret_to_self(void)
@@ -54,7 +55,8 @@ static inline void iret_to_self(void)
 static inline void sync_core(void)
 {
 	/*
-	 * There are quite a few ways to do this. IRET-to-self is nice
+	 * Hardware can do this for us if SERIALIZE is available. Otherwise,
+	 * there are quite a few ways to do this. IRET-to-self is nice
 	 * because it works on every CPU, at any CPL (so it's compatible
 	 * with paravirtualization), and it never exits to a hypervisor.
 	 * The only down sides are that it's a bit slow (it seems to be
@@ -75,6 +77,12 @@ static inline void sync_core(void)
 	 * Like all of Linux's memory ordering operations, this is a
 	 * compiler barrier as well.
 	 */
+
+	if (boot_cpu_has(X86_FEATURE_SERIALIZE)) {
+		serialize();
+		return;
+	}
+
 	iret_to_self();
 }
-- 
2.17.1
Re: [PATCH] i2c: iproc: fix race between client unreg and isr
On Sat, Jul 25, 2020 at 3:48 PM Wolfram Sang wrote:
>
> > I think the following sequence needs to be implemented to make this
> > safe, i.e., after 'synchronize_irq', no further slave interrupt will be
> > fired.
> >
> > In 'bcm_iproc_i2c_unreg_slave':
> >
> > 1. Set an atomic variable 'unreg_slave' (I'm bad in names so please come
> >    up with a better name than this)
> >
> > 2. Disable all slave interrupts
> >
> > 3. synchronize_irq
> >
> > 4. Set slave to NULL
> >
> > 5. Erase slave addresses
>
> What about this in unreg_slave?
>
> 1. disable_irq()
>    This includes synchronize_irq() and avoids the race. Because irq
>    will be masked at interrupt controller level, interrupts coming
>    in at the I2C IP core level should still be pending once we
>    reenable the irq.
>
> 2. disable all slave interrupts
>
> 3. enable_irq()
>
> 4. clean up the rest (pointer, address)
>
> Or am I overlooking something?

This sequence will take care of all cases.
@Dhananjay Phadke, is it possible to verify this from your side once?

Best regards,
Raaygonda
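Rendered as code, Wolfram's proposed sequence might look roughly like the sketch below. This is a sketch only: iproc_disable_slave_irqs(), iproc_erase_slave_address(), and the field names are invented for illustration, not taken from the driver.

```c
/* Hypothetical sketch of bcm_iproc_i2c_unreg_slave() following the
 * sequence proposed above; helper and field names are assumptions. */
static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
{
	struct bcm_iproc_i2c_dev *iproc_i2c = i2c_get_adapdata(slave->adapter);

	/* 1. Mask the irq at the interrupt controller. disable_irq()
	 *    implies synchronize_irq(), so once it returns no handler is
	 *    running and none can start; events raised by the I2C core
	 *    meanwhile stay pending at the controller. */
	disable_irq(iproc_i2c->irq);

	/* 2. With the handler quiesced, mask all slave interrupts at
	 *    the I2C IP-core level (invented helper). */
	iproc_disable_slave_irqs(iproc_i2c);

	/* 3. Re-enable the irq; pending master events get handled, but
	 *    no new slave events can be raised. */
	enable_irq(iproc_i2c->irq);

	/* 4. Only now is it safe to tear down the slave state. */
	iproc_i2c->slave = NULL;
	iproc_erase_slave_address(iproc_i2c, slave);	/* invented helper */

	return 0;
}
```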
[PATCH 1/4] x86/cpufeatures: Add enumeration for SERIALIZE instruction
The Intel architecture defines a set of Serializing Instructions (a detailed definition can be found in Vol.3 Section 8.3 of the Intel "main" manual, SDM). However, these instructions do more than what is required, have side effects and/or may be rather invasive. Furthermore, some of these instructions are only available in kernel mode or may cause VMExits. Thus, software using these instructions only to serialize execution (as defined in the manual) must handle the undesired side effects. As indicated in the name, SERIALIZE is a new Intel architecture Serializing Instruction. Crucially, it does not have any of the mentioned side effects. Also, it does not cause VMExit and can be used in user mode. This new instruction is currently documented in the latest "extensions" manual (ISE). It will appear in the "main" manual in the future. Cc: Andy Lutomirski Cc: Cathy Zhang Cc: Fenghua Yu Cc: "H. Peter Anvin" Cc: Kyung Min Park Cc: Peter Zijlstra Cc: "Ravi V. Shankar" Cc: Sean Christopherson Cc: Tony Luck Cc: linux-e...@vger.kernel.org Cc: linux-kernel@vger.kernel.org Acked-by: Dave Hansen Reviewed-by: Tony Luck Signed-off-by: Ricardo Neri --- --- arch/x86/include/asm/cpufeatures.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 03390a1ef8e7..2901d5df4366 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -367,6 +367,7 @@ #define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* "" SRBDS mitigation MSR available */ #define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */ #define X86_FEATURE_TSX_FORCE_ABORT(18*32+13) /* "" TSX_FORCE_ABORT */ +#define X86_FEATURE_SERIALIZE (18*32+14) /* SERIALIZE instruction */ #define X86_FEATURE_PCONFIG(18*32+18) /* Intel PCONFIG */ #define X86_FEATURE_ARCH_LBR (18*32+19) /* Intel ARCH LBR */ #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */ -- 2.17.1
[PATCH 0/4] x86/cpu: Use SERIALIZE in sync_core()
A recent submission to LKML introduced a CPU feature flag for a new Intel architecture Serializing Instruction, SERIALIZE [1]. Unlike the existing Serializing Instructions, this new instruction does not have side effects such as clobbering registers or exiting to a hypervisor. As stated in the Intel "extensions" (ISE) manual [2], this instruction will appear first in Sapphire Rapids and Alder Lake. Andy Lutomirski suggested to use this instruction in sync_core() as it serves the very purpose of this function [3]. For completeness, I picked patch #3 from Cathy's series (and has become patch #1 here) [1]. Her series depends on such patch to build correctly. Maybe it can be merged independently while the discussion continues? Thanks and BR, Ricardo [1]. https://lore.kernel.org/kvm/1594088183-7187-1-git-send-email-cathy.zh...@intel.com/ [2]. https://software.intel.com/sites/default/files/managed/c5/15/architecture-instruction-set-extensions-programming-reference.pdf [3]. https://lore.kernel.org/kvm/CALCETrWudiF8G8r57r5i4JefuP5biG1kHg==0o8yxb-bys-...@mail.gmail.com/ Ricardo Neri (4): x86/cpufeatures: Add enumeration for SERIALIZE instruction x86/cpu: Relocate sync_core() to sync_core.h x86/cpu: Refactor sync_core() for readability x86/cpu: Use SERIALIZE in sync_core() when available arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/processor.h | 64 -- arch/x86/include/asm/special_insns.h | 4 ++ arch/x86/include/asm/sync_core.h | 80 arch/x86/kernel/alternative.c| 1 + arch/x86/kernel/cpu/mce/core.c | 1 + drivers/misc/sgi-gru/grufault.c | 1 + drivers/misc/sgi-gru/gruhandles.c| 1 + drivers/misc/sgi-gru/grukservices.c | 1 + 9 files changed, 90 insertions(+), 64 deletions(-) -- 2.17.1
drivers/gpu/drm/bridge/sil-sii8620.c:2355: undefined reference to `extcon_unregister_notifier'
Hi Masahiro, FYI, the error/warning still remains. tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master head: 92ed301919932f13b9172e525674157e983d commit: def2fbffe62c00c330c7f41584a356001179c59c kconfig: allow symbols implied by y to become m date: 5 months ago config: i386-randconfig-r014-20200727 (attached as .config) compiler: gcc-9 (Debian 9.3.0-14) 9.3.0 reproduce (this is a W=1 build): git checkout def2fbffe62c00c330c7f41584a356001179c59c # save the attached .config to linux build tree make W=1 ARCH=i386 If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All errors (new ones prefixed by >>): ld: drivers/gpu/drm/bridge/sil-sii8620.o: in function `sii8620_remove': >> drivers/gpu/drm/bridge/sil-sii8620.c:2355: undefined reference to >> `extcon_unregister_notifier' ld: drivers/gpu/drm/bridge/sil-sii8620.o: in function `sii8620_extcon_init': >> drivers/gpu/drm/bridge/sil-sii8620.c:2179: undefined reference to >> `extcon_find_edev_by_node' >> ld: drivers/gpu/drm/bridge/sil-sii8620.c:2191: undefined reference to >> `extcon_register_notifier' ld: drivers/gpu/drm/bridge/sil-sii8620.o: in function `sii8620_extcon_work': >> drivers/gpu/drm/bridge/sil-sii8620.c:2139: undefined reference to >> `extcon_get_state' vim +2355 drivers/gpu/drm/bridge/sil-sii8620.c 688838442147d9 Maciej Purski2018-02-27 2162 688838442147d9 Maciej Purski2018-02-27 2163 static int sii8620_extcon_init(struct sii8620 *ctx) 688838442147d9 Maciej Purski2018-02-27 2164 { 688838442147d9 Maciej Purski2018-02-27 2165struct extcon_dev *edev; 688838442147d9 Maciej Purski2018-02-27 2166struct device_node *musb, *muic; 688838442147d9 Maciej Purski2018-02-27 2167int ret; 688838442147d9 Maciej Purski2018-02-27 2168 688838442147d9 Maciej Purski2018-02-27 2169/* get micro-USB connector node */ 688838442147d9 Maciej Purski2018-02-27 2170musb = of_graph_get_remote_node(ctx->dev->of_node, 1, -1); 688838442147d9 Maciej Purski2018-02-27 2171/* next 
get micro-USB Interface Controller node */ 688838442147d9 Maciej Purski2018-02-27 2172muic = of_get_next_parent(musb); 688838442147d9 Maciej Purski2018-02-27 2173 688838442147d9 Maciej Purski2018-02-27 2174if (!muic) { 688838442147d9 Maciej Purski2018-02-27 2175 dev_info(ctx->dev, "no extcon found, switching to 'always on' mode\n"); 688838442147d9 Maciej Purski2018-02-27 2176return 0; 688838442147d9 Maciej Purski2018-02-27 2177} 688838442147d9 Maciej Purski2018-02-27 2178 688838442147d9 Maciej Purski2018-02-27 @2179edev = extcon_find_edev_by_node(muic); 688838442147d9 Maciej Purski2018-02-27 2180of_node_put(muic); 688838442147d9 Maciej Purski2018-02-27 2181if (IS_ERR(edev)) { 688838442147d9 Maciej Purski2018-02-27 2182if (PTR_ERR(edev) == -EPROBE_DEFER) 688838442147d9 Maciej Purski2018-02-27 2183return -EPROBE_DEFER; 688838442147d9 Maciej Purski2018-02-27 2184 dev_err(ctx->dev, "Invalid or missing extcon\n"); 688838442147d9 Maciej Purski2018-02-27 2185return PTR_ERR(edev); 688838442147d9 Maciej Purski2018-02-27 2186} 688838442147d9 Maciej Purski2018-02-27 2187 688838442147d9 Maciej Purski2018-02-27 2188ctx->extcon = edev; 688838442147d9 Maciej Purski2018-02-27 2189 ctx->extcon_nb.notifier_call = sii8620_extcon_notifier; 688838442147d9 Maciej Purski2018-02-27 2190 INIT_WORK(>extcon_wq, sii8620_extcon_work); 688838442147d9 Maciej Purski2018-02-27 @2191ret = extcon_register_notifier(edev, EXTCON_DISP_MHL, >extcon_nb); 688838442147d9 Maciej Purski2018-02-27 2192if (ret) { 688838442147d9 Maciej Purski2018-02-27 2193 dev_err(ctx->dev, "failed to register notifier for MHL\n"); 688838442147d9 Maciej Purski2018-02-27 2194return ret; 688838442147d9 Maciej Purski2018-02-27 2195} 688838442147d9 Maciej Purski2018-02-27 2196 688838442147d9 Maciej Purski2018-02-27 2197return 0; 688838442147d9 Maciej Purski2018-02-27 2198 } 688838442147d9 Maciej Purski2018-02-27 2199 ce6e153f414a73 Andrzej Hajda2016-10-10 2200 static inline struct sii8620 *bridge_to_sii8620(struct drm_bridge 
*bridge) ce6e153f414a73 Andrzej Hajda2016-10-10 2201 { ce6e153f414a73 Andrzej Hajda2016-10-10 2202return container_of(bridge, struct sii8620, bridge); ce6e153f414a73 Andrzej Hajda2016-10-10 2203 } ce6e153f414a73 Andrzej Hajda2016-10-10 2204 e25f1f7c94e16d Maciej Purski2017-08-24 2205 static int sii8620_attach(struct drm_bridge *bridge) e25f1f7c94e16d Maciej Purski2017-08-24 2206
Re: [PATCH v7 4/7] fs: Introduce O_MAYEXEC flag for openat2(2)
On Thu, Jul 23, 2020 at 07:12:24PM +0200, Mickaël Salaün wrote:
> When the O_MAYEXEC flag is passed, openat2(2) may be subject to
> additional restrictions depending on a security policy managed by the
> kernel through a sysctl or implemented by an LSM thanks to the
> inode_permission hook. This new flag is ignored by open(2) and
> openat(2) because of their unspecified flags handling. When used with
> openat2(2), the default behavior is only to forbid to open a directory.

Correct me if I'm wrong, but it looks like you are introducing a magical flag that would mean "let the Linux S&M crowd take an extra special whip for this open()". Why is it done during open? If the caller is passing it deliberately, why not have an explicit request to apply a given torture device to an already opened file? Why not sys_masochism(int fd, char *hurt_flavour), for that matter?
RE: [PATCH] MAINTAINERS: Fix email typo and correct name of Tianshu
Reviewed-by: Tianshu Qiu > -Original Message- > From: Cao, Bingbu > Sent: Monday, July 27, 2020 12:12 PM > To: linux-me...@vger.kernel.org; linux-kernel@vger.kernel.org; > helg...@kernel.org > Cc: sakari.ai...@linux.intel.com; Qiu, Tian Shu ; > Cao, Bingbu ; > bingbu@linux.intel.com > Subject: [PATCH] MAINTAINERS: Fix email typo and correct name of Tianshu > > Fix the typo in email address of Tianshu Qiu and correct the name. > > Signed-off-by: Bingbu Cao > Signed-off-by: Tianshu Qiu > Reported-by: Bjorn Helgaas > --- > MAINTAINERS | 6 +++--- > 1 file changed, 3 insertions(+), 3 deletions(-) > > diff --git a/MAINTAINERS b/MAINTAINERS > index 5392f00cec46..638dfa99751b 100644 > --- a/MAINTAINERS > +++ b/MAINTAINERS > @@ -8765,7 +8765,7 @@ INTEL IPU3 CSI-2 CIO2 DRIVER > M: Yong Zhi > M: Sakari Ailus > M: Bingbu Cao > -R: Tian Shu Qiu > +R: Tianshu Qiu > L: linux-me...@vger.kernel.org > S: Maintained > F: Documentation/userspace-api/media/v4l/pixfmt-srggb10-ipu3.rst > @@ -8774,7 +8774,7 @@ F: drivers/media/pci/intel/ipu3/ > INTEL IPU3 CSI-2 IMGU DRIVER > M: Sakari Ailus > R: Bingbu Cao > -R: Tian Shu Qiu > +R: Tianshu Qiu > L: linux-me...@vger.kernel.org > S: Maintained > F: Documentation/admin-guide/media/ipu3.rst > @@ -12609,7 +12609,7 @@ T:git git://linuxtv.org/media_tree.git > F: drivers/media/i2c/ov2685.c > > OMNIVISION OV2740 SENSOR DRIVER > -M: Tianshu Qiu > +M: Tianshu Qiu > R: Shawn Tu > R: Bingbu Cao > L: linux-me...@vger.kernel.org > -- > 2.7.4
[PATCH] MAINTAINERS: Fix email typo and correct name of Tianshu
Fix the typo in email address of Tianshu Qiu and correct the name. Signed-off-by: Bingbu Cao Signed-off-by: Tianshu Qiu Reported-by: Bjorn Helgaas --- MAINTAINERS | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/MAINTAINERS b/MAINTAINERS index 5392f00cec46..638dfa99751b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8765,7 +8765,7 @@ INTEL IPU3 CSI-2 CIO2 DRIVER M: Yong Zhi M: Sakari Ailus M: Bingbu Cao -R: Tian Shu Qiu +R: Tianshu Qiu L: linux-me...@vger.kernel.org S: Maintained F: Documentation/userspace-api/media/v4l/pixfmt-srggb10-ipu3.rst @@ -8774,7 +8774,7 @@ F:drivers/media/pci/intel/ipu3/ INTEL IPU3 CSI-2 IMGU DRIVER M: Sakari Ailus R: Bingbu Cao -R: Tian Shu Qiu +R: Tianshu Qiu L: linux-me...@vger.kernel.org S: Maintained F: Documentation/admin-guide/media/ipu3.rst @@ -12609,7 +12609,7 @@ T: git git://linuxtv.org/media_tree.git F: drivers/media/i2c/ov2685.c OMNIVISION OV2740 SENSOR DRIVER -M: Tianshu Qiu +M: Tianshu Qiu R: Shawn Tu R: Bingbu Cao L: linux-me...@vger.kernel.org -- 2.7.4
[PATCH 3/3] libsubcmd: Get rid of useless conditional assignments
Conditional assignment does not work properly for variables that Make implicitly sets, among which are CC and AR. To quote tools/scripts/Makefile.include, which handles this properly: # Makefiles suck: This macro sets a default value of $(2) for the # variable named by $(1), unless the variable has been set by # environment or command line. This is necessary for CC and AR # because make sets default values, so the simpler ?= approach # won't work as expected. In other words, the conditional assignments will not run even if the variables are not overridden in the environment; Make will set CC and AR to default values when it starts[1], meaning they're not empty by the time the conditional assignments are evaluated. Since the assignments never run, we can just get rid of them. CC and AR are already set properly by Makefile.include using the macro mentioned in the quote above. In addition, we can get rid of the LD assignment, because it's also set by Makefile.include. [1] https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html Signed-off-by: Thomas Hebb --- tools/lib/subcmd/Makefile | 4 1 file changed, 4 deletions(-) diff --git a/tools/lib/subcmd/Makefile b/tools/lib/subcmd/Makefile index 1c777a72bb39..5f2058a6a1ce 100644 --- a/tools/lib/subcmd/Makefile +++ b/tools/lib/subcmd/Makefile @@ -9,10 +9,6 @@ srctree := $(patsubst %/,%,$(dir $(srctree))) #$(info Determined 'srctree' to be $(srctree)) endif -CC ?= $(CROSS_COMPILE)gcc -LD ?= $(CROSS_COMPILE)ld -AR ?= $(CROSS_COMPILE)ar - RM = rm -f MAKEFLAGS += --no-print-directory -- 2.27.0
[PATCH 1/3] tools build feature: Use CC and CXX from parent
commit c8c188679ccf ("tools build: Use the same CC for feature detection and actual build") changed these assignments from unconditional (:=) to conditional (?=) so that they wouldn't clobber values from the environment. However, conditional assignment does not work properly for variables that Make implicitly sets, among which are CC and CXX. To quote tools/scripts/Makefile.include, which handles this properly: # Makefiles suck: This macro sets a default value of $(2) for the # variable named by $(1), unless the variable has been set by # environment or command line. This is necessary for CC and AR # because make sets default values, so the simpler ?= approach # won't work as expected. In other words, the conditional assignments will not run even if the variables are not overridden in the environment; Make will set CC to "cc" and CXX to "g++" when it starts[1], meaning the variables are not empty by the time the conditional assignments are evaluated. This breaks cross-compilation when CROSS_COMPILE is set but CC isn't, since "cc" gets used for feature detection instead of the cross compiler (and likewise for CXX). To fix the issue, just pass down the values of CC and CXX computed by the parent Makefile, which gets included by the Makefile that actually builds whatever we're detecting features for and so is guaranteed to have good values. This is a better solution anyway, since it means we aren't trying to replicate the logic of the parent build system and so don't risk it getting out of sync. Leave PKG_CONFIG alone, since 1) there's no common logic to compute it in Makefile.include, and 2) it's not an implicit variable, so conditional assignment works properly. 
[1] https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html Fixes: c8c188679ccf ("tools build: Use the same CC for feature detection and actual build") Signed-off-by: Thomas Hebb --- tools/build/Makefile.feature | 2 +- tools/build/feature/Makefile | 2 -- 2 files changed, 1 insertion(+), 3 deletions(-) diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature index cb152370fdef..774f0b0ca28a 100644 --- a/tools/build/Makefile.feature +++ b/tools/build/Makefile.feature @@ -8,7 +8,7 @@ endif feature_check = $(eval $(feature_check_code)) define feature_check_code - feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0) + feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CC=$(CC) CXX=$(CXX) CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0) endef feature_set = $(eval $(feature_set_code)) diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile index b1f0321180f5..93b590d81209 100644 --- a/tools/build/feature/Makefile +++ b/tools/build/feature/Makefile @@ -74,8 +74,6 @@ FILES= \ FILES := $(addprefix $(OUTPUT),$(FILES)) -CC ?= $(CROSS_COMPILE)gcc -CXX ?= $(CROSS_COMPILE)g++ PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config LLVM_CONFIG ?= llvm-config CLANG ?= clang -- 2.27.0
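The pitfall is easy to reproduce outside the tools tree. In the sketch below the makefile path and the fake compiler name are invented; the behavior is GNU make's: CC already carries the built-in default value "cc" by the time the ?= line is read, so the conditional assignment never fires.

```shell
# Throwaway makefile using the broken conditional assignment.
cat > /tmp/cc-demo.mk <<'EOF'
CC ?= my-cross-gcc
$(info CC=$(CC))
all: ;
EOF

# CC unset in the environment: make still reports "CC=cc", because the
# built-in default counts as "defined", so ?= does not assign.
env -u CC make -s -f /tmp/cc-demo.mk

# Only an explicit environment or command-line value takes effect.
env -u CC make -s -f /tmp/cc-demo.mk CC=my-cross-gcc
```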
[PATCH 2/3] tools lib api: Get rid of useless conditional assignments
Conditional assignment does not work properly for variables that Make implicitly sets, among which are CC and AR. To quote tools/scripts/Makefile.include, which handles this properly: # Makefiles suck: This macro sets a default value of $(2) for the # variable named by $(1), unless the variable has been set by # environment or command line. This is necessary for CC and AR # because make sets default values, so the simpler ?= approach # won't work as expected. In other words, the conditional assignments will not run even if the variables are not overridden in the environment; Make will set CC and AR to default values when it starts[1], meaning they're not empty by the time the conditional assignments are evaluated. Since the assignments never run, we can just get rid of them. CC and AR are already set properly by Makefile.include using the macro mentioned in the quote above. In addition, we can get rid of the LD assignment, because it's also set by Makefile.include. [1] https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html Signed-off-by: Thomas Hebb --- tools/lib/api/Makefile | 4 1 file changed, 4 deletions(-) diff --git a/tools/lib/api/Makefile b/tools/lib/api/Makefile index a13e9c7f1fc5..5f2e3f8acbd0 100644 --- a/tools/lib/api/Makefile +++ b/tools/lib/api/Makefile @@ -9,10 +9,6 @@ srctree := $(patsubst %/,%,$(dir $(srctree))) #$(info Determined 'srctree' to be $(srctree)) endif -CC ?= $(CROSS_COMPILE)gcc -AR ?= $(CROSS_COMPILE)ar -LD ?= $(CROSS_COMPILE)ld - MAKEFLAGS += --no-print-directory LIBFILE = $(OUTPUT)libapi.a -- 2.27.0
Re: [PATCH v2 04/20] unify generic instances of csum_partial_copy_nocheck()
On Sun, Jul 26, 2020 at 08:11:32AM +0100, Christoph Hellwig wrote: > On Fri, Jul 24, 2020 at 01:30:40PM +0100, Al Viro wrote: > > > Sorry, I meant csum_and_copy_from_nocheck, just as in this patch. > > > > > > Merging your branch into the net-next tree thus will conflict in > > > the nios2 and asm-geneeric/checksum.h as well as lib/checksum.c. > > > > Noted, but that asm-generic/checksum.h conflict will be "massage > > in net-next/outright removal in this branch"; the same goes for > > lib/checksum.c and nios2. It's c6x that is unpleasant in that respect... > > What about just rebasing your branch on the net-next tree? For now I've just cherry-picked your commit in there. net-next interaction there is minimal; most of the PITA (and potential breakage) is in arch/*...
[PATCH -next] crc: Fix build errors
If CONFIG_DRM_NOUVEAU=y,the following errors are seen while building crc.h. In file included from /scratch/linux/drivers/gpu/drm/nouveau/nouveau_display.c:47: /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h: In function ‘nv50_head_crc_late_register’: /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:109:47: error: parameter name omitted static inline int nv50_head_crc_late_register(struct nv50_head *) {} ^~ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:109:54: warning: no return statement in function returning non-void [-Wreturn-type] static inline int nv50_head_crc_late_register(struct nv50_head *) {} ^ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h: In function ‘nv50_crc_handle_vblank’: /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:111:57: warning: ‘return’ with a value, in function returning void nv50_crc_handle_vblank(struct nv50_head *head) { return 0; } ^ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:111:1: note: declared here nv50_crc_handle_vblank(struct nv50_head *head) { return 0; } ^~ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h: In function ‘nv50_crc_atomic_check_head’: /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:114:28: error: parameter name omitted nv50_crc_atomic_check_head(struct nv50_head *, struct nv50_head_atom *, ^~ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:114:48: error: parameter name omitted nv50_crc_atomic_check_head(struct nv50_head *, struct nv50_head_atom *, ^~~ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:115:7: error: parameter name omitted struct nv50_head_atom *) {} ^~~ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:115:14: warning: no return statement in function returning non-void [-Wreturn-type] struct nv50_head_atom *) {} ^~ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h: In function ‘nv50_crc_atomic_stop_reporting’: /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:118:32: error: parameter name omitted 
nv50_crc_atomic_stop_reporting(struct drm_atomic_state *) {} ^ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h: In function ‘nv50_crc_atomic_init_notifier_contexts’: /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:120:40: error: parameter name omitted nv50_crc_atomic_init_notifier_contexts(struct drm_atomic_state *) {} ^ /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h: In function ‘nv50_crc_atomic_release_notifier_contexts’: /scratch/linux/drivers/gpu/drm/nouveau/dispnv50/crc.h:122:43: error: parameter name omitted Signed-off-by: Peng Wu --- drivers/gpu/drm/nouveau/dispnv50/crc.h | 44 +- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/nouveau/dispnv50/crc.h b/drivers/gpu/drm/nouveau/dispnv50/crc.h index 4bc59e7..3da16cd 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/crc.h +++ b/drivers/gpu/drm/nouveau/dispnv50/crc.h @@ -76,22 +76,22 @@ struct nv50_crc { }; void nv50_crc_init(struct drm_device *dev); -int nv50_head_crc_late_register(struct nv50_head *); +int nv50_head_crc_late_register(struct nv50_head *head); void nv50_crc_handle_vblank(struct nv50_head *head); -int nv50_crc_verify_source(struct drm_crtc *, const char *, size_t *); -const char *const *nv50_crc_get_sources(struct drm_crtc *, size_t *); -int nv50_crc_set_source(struct drm_crtc *, const char *); +int nv50_crc_verify_source(struct drm_crtc *crtc, const char *source_name, size_t *values_cnt); +const char *const *nv50_crc_get_sources(struct drm_crtc *crtc, size_t *count); +int nv50_crc_set_source(struct drm_crtc *crtc, const char *source_str); -int nv50_crc_atomic_check_head(struct nv50_head *, struct nv50_head_atom *, - struct nv50_head_atom *); +int nv50_crc_atomic_check_head(struct nv50_head *head, struct nv50_head_atom *asyh, + struct nv50_head_atom *armh); void nv50_crc_atomic_check_outp(struct nv50_atom *atom); -void nv50_crc_atomic_stop_reporting(struct drm_atomic_state *); -void nv50_crc_atomic_init_notifier_contexts(struct drm_atomic_state *); 
-void nv50_crc_atomic_release_notifier_contexts(struct drm_atomic_state *); -void nv50_crc_atomic_start_reporting(struct drm_atomic_state *); -void nv50_crc_atomic_set(struct nv50_head *, struct nv50_head_atom *); -void nv50_crc_atomic_clr(struct nv50_head *); +void nv50_crc_atomic_stop_reporting(struct drm_atomic_state *state); +void nv50_crc_atomic_init_notifier_contexts(struct
Re: [PATCH] irqchip/gic-v4.1: Use GFP_ATOMIC flag in allocate_vpe_l1_table()
Hi Marc, On 2020/6/30 21:37, Zenghui Yu wrote: Booting the latest kernel with DEBUG_ATOMIC_SLEEP=y on a GICv4.1 enabled box, I get the following kernel splat: [0.053766] BUG: sleeping function called from invalid context at mm/slab.h:567 [0.053767] in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 0, name: swapper/1 [0.053769] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.8.0-rc3+ #23 [0.053770] Call trace: [0.053774] dump_backtrace+0x0/0x218 [0.053775] show_stack+0x2c/0x38 [0.053777] dump_stack+0xc4/0x10c [0.053779] ___might_sleep+0xfc/0x140 [0.053780] __might_sleep+0x58/0x90 [0.053782] slab_pre_alloc_hook+0x7c/0x90 [0.053783] kmem_cache_alloc_trace+0x60/0x2f0 [0.053785] its_cpu_init+0x6f4/0xe40 [0.053786] gic_starting_cpu+0x24/0x38 [0.053788] cpuhp_invoke_callback+0xa0/0x710 [0.053789] notify_cpu_starting+0xcc/0xd8 [0.053790] secondary_start_kernel+0x148/0x200 # ./scripts/faddr2line vmlinux its_cpu_init+0x6f4/0xe40 its_cpu_init+0x6f4/0xe40: allocate_vpe_l1_table at drivers/irqchip/irq-gic-v3-its.c:2818 (inlined by) its_cpu_init_lpis at drivers/irqchip/irq-gic-v3-its.c:3138 (inlined by) its_cpu_init at drivers/irqchip/irq-gic-v3-its.c:5166 It turned out that we're allocating memory using GFP_KERNEL (may sleep) within the CPU hotplug notifier, which is indeed an atomic context. Bad thing may happen if we're playing on a system with more than a single CommonLPIAff group. Avoid it by turning this into an atomic allocation. 
Fixes: 5e5168461c22 ("irqchip/gic-v4.1: VPE table (aka GICR_VPROPBASER) allocation") Signed-off-by: Zenghui Yu --- drivers/irqchip/irq-gic-v3-its.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 6a5a87fc4601..b66eeca442c4 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -2814,7 +2814,7 @@ static int allocate_vpe_l1_table(void) if (val & GICR_VPROPBASER_4_1_VALID) goto out; - gic_data_rdist()->vpe_table_mask = kzalloc(sizeof(cpumask_t), GFP_KERNEL); + gic_data_rdist()->vpe_table_mask = kzalloc(sizeof(cpumask_t), GFP_ATOMIC); if (!gic_data_rdist()->vpe_table_mask) return -ENOMEM; @@ -2881,7 +2881,7 @@ static int allocate_vpe_l1_table(void) pr_debug("np = %d, npg = %lld, psz = %d, epp = %d, esz = %d\n", np, npg, psz, epp, esz); - page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(np * PAGE_SIZE)); + page = alloc_pages(GFP_ATOMIC | __GFP_ZERO, get_order(np * PAGE_SIZE)); if (!page) return -ENOMEM; Do you mind taking this patch into v5.9? Or please let me know if you still have any concerns on it? Thanks, Zenghui
[PATCH] kernel.h: Remove duplicate include of asm/div64.h
This seems to have been added inadvertently in commit 72deb455b5ec ("block: remove CONFIG_LBDAF") Fixes: 72deb455b5ec ("block: remove CONFIG_LBDAF") Signed-off-by: Arvind Sankar Cc: Christoph Hellwig --- include/linux/kernel.h | 1 - 1 file changed, 1 deletion(-) diff --git a/include/linux/kernel.h b/include/linux/kernel.h index 82d91547d122..ddaaaf53a251 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -17,7 +17,6 @@ #include #include #include -#include #define STACK_MAGIC0xdeadbeef -- 2.26.2
[PATCH v3] rtc: ds1307: provide an indication that the watchdog has fired
There's not much feedback when the ds1388 watchdog fires. Generally it yanks on the reset line and the board reboots. Capture the fact that the watchdog has fired in the past so that userspace can retrieve it via WDIOC_GETBOOTSTATUS. This should help distinguish a watchdog triggered reset from a power interruption. Signed-off-by: Chris Packham --- Changes in v3: - Check for watchdog flag in ds1307_wdt_register() Changes in v2: - Set bootstatus to WDIOF_CARDRESET and let userspace decide what to do with the information. drivers/rtc/rtc-ds1307.c | 6 ++ 1 file changed, 6 insertions(+) diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c index 49702942bb08..54c85cdd019d 100644 --- a/drivers/rtc/rtc-ds1307.c +++ b/drivers/rtc/rtc-ds1307.c @@ -1668,6 +1668,8 @@ static const struct watchdog_ops ds1388_wdt_ops = { static void ds1307_wdt_register(struct ds1307 *ds1307) { struct watchdog_device *wdt; + int err; + int val; if (ds1307->type != ds_1388) return; @@ -1676,6 +1678,10 @@ static void ds1307_wdt_register(struct ds1307 *ds1307) if (!wdt) return; + err = regmap_read(ds1307->regmap, DS1388_REG_FLAG, &val); + if (!err && val & DS1388_BIT_WF) + wdt->bootstatus = WDIOF_CARDRESET; + wdt->info = &ds1388_wdt_info; wdt->ops = &ds1388_wdt_ops; wdt->timeout = 99; -- 2.27.0
RE: [PATCH v13 2/9] arm/arm64: KVM: Advertise KVM UID to guests via SMCCC
Hi Will, > -Original Message- > From: Jianyong Wu > Sent: Friday, June 19, 2020 9:01 PM > To: net...@vger.kernel.org; yangbo...@nxp.com; john.stu...@linaro.org; > t...@linutronix.de; pbonz...@redhat.com; sean.j.christopher...@intel.com; > m...@kernel.org; richardcoch...@gmail.com; Mark Rutland > ; w...@kernel.org; Suzuki Poulose > ; Steven Price > Cc: linux-kernel@vger.kernel.org; linux-arm-ker...@lists.infradead.org; > kvm...@lists.cs.columbia.edu; k...@vger.kernel.org; Steve Capper > ; Kaly Xin ; Justin He > ; Wei Chen ; Jianyong Wu > ; nd > Subject: [PATCH v13 2/9] arm/arm64: KVM: Advertise KVM UID to guests via > SMCCC > > From: Will Deacon > > We can advertise ourselves to guests as KVM and provide a basic features > bitmap for discoverability of future hypervisor services. > > Cc: Marc Zyngier > Signed-off-by: Will Deacon > Signed-off-by: Jianyong Wu > --- > arch/arm64/kvm/hypercalls.c | 29 +++-- > 1 file changed, 19 insertions(+), 10 deletions(-) > > diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c > index 550dfa3e53cd..db6dce3d0e23 100644 > --- a/arch/arm64/kvm/hypercalls.c > +++ b/arch/arm64/kvm/hypercalls.c > @@ -12,13 +12,13 @@ > int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) { > u32 func_id = smccc_get_function(vcpu); > - long val = SMCCC_RET_NOT_SUPPORTED; > + u32 val[4] = {SMCCC_RET_NOT_SUPPORTED}; There is a risk as this u32 value will return here and a u64 value will be obtained in guest. For example, The val[0] is initialized as -1 of 0x and the guest get 0x then it will be compared with -1 of 0x Also this problem exists for the transfer of address in u64 type. So the following assignment to "val" should be split into two u32 value and assign to val[0] and val[1] respectively. WDYT? 
Thanks Jianyong > u32 feature; > gpa_t gpa; > > switch (func_id) { > case ARM_SMCCC_VERSION_FUNC_ID: > - val = ARM_SMCCC_VERSION_1_1; > + val[0] = ARM_SMCCC_VERSION_1_1; > break; > case ARM_SMCCC_ARCH_FEATURES_FUNC_ID: > feature = smccc_get_arg1(vcpu); > @@ -28,10 +28,10 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) > case KVM_BP_HARDEN_UNKNOWN: > break; > case KVM_BP_HARDEN_WA_NEEDED: > - val = SMCCC_RET_SUCCESS; > + val[0] = SMCCC_RET_SUCCESS; > break; > case KVM_BP_HARDEN_NOT_REQUIRED: > - val = SMCCC_RET_NOT_REQUIRED; > + val[0] = SMCCC_RET_NOT_REQUIRED; > break; > } > break; > @@ -41,31 +41,40 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) > case KVM_SSBD_UNKNOWN: > break; > case KVM_SSBD_KERNEL: > - val = SMCCC_RET_SUCCESS; > + val[0] = SMCCC_RET_SUCCESS; > break; > case KVM_SSBD_FORCE_ENABLE: > case KVM_SSBD_MITIGATED: > - val = SMCCC_RET_NOT_REQUIRED; > + val[0] = SMCCC_RET_NOT_REQUIRED; > break; > } > break; > case ARM_SMCCC_HV_PV_TIME_FEATURES: > - val = SMCCC_RET_SUCCESS; > + val[0] = SMCCC_RET_SUCCESS; > break; > } > break; > case ARM_SMCCC_HV_PV_TIME_FEATURES: > - val = kvm_hypercall_pv_features(vcpu); > + val[0] = kvm_hypercall_pv_features(vcpu); > break; > case ARM_SMCCC_HV_PV_TIME_ST: > gpa = kvm_init_stolen_time(vcpu); > if (gpa != GPA_INVALID) > - val = gpa; > + val[0] = gpa; > + break; > + case ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID: > + val[0] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0; > + val[1] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1; > + val[2] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2; > + val[3] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3; > + break; > + case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID: > + val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES); > break; > default: > return kvm_psci_call(vcpu); > } > > - smccc_set_retval(vcpu, val, 0, 0, 0); > + smccc_set_retval(vcpu, val[0], val[1], val[2], val[3]); > return 1; > } > -- > 2.17.1
linux-next: build failure after merge of the bluetooth tree
Hi all, After merging the bluetooth tree, today's linux-next build (arm multi_v7_defconfig) failed like this: net/bluetooth/sco.c: In function 'sco_sock_setsockopt': net/bluetooth/sco.c:862:3: error: cannot convert to a pointer type 862 | if (get_user(opt, (u32 __user *)optval)) { | ^~ net/bluetooth/sco.c:862:3: error: cannot convert to a pointer type net/bluetooth/sco.c:862:3: error: cannot convert to a pointer type Caused by commit 00398e1d5183 ("Bluetooth: Add support for BT_PKT_STATUS CMSG data for SCO connections") interacting with commit a7b75c5a8c41 ("net: pass a sockptr_t into ->setsockopt") from the net-next tree. I have applied the following merge fix patch: From: Stephen Rothwell Date: Mon, 27 Jul 2020 13:41:30 +1000 Subject: [PATCH] Bluetooth: fix for introduction of sockptr_t Signed-off-by: Stephen Rothwell --- net/bluetooth/sco.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index 6e6b03844a2a..dcf7f96ff417 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -859,7 +859,7 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname, break; case BT_PKT_STATUS: - if (get_user(opt, (u32 __user *)optval)) { + if (copy_from_sockptr(&opt, optval, sizeof(u32))) { err = -EFAULT; break; } -- 2.27.0 -- Cheers, Stephen Rothwell
[PATCH] [net/ipv6] ip6_output: Add ipv6_pinfo null check
ipv6_pinfo is initialized by inet6_sk(), which can return NULL. Hence it can cause a segmentation fault. Fix this by adding a NULL check. Signed-off-by: Gaurav Singh --- net/ipv6/ip6_output.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index 8a8c2d0cfcc8..7c077a6847e4 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -181,10 +181,10 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb) bool ip6_autoflowlabel(struct net *net, const struct ipv6_pinfo *np) { - if (!np->autoflowlabel_set) - return ip6_default_np_autolabel(net); - else + if (np && np->autoflowlabel_set) return np->autoflowlabel; + else + return ip6_default_np_autolabel(net); } /* -- 2.17.1
Re: [PATCH 2/2] KVM: LAPIC: Set the TDCR settable bits
On Tue, 21 Jul 2020 at 18:51, Vitaly Kuznetsov wrote: > > Wanpeng Li writes: > > > From: Wanpeng Li > > > > Only bits 0, 1, and 3 are settable, others are reserved for APIC_TDCR. > > Let's record the settable value in the virtual apic page. > > > > Signed-off-by: Wanpeng Li > > --- > > arch/x86/kvm/lapic.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c > > index 4ce2ddd..8f7a14d 100644 > > --- a/arch/x86/kvm/lapic.c > > +++ b/arch/x86/kvm/lapic.c > > @@ -2068,7 +2068,7 @@ int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 > > reg, u32 val) > > case APIC_TDCR: { > > uint32_t old_divisor = apic->divide_count; > > > > - kvm_lapic_set_reg(apic, APIC_TDCR, val); > > + kvm_lapic_set_reg(apic, APIC_TDCR, val & 0xb); > > update_divide_count(apic); > > if (apic->divide_count != old_divisor && > > apic->lapic_timer.period) { > > AFAIU bit 2 should be 0 and other upper bits are reserved. Checking on > bare hardware, > > # wrmsr 0x83e 0xb > # rdmsr 0x83e > b > # wrmsr 0x83e 0xc > wrmsr: CPU 0 cannot set MSR 0x083e to 0x000c > # rdmsr 0x83e > b > > Shouldn't we fail the write in case (val & ~0xb) ? Sorry for the late response since I just come back from vacation. I can remove the "others are reserved" in patch description for the next version. It is a little different between Intel and AMD, Intel's bit 2 is 0 and AMD is reserved. On bare-metal, Intel will refuse to set APIC_TDCR once bits except 0, 1, 3 are setting, however, AMD will accept bits 0, 1, 3 and ignore other bits setting as patch does. Before the patch, we can get back anything what we set to the APIC_TDCR, this patch improves it. Wanpeng
[PATCH v3] cpuidle: Fix CFI failure
changes since v2: - add more comments on enter_s2idle to explain why it is necessary to return int even if its return value is never used. changes since v1: - add more description in commit message. *** BLURB HERE *** Neal Liu (1): cpuidle: change enter_s2idle() prototype drivers/acpi/processor_idle.c | 6 -- drivers/cpuidle/cpuidle-tegra.c | 8 +--- drivers/idle/intel_idle.c | 6 -- include/linux/cpuidle.h | 9 ++--- 4 files changed, 19 insertions(+), 10 deletions(-) -- 2.18.0
[PATCH v3] cpuidle: change enter_s2idle() prototype
Control Flow Integrity(CFI) is a security mechanism that disallows changes to the original control flow graph of a compiled binary, making it significantly harder to perform such attacks. init_state_node() assign same function callback to different function pointer declarations. static int init_state_node(struct cpuidle_state *idle_state, const struct of_device_id *matches, struct device_node *state_node) { ... idle_state->enter = match_id->data; ... idle_state->enter_s2idle = match_id->data; } Function declarations: struct cpuidle_state { ... int (*enter) (struct cpuidle_device *dev, struct cpuidle_driver *drv, int index); void (*enter_s2idle) (struct cpuidle_device *dev, struct cpuidle_driver *drv, int index); }; In this case, either enter() or enter_s2idle() would cause CFI check failed since they use same callee. Align function prototype of enter() since it needs return value for some use cases. The return value of enter_s2idle() is no need currently. Signed-off-by: Neal Liu Reviewed-by: Sami Tolvanen --- drivers/acpi/processor_idle.c |6 -- drivers/cpuidle/cpuidle-tegra.c |8 +--- drivers/idle/intel_idle.c |6 -- include/linux/cpuidle.h |9 ++--- 4 files changed, 19 insertions(+), 10 deletions(-) diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index 75534c5..6ffb6c9 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -655,8 +655,8 @@ static int acpi_idle_enter(struct cpuidle_device *dev, return index; } -static void acpi_idle_enter_s2idle(struct cpuidle_device *dev, - struct cpuidle_driver *drv, int index) +static int acpi_idle_enter_s2idle(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) { struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu); @@ -674,6 +674,8 @@ static void acpi_idle_enter_s2idle(struct cpuidle_device *dev, } } acpi_idle_do_entry(cx); + + return 0; } static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr, diff --git 
a/drivers/cpuidle/cpuidle-tegra.c b/drivers/cpuidle/cpuidle-tegra.c index 1500458..a12fb14 100644 --- a/drivers/cpuidle/cpuidle-tegra.c +++ b/drivers/cpuidle/cpuidle-tegra.c @@ -253,11 +253,13 @@ static int tegra_cpuidle_enter(struct cpuidle_device *dev, return err ? -1 : index; } -static void tegra114_enter_s2idle(struct cpuidle_device *dev, - struct cpuidle_driver *drv, - int index) +static int tegra114_enter_s2idle(struct cpuidle_device *dev, +struct cpuidle_driver *drv, +int index) { tegra_cpuidle_enter(dev, drv, index); + + return 0; } /* diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c index f449584..b178da3 100644 --- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -175,13 +175,15 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev, * Invoked as a suspend-to-idle callback routine with frozen user space, frozen * scheduler tick and suspended scheduler clock on the target CPU. */ -static __cpuidle void intel_idle_s2idle(struct cpuidle_device *dev, - struct cpuidle_driver *drv, int index) +static __cpuidle int intel_idle_s2idle(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) { unsigned long eax = flg2MWAIT(drv->states[index].flags); unsigned long ecx = 1; /* break on interrupt flag */ mwait_idle_with_hints(eax, ecx); + + return 0; } /* diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h index ec2ef63..b65909a 100644 --- a/include/linux/cpuidle.h +++ b/include/linux/cpuidle.h @@ -65,10 +65,13 @@ struct cpuidle_state { * CPUs execute ->enter_s2idle with the local tick or entire timekeeping * suspended, so it must not re-enable interrupts at any point (even * temporarily) or attempt to change states of clock event devices. +* +* This callback may point to the same function as ->enter if all of +* the above requirements are met by it. 
*/ - void (*enter_s2idle) (struct cpuidle_device *dev, - struct cpuidle_driver *drv, - int index); + int (*enter_s2idle)(struct cpuidle_device *dev, + struct cpuidle_driver *drv, + int index); }; /* Idle State Flags */ -- 1.7.9.5
linux-next: Fixes tag needs some work in the devfreq tree
Hi all, In commit 332c5b522b7c ("PM / devfrq: Fix indentaion of devfreq_summary debugfs node") Fixes tag Fixes: commit 66d0e797bf09 ("Revert "PM / devfreq: Modify the device name as devfreq(X) for sysfs"") has these problem(s): - leading word 'commit' unexpected -- Cheers, Stephen Rothwell
Re: ext4: delete the invalid BUGON in ext4_mb_load_buddy_gfp()
On 7/27/20 7:24 AM, brookxu wrote: Delete the invalid BUGON in ext4_mb_load_buddy_gfp(), the previous code has already judged whether page is NULL. Signed-off-by: Chunguang Xu Thanks for the patch. LGTM. Feel free to add. Reviewed-by: Ritesh Harjani --- fs/ext4/mballoc.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 28a139f..9b1c3ad 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -1279,9 +1279,6 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp) e4b->bd_buddy_page = page; e4b->bd_buddy = page_address(page) + (poff * sb->s_blocksize); - BUG_ON(e4b->bd_bitmap_page == NULL); - BUG_ON(e4b->bd_buddy_page == NULL); - return 0; err:
Re: [PATCH v2] rtc: ds1307: provide an indication that the watchdog has fired
On Mon, Jul 27, 2020 at 11:13:06AM +1200, Chris Packham wrote: > There's not much feedback when the ds1388 watchdog fires. Generally it > yanks on the reset line and the board reboots. Capture the fact that the > watchdog has fired in the past so that userspace can retrieve it via > WDIOC_GETBOOTSTATUS. This should help distinguish a watchdog triggered > reset from a power interruption. > > Signed-off-by: Chris Packham > --- > Changes in v2: > - Set bootstatus to WDIOF_CARDRESET and let userspace decide what to do with > the information. > > drivers/rtc/rtc-ds1307.c | 8 > 1 file changed, 8 insertions(+) > > diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c > index 49702942bb08..209736db510d 100644 > --- a/drivers/rtc/rtc-ds1307.c > +++ b/drivers/rtc/rtc-ds1307.c > @@ -868,6 +868,14 @@ static int ds1388_wdt_start(struct watchdog_device > *wdt_dev) > struct ds1307 *ds1307 = watchdog_get_drvdata(wdt_dev); > u8 regs[2]; > int ret; > + int val; > + > + ret = regmap_read(ds1307->regmap, DS1388_REG_FLAG, &val); > + if (ret) > + return ret; > + > + if (val & DS1388_BIT_WF) > + wdt_dev->bootstatus = WDIOF_CARDRESET; This should be done during probe, i.e. in ds1307_wdt_register(). Guenter > > ret = regmap_update_bits(ds1307->regmap, DS1388_REG_FLAG, >DS1388_BIT_WF, 0); > -- > 2.27.0 >
[PATCH V2 2/4] spi: lpspi: remove unused fsl_lpspi->chipselect
The cs-gpio is initialized by spi_get_gpio_descs() now. Remove the chipselect. Signed-off-by: Clark Wang --- Changes: V2: - New patch added in the v2 patchset. --- drivers/spi/spi-fsl-lpspi.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c index b0a1bb62f10a..1e426884ac37 100644 --- a/drivers/spi/spi-fsl-lpspi.c +++ b/drivers/spi/spi-fsl-lpspi.c @@ -119,8 +119,6 @@ struct fsl_lpspi_data { bool usedma; struct completion dma_rx_completion; struct completion dma_tx_completion; - - int chipselect[]; }; static const struct of_device_id fsl_lpspi_dt_ids[] = { -- 2.17.1
[PATCH V2 4/4] dt-bindings: lpspi: New property in document DT bindings for LPSPI
Add "fsl,spi-only-use-cs1-sel" to fit i.MX8DXL-EVK. Spi common code does not support use of CS signals discontinuously. It only uses CS1 without using CS0. So, add this property to re-config chipselect value. Signed-off-by: Clark Wang --- Changes: V2: - New patch added in the v2 patchset. --- Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml | 7 +++ 1 file changed, 7 insertions(+) diff --git a/Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml b/Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml index 143b94a1883a..22882e769e26 100644 --- a/Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml +++ b/Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml @@ -34,6 +34,12 @@ properties: - const: per - const: ipg + fsl,spi-only-use-cs1-sel: +description: + spi common code does not support use of CS signals discontinuously. + i.MX8DXL-EVK board only uses CS1 without using CS0. Therefore, add + this property to re-config the chipselect value in the LPSPI driver. + required: - compatible - reg @@ -57,4 +63,5 @@ examples: < IMX7ULP_CLK_DUMMY>; clock-names = "per", "ipg"; spi-slave; +fsl,spi-only-use-cs1-sel; }; -- 2.17.1
[PATCH V2 3/4] spi: lpspi: fix using CS discontinuously on i.MX8DXLEVK
SPI common code does not support using CS discontinuously for now. However, i.MX8DXL-EVK only uses CS1 without CS0. Therefore, add a flag is_only_cs1 to set the correct TCR[PCS]. Signed-off-by: Clark Wang --- Changes: V2: - No changes. --- drivers/spi/spi-fsl-lpspi.c | 11 --- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c index 1e426884ac37..85a5c952389a 100644 --- a/drivers/spi/spi-fsl-lpspi.c +++ b/drivers/spi/spi-fsl-lpspi.c @@ -98,6 +98,7 @@ struct fsl_lpspi_data { struct clk *clk_ipg; struct clk *clk_per; bool is_slave; + bool is_only_cs1; bool is_first_byte; void *rx_buf; @@ -257,10 +258,9 @@ static void fsl_lpspi_set_cmd(struct fsl_lpspi_data *fsl_lpspi) temp |= fsl_lpspi->config.bpw - 1; temp |= (fsl_lpspi->config.mode & 0x3) << 30; + temp |= (fsl_lpspi->config.chip_select & 0x3) << 24; if (!fsl_lpspi->is_slave) { temp |= fsl_lpspi->config.prescale << 27; - temp |= (fsl_lpspi->config.chip_select & 0x3) << 24; - /* * Set TCR_CONT will keep SS asserted after current transfer. * For the first transfer, clear TCR_CONTC to assert SS. @@ -421,7 +421,10 @@ static int fsl_lpspi_setup_transfer(struct spi_controller *controller, fsl_lpspi->config.mode = spi->mode; fsl_lpspi->config.bpw = t->bits_per_word; fsl_lpspi->config.speed_hz = t->speed_hz; - fsl_lpspi->config.chip_select = spi->chip_select; + if (fsl_lpspi->is_only_cs1) + fsl_lpspi->config.chip_select = 1; + else + fsl_lpspi->config.chip_select = spi->chip_select; if (!fsl_lpspi->config.speed_hz) fsl_lpspi->config.speed_hz = spi->max_speed_hz; @@ -835,6 +838,8 @@ static int fsl_lpspi_probe(struct platform_device *pdev) fsl_lpspi = spi_controller_get_devdata(controller); fsl_lpspi->dev = &pdev->dev; fsl_lpspi->is_slave = is_slave; + fsl_lpspi->is_only_cs1 = of_property_read_bool((&pdev->dev)->of_node, + "fsl,spi-only-use-cs1-sel"); controller->bits_per_word_mask = SPI_BPW_RANGE_MASK(8, 32); controller->transfer_one = fsl_lpspi_transfer_one; -- 2.17.1
[PATCH V2 0/4] Some bug fix for lpspi
Hi, This patchset mainly fixes some recently discovered problems about CS for LPSPI module on i.MX8DXLEVK. Add the dt-bindings description for the new property. Clark Wang (4): spi: lpspi: Fix kernel warning dump when probe fail after calling spi_register spi: lpspi: remove unused fsl_lpspi->chipselect spi: lpspi: fix using CS discontinuously on i.MX8DXLEVK dt-bindings: lpspi: New property in document DT bindings for LPSPI .../bindings/spi/spi-fsl-lpspi.yaml | 7 ++ drivers/spi/spi-fsl-lpspi.c | 25 +++ 2 files changed, 21 insertions(+), 11 deletions(-) -- 2.17.1
RE: 答复: PROBLEM: cgroup cost too much memory when transfer small files to tmpfs
Cc Fangxiuning On Fri 24-07-20 09:35:26, jingrui wrote: > > On Friday, July 24, 2020 3:55 PM, Michal Hocko wrote: > > > What is the reason to run under !root cgroup in those sessions if you do > > not care about accounting anyway? > > The systemd not support run those sessions under root cgroup, disable > pam-systemd will not create session/cgroup, but this is not safe and > make systemd-logind not work. Could you be more specific please? As I know, when user call sftp client to send files, the server will call pam-systemd.so lib to create session and cgroup. We can skip call pam-systemd.so by config /etc/pam.d/password-auth drop the line " -session optional pam_systemd.so". But this config is global, and will affect other services, such as ssh login. We don’t find a way just don’t create cgroup dir for sftp. @Xiuning Would you please take a look and give some suggestion? -- Michal Hocko SUSE Labs
[PATCH V2 1/4] spi: lpspi: Fix kernel warning dump when probe fail after calling spi_register
Calling devm_spi_register_controller() too early will cause problem. When probe failed occurs after calling devm_spi_register_controller(), the call of spi_controller_put() will trigger the following warning dump. [2.092138] [ cut here ] [2.096876] kernfs: can not remove 'uevent', no directory [2.102440] WARNING: CPU: 0 PID: 181 at fs/kernfs/dir.c:1503 kernfs_remove_by_name_ns+0xa0/0xb0 [2.42] Modules linked in: [2.114207] CPU: 0 PID: 181 Comm: kworker/0:7 Not tainted 5.4.24-05024-g775c6e8a738c-dirty #1314 [2.122991] Hardware name: Freescale i.MX8DXL EVK (DT) [2.128141] Workqueue: events deferred_probe_work_func [2.133281] pstate: 6005 (nZCv daif -PAN -UAO) [2.138076] pc : kernfs_remove_by_name_ns+0xa0/0xb0 [2.142958] lr : kernfs_remove_by_name_ns+0xa0/0xb0 [2.147837] sp : 8000122bba70 [2.151145] x29: 8000122bba70 x28: 8000119d6000 [2.156462] x27: x26: 800011edbce8 [2.161779] x25: x24: 3ae4f700 [2.167096] x23: 10184c10 x22: 3a3d6200 [2.172412] x21: 800011a464a8 x20: 10126a68 [2.177729] x19: 3ae5c800 x18: 000e [2.183046] x17: 0001 x16: 0019 [2.188362] x15: 0004 x14: 004c [2.193679] x13: x12: 0001 [2.198996] x11: x10: 09c0 [2.204313] x9 : 8000122bb7a0 x8 : 3a3d6c20 [2.209630] x7 : 3a3d6380 x6 : 0001 [2.214946] x5 : 0001 x4 : 3a05eb18 [2.220263] x3 : 0005 x2 : 8000119f1c48 [2.225580] x1 : 2bcbda323bf5a800 x0 : [2.230898] Call trace: [2.233345] kernfs_remove_by_name_ns+0xa0/0xb0 [2.237879] sysfs_remove_file_ns+0x14/0x20 [2.242065] device_del+0x12c/0x348 [2.24] device_unregister+0x14/0x30 [2.249492] spi_unregister_controller+0xac/0x120 [2.254201] devm_spi_unregister+0x10/0x18 [2.258304] release_nodes+0x1a8/0x220 [2.262055] devres_release_all+0x34/0x58 [2.266069] really_probe+0x1b8/0x318 [2.269733] driver_probe_device+0x54/0xe8 [2.273833] __device_attach_driver+0x80/0xb8 [2.278194] bus_for_each_drv+0x74/0xc0 [2.282034] __device_attach+0xdc/0x138 [2.285876] device_initial_probe+0x10/0x18 [2.290063] bus_probe_device+0x90/0x98 [2.293901] deferred_probe_work_func+0x64/0x98 
[2.298442] process_one_work+0x198/0x320 [2.302451] worker_thread+0x1f0/0x420 [2.306208] kthread+0xf0/0x120 [2.309352] ret_from_fork+0x10/0x18 [2.312927] ---[ end trace 58abcdfae01bd3c7 ]--- So put this function at the end of the probe sequence. Signed-off-by: Clark Wang --- Changes: V2: - redo the patch based on the new code. --- drivers/spi/spi-fsl-lpspi.c | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c index a4a42e85e132..b0a1bb62f10a 100644 --- a/drivers/spi/spi-fsl-lpspi.c +++ b/drivers/spi/spi-fsl-lpspi.c @@ -850,12 +850,6 @@ static int fsl_lpspi_probe(struct platform_device *pdev) if (!fsl_lpspi->is_slave) controller->use_gpio_descriptors = true; - ret = devm_spi_register_controller(&pdev->dev, controller); - if (ret < 0) { - dev_err(&pdev->dev, "spi_register_controller error.\n"); - goto out_controller_put; - } - init_completion(&fsl_lpspi->xfer_done); res = platform_get_resource(pdev, IORESOURCE_MEM, 0); @@ -913,6 +907,12 @@ static int fsl_lpspi_probe(struct platform_device *pdev) if (ret < 0) dev_err(&pdev->dev, "dma setup error %d, use pio\n", ret); + ret = devm_spi_register_controller(&pdev->dev, controller); + if (ret < 0) { + dev_err(&pdev->dev, "spi_register_controller error.\n"); + goto out_pm_get; + } + pm_runtime_mark_last_busy(fsl_lpspi->dev); pm_runtime_put_autosuspend(fsl_lpspi->dev); -- 2.17.1
Re: [PATCH v4 1/2] tpm: tis: add support for MMIO TPM on SynQuacer
Hi Jarkko, Thank you for your comments. On Thu, 23 Jul 2020 at 11:36, Jarkko Sakkinen wrote: > > On Fri, Jul 17, 2020 at 05:49:31PM +0900, Masahisa Kojima wrote: > > When fitted, the SynQuacer platform exposes its SPI TPM via a MMIO > > window that is backed by the SPI command sequencer in the SPI bus > > controller. This arrangement has the limitation that only byte size > > accesses are supported, and so we'll need to provide a separate module > > that take this into account. > > > > Signed-off-by: Ard Biesheuvel > > Signed-off-by: Masahisa Kojima > > --- > > drivers/char/tpm/Kconfig | 12 ++ > > drivers/char/tpm/Makefile| 1 + > > drivers/char/tpm/tpm_tis_synquacer.c | 209 +++ > > 3 files changed, 222 insertions(+) > > create mode 100644 drivers/char/tpm/tpm_tis_synquacer.c > > > > diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig > > index 58b4c573d176..a18c314da211 100644 > > --- a/drivers/char/tpm/Kconfig > > +++ b/drivers/char/tpm/Kconfig > > @@ -74,6 +74,18 @@ config TCG_TIS_SPI_CR50 > > If you have a H1 secure module running Cr50 firmware on SPI bus, > > say Yes and it will be accessible from within Linux. > > > > +config TCG_TIS_SYNQUACER > > + tristate "TPM Interface Specification 1.2 Interface / TPM 2.0 FIFO > > Interface (MMIO - SynQuacer)" > > + depends on ARCH_SYNQUACER > > + select TCG_TIS_CORE > > + help > > + If you have a TPM security chip that is compliant with the > > + TCG TIS 1.2 TPM specification (TPM1.2) or the TCG PTP FIFO > > + specification (TPM2.0) say Yes and it will be accessible from > > + within Linux on Socionext SynQuacer platform. > > + To compile this driver as a module, choose M here; > > + the module will be called tpm_tis_synquacer. 
> > +
> >  config TCG_TIS_I2C_ATMEL
> >  	tristate "TPM Interface Specification 1.2 Interface (I2C - Atmel)"
> >  	depends on I2C
> > diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
> > index 9567e5197f74..84db4fb3a9c9 100644
> > --- a/drivers/char/tpm/Makefile
> > +++ b/drivers/char/tpm/Makefile
> > @@ -21,6 +21,7 @@ tpm-$(CONFIG_EFI) += eventlog/efi.o
> >  tpm-$(CONFIG_OF) += eventlog/of.o
> >  obj-$(CONFIG_TCG_TIS_CORE) += tpm_tis_core.o
> >  obj-$(CONFIG_TCG_TIS) += tpm_tis.o
> > +obj-$(CONFIG_TCG_TIS_SYNQUACER) += tpm_tis_synquacer.o
> >
> >  obj-$(CONFIG_TCG_TIS_SPI) += tpm_tis_spi.o
> >  tpm_tis_spi-y := tpm_tis_spi_main.o
> > diff --git a/drivers/char/tpm/tpm_tis_synquacer.c b/drivers/char/tpm/tpm_tis_synquacer.c
> > new file mode 100644
> > index ..ac2a1d2a5001
> > --- /dev/null
> > +++ b/drivers/char/tpm/tpm_tis_synquacer.c
> > @@ -0,0 +1,209 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2020 Linaro Ltd.
> > + *
> > + * This device driver implements MMIO TPM on SynQuacer Platform.
> > + */
> > +#include
> > +#include
> > +#include
> > +#include
> > +#include
> > +#include
> > +#include
> > +#include "tpm.h"
> > +#include "tpm_tis_core.h"
> > +
> > +struct tpm_tis_synquacer_info {
> > +	struct resource res;
> > +	/* irq > 0 means: use irq $irq;
> > +	 * irq = 0 means: autoprobe for an irq;
> > +	 * irq = -1 means: no irq support
> > +	 */
> > +	int irq;
> > +};

> According to the coding style, multi-line comments must begin with an
> empty line.
>
> Also it would be preferable to have the comment prepending the struct
> for easier reading:
>
> /*
>  * irq > 0 means: use irq $irq;
>  * irq = 0 means: autoprobe for an irq;
>  * irq = -1 means: no irq support
>  */
> struct tpm_tis_synquacer_info {

I will modify this.
> > +
> > +struct tpm_tis_synquacer_phy {
> > +	struct tpm_tis_data priv;
> > +	void __iomem *iobase;
> > +};
> > +
> > +static inline struct tpm_tis_synquacer_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *data)
> > +{
> > +	return container_of(data, struct tpm_tis_synquacer_phy, priv);
> > +}
> > +
> > +static int tpm_tis_synquacer_read_bytes(struct tpm_tis_data *data, u32 addr,
> > +					u16 len, u8 *result)
> > +{
> > +	struct tpm_tis_synquacer_phy *phy = to_tpm_tis_tcg_phy(data);
> > +
> > +	while (len--)
> > +		*result++ = ioread8(phy->iobase + addr);
> > +
> > +	return 0;
> > +}
> > +
> > +static int tpm_tis_synquacer_write_bytes(struct tpm_tis_data *data, u32 addr,
> > +					 u16 len, const u8 *value)
> > +{
> > +	struct tpm_tis_synquacer_phy *phy = to_tpm_tis_tcg_phy(data);
> > +
> > +	while (len--)
> > +		iowrite8(*value++, phy->iobase + addr);
> > +
> > +	return 0;
> > +}
> > +
> > +static int tpm_tis_synquacer_read16_bw(struct tpm_tis_data *data,
> > +				       u32 addr, u16 *result)
> > +{
> > +	struct tpm_tis_synquacer_phy *phy = to_tpm_tis_tcg_phy(data);
> > +
> > +	/*
> > +	 * Due to the
[PATCH] media: ov8856: decrease hs_trail time
To meet the MIPI high-speed transmission requirements, decrease the hs_trail time so the sensor passes the MIPI compliance test.

Signed-off-by: David Lu
---
 drivers/media/i2c/ov8856.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/media/i2c/ov8856.c b/drivers/media/i2c/ov8856.c
index 4ca27675cc5a..1f1835b14a24 100644
--- a/drivers/media/i2c/ov8856.c
+++ b/drivers/media/i2c/ov8856.c
@@ -284,7 +284,7 @@ static const struct ov8856_reg mode_3280x2464_regs[] = {
 	{0x4601, 0x80},
 	{0x4800, 0x44},
 	{0x4816, 0x53},
-	{0x481b, 0x58},
+	{0x481b, 0x42},
 	{0x481f, 0x27},
 	{0x4837, 0x16},
 	{0x483c, 0x0f},
@@ -474,7 +474,7 @@ static const struct ov8856_reg mode_1640x1232_regs[] = {
 	{0x4601, 0x80},
 	{0x4800, 0x44},
 	{0x4816, 0x53},
-	{0x481b, 0x58},
+	{0x481b, 0x42},
 	{0x481f, 0x27},
 	{0x4837, 0x16},
 	{0x483c, 0x0f},
--
2.17.1
Re: [PATCH 18/23] init: open code setting up stdin/stdout/stderr
On Tue, Jul 14, 2020 at 09:04:22PM +0200, Christoph Hellwig wrote:
> Don't rely on the implicit set_fs(KERNEL_DS) for ksys_open to work, but
> instead open a struct file for /dev/console and then install it as FD
> 0/1/2 manually.

I really hate that one. Every time we exposed the internal details to the
fucking early init code, we paid for that afterwards. And this goes over
the top wrt the level of details being exposed.

_IF_ you want to keep that thing, move it to fs/file.c, with dire comment
re that being very special shite for init and likely cause of subsequent
trouble whenever anything gets changed, a gnat farts somewhere, etc.

Do not leave that kind of crap sitting around init/*.c; KERNEL_DS may be a
source of occasional PITA, but here you are trading it for a lot worse one
in the future.