st remove it from the
> list, and then free it.
>
> Fixes: 6f3d791f3006 ("Drivers: hv: vmbus: Fix rescind handling issues")
> Signed-off-by: Dan Carpenter
I had this 'queued' in my list,
Reviewed-by: Andrea Parri
Andrea
> ---
> From static analysis. Un
On Fri, Apr 16, 2021 at 03:25:03PM +, Michael Kelley wrote:
> From: Andrea Parri (Microsoft) Sent: Friday, April
> 16, 2021 7:40 AM
> >
> > If a malicious or compromised Hyper-V sends a spurious message of type
> > CHANNELMSG_UNLOAD_RESPONSE, the function vmbu
On Fri, Apr 09, 2021 at 03:49:00PM +, Michael Kelley wrote:
> From: Andrea Parri (Microsoft) Sent: Thursday, April
> 8, 2021 9:15 AM
> >
> > Pointers to ring-buffer packets sent by Hyper-V are used within the
> > guest VM. Hyper-V can send packets with erroneous val
On Mon, Nov 09, 2020 at 11:07:04AM +0100, Andrea Parri (Microsoft) wrote:
> From: Andres Beltran
>
> For additional robustness in the face of Hyper-V errors or malicious
> behavior, validate all values that originate from packets that Hyper-V
> has sent to the guest in the hos
On Mon, Nov 09, 2020 at 11:07:27AM +0100, Andrea Parri (Microsoft) wrote:
> From: Andres Beltran
>
> Pointers to ring-buffer packets sent by Hyper-V are used within the
> guest VM. Hyper-V can send packets with erroneous values or modify
> packet fields after they are processed b
On Sun, Nov 29, 2020 at 06:29:55PM +, Michael Kelley wrote:
> From: Andrea Parri (Microsoft) Sent: Thursday,
> November 26, 2020 11:12 AM
> >
> > Quoting from commit 7527810573436f ("Drivers: hv: vmbus: Introduce
> > the CHANNELMSG_MODIFYCHANNEL message type")
On Sun, Dec 06, 2020 at 04:59:32PM +, Michael Kelley wrote:
> From: Andrea Parri (Microsoft) Sent: Wednesday,
> November 18, 2020 6:37 AM
> >
> > __vmbus_open() and vmbus_teardown_gpadl() do not initialize the memory
> > for the vmbus_chan
On Sun, Dec 06, 2020 at 05:10:26PM +, Michael Kelley wrote:
> From: Andrea Parri (Microsoft) Sent: Wednesday,
> November 18, 2020 6:37 AM
> >
> > vmbus_on_msg_dpc() double fetches from msgtype. The double fetch can
> > lead to an out-of-bound access when accessing t
On Sun, Dec 06, 2020 at 05:14:18PM +, Michael Kelley wrote:
> From: Andrea Parri (Microsoft) Sent: Wednesday,
> November 18, 2020 6:37 AM
> >
> > vmbus_on_msg_dpc() double fetches from payload_size. The double fetch
> > can lead to a buffer overflow when (mem)copyi
On Sun, Dec 06, 2020 at 06:39:39PM +, Michael Kelley wrote:
> From: Andrea Parri (Microsoft) Sent: Wednesday,
> December 2, 2020 1:22 AM
> >
> > The hv_message object is in memory shared with the host. To prevent
> > an erroneous or a malicious host from '
On Tue, Nov 24, 2020 at 04:26:33PM +, Wei Liu wrote:
> On Wed, Nov 18, 2020 at 03:36:47PM +0100, Andrea Parri (Microsoft) wrote:
> > When channel->device_obj is non-NULL, vmbus_onoffer_rescind() could
> > invoke put_device(), that will eventually release the device and fr
> > @@ -1072,12 +1073,19 @@ void vmbus_on_msg_dpc(unsigned long data)
> > /* no msg */
> > return;
> >
> > + /*
> > +* The hv_message object is in memory shared with the host. The host
> > +* could erroneously or maliciously modify such object. Make sure to
> >
On Wed, Dec 02, 2020 at 01:40:04PM +, Wei Liu wrote:
> On Wed, Dec 02, 2020 at 02:37:16PM +0100, Andrea Parri wrote:
> > > > @@ -1072,12 +1073,19 @@ void vmbus_on_msg_dpc(unsigned long data)
> > > > /* no msg */
>
> > @@ -544,7 +545,8 @@ static int negotiate_nvsp_ver(struct hv_device
> > *device,
> > init_packet->msg.v2_msg.send_ndis_config.capability.ieee8021q = 1;
> >
> > if (nvsp_ver >= NVSP_PROTOCOL_VERSION_5) {
> > - init_packet->msg.v2_msg.send_ndis_config.capability.sriov =
> > 1;
>
> > > > @@ -544,7 +545,8 @@ static int negotiate_nvsp_ver(struct hv_device
> > > > *device,
> > > > init_packet->msg.v2_msg.send_ndis_config.capability.ieee8021q =
> > > > 1;
> > > >
> > > > if (nvsp_ver >= NVSP_PROTOCOL_VERSION_5) {
> > > > -
> > > > init_packet->ms
On Tue, Jan 26, 2021 at 12:38:47PM +0100, Andrea Parri (Microsoft) wrote:
> Pointers to receive-buffer packets sent by Hyper-V are used within the
> guest VM. Hyper-V can send packets with erroneous values or modify
> packet fields after they are processed by the guest. To defend agains
On Fri, Jan 15, 2021 at 08:30:22PM -0800, Jakub Kicinski wrote:
> On Thu, 14 Jan 2021 21:26:28 +0100 Andrea Parri (Microsoft) wrote:
> > For additional robustness in the face of Hyper-V errors or malicious
> > behavior, validate all values that originate from packets that Hyper-V
On Sun, Jan 17, 2021 at 03:10:32PM +, Wei Liu wrote:
> On Sat, Jan 16, 2021 at 02:02:01PM +0100, Andrea Parri wrote:
> > On Fri, Jan 15, 2021 at 08:30:22PM -0800, Jakub Kicinski wrote:
> > > On Thu, 14 Jan 2021 21:26:28 +0100 Andrea Parri (Microsoft) wrote:
> > > >
The comment is "misleading"; fix it by adapting a comment from
push_rt_tasks.
Signed-off-by: Andrea Parri
---
kernel/sched/deadline.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 0a17af35..7c17001 100644
--
The flag "dl_boosted" is set by comparing *absolute* deadlines
(c.f., rt_mutex_setprio).
Signed-off-by: Andrea Parri
---
kernel/sched/deadline.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 7c17001..be2c
Use the proper time-units for schedtool's reservation parameters.
Signed-off-by: Andrea Parri
---
Documentation/scheduler/sched-deadline.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/scheduler/sched-deadline.txt
b/Documentation/scheduler/
Put the opening brace last on the line in switch statement.
Signed-off-by: Andrea Parri
---
kernel/time/timer.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 2ece3aa..19e61f2 100644
--- a/kernel/time/timer.c
+++ b/kernel
Hi Michael,
On Thu, Feb 07, 2019 at 11:46:29PM +1100, Michael Ellerman wrote:
> Arch code can set a "dump stack arch description string" which is
> displayed with oops output to describe the hardware platform.
>
> It is useful to initialise this as early as possible, so that an early
> oops will
> > + if (!si)
> > + goto bad_nofile;
> > +
> > + preempt_disable();
> > + if (!(si->flags & SWP_VALID))
> > + goto unlock_out;
>
> After Hugh alluded to barriers, it seems the read of SWP_VALID could be
> reordered with the write in preempt_disable at runtime. Without s
> Alternative implementation could be replacing disable preemption with
> rcu_read_lock_sched and stop_machine() with synchronize_sched().
JFYI, starting with v4.20-rc1, synchronize_rcu{,expedited}() also wait
for preempt-disable sections (the intent seems to retire the RCU-sched
update-side API),
On Thu, Aug 22, 2019 at 03:50:52PM +0200, Petr Mladek wrote:
> On Wed 2019-08-21 07:46:28, John Ogness wrote:
> > On 2019-08-20, Sergey Senozhatsky wrote:
> > > [..]
> > >> > + *
> > >> > + * Memory barrier involvement:
> > >> > + *
> > >> > + * If dB reads from gA, then dC
> I am not suggesting to remove all comments. Some human readable
> explanation is important as long as the code is developed by humans.
>
> I think that I'll have to accept also the extra comments if you are
> really going to use them to check the consistency by a tool. Or
> if they are really us
> > + /*
> > +* bA:
> > +*
> > +* Setup the node to be a list terminator: next_id == id.
> > +*/
> > + WRITE_ONCE(n->next_id, id);
>
> Do we need WRITE_ONCE() here?
> Both "n" and "id" are given as parameters and do not change.
> The assignment must be done before "id" is set as
Sorry for top posting, but I forgot to mention: as you might have
noticed, my @amarulasolutions address is not active anymore; FWIW,
you should still be able to reach me at this @gmail address.
Thanks,
Andrea
On Mon, Aug 26, 2019 at 10:34:36AM +0200, Andrea Parri wrote
> > C S+ponarelease+addroncena
> >
> > {
> > int *y = &a;
> > }
> >
> > P0(int *x, int **y, int *a)
> > {
> > int *r0;
> >
> > *x = 2;
> > r0 = cmpxchg_release(y, a, x);
> > }
> >
> > P1(int *x, int **y)
> > {
> > int *r0;
> >
> > r0 = READ_ONCE(*y);
> > *r0 = 1;
> >
on-multicopy-atomic systems, as the WWC pattern demonstrates.
>
> This patch changes the LKMM to accept either a wr-vis or a reverse
> rw-xbstar link as a proof of non-concurrency.
>
> Signed-off-by: Alan Stern
Acked-by: Andrea
entire series,
Acked-by: Andrea Parri
Thanks,
Andrea
Signed-off-by: Joel Fernandes (Google)
I don't quite understand how you ended up with that Cc: list (maybe some
other LKMM maintainers would have liked to receive this email, to update
their send-email scripts if not otherwise...) but welcome on board Joel!
Acked-by: Andrea P
HvSyntheticInvariantTscControl MSR: guests can
set bit 0 of this synthetic MSR to enable the InvariantTSC feature.
After setting the synthetic MSR, CPUID will enumerate support for
InvariantTSC.
Signed-off-by: Andrea Parri
---
arch/x86/include/asm/hyperv-tlfs.h | 5 +
arch/x86/kernel/cpu/mshyperv.c | 7
On Thu, Oct 03, 2019 at 05:52:00PM +0200, Andrea Parri wrote:
> If the hardware supports TSC scaling, Hyper-V will set bit 15 of the
> HV_PARTITION_PRIVILEGE_MASK in guest VMs with a compatible Hyper-V
> configuration version. Bit 15 corresponds to the
> AccessTscInvariantContro
> > @@ -244,21 +234,18 @@ int vmbus_connect(void)
> > * version.
> > */
> >
> > - version = VERSION_CURRENT;
> > + for (i = 0; ; i++) {
> > + version = vmbus_versions[i];
> > + if (version == VERSION_INVAL)
> > + goto cleanup;
>
> If you use e.
On Mon, Oct 07, 2019 at 05:25:18PM +, Dexuan Cui wrote:
> > From: linux-hyperv-ow...@vger.kernel.org
> > On Behalf Of Andrea Parri
> > Sent: Monday, October 7, 2019 9:31 AM
> >
> > +/*
> > + * Table of VMBus versions listed from newest to oldest; t
> IIUC, you're suggesting that I do:
>
> for (i = 0; i < ARRAY_SIZE(vmbus_versions); i++) {
> version = vmbus_versions[i];
>
> ret = vmbus_negotiate_version(msginfo, version);
> if (ret == -ETIMEDOUT)
> goto cleanup;
>
>
On Mon, Oct 07, 2019 at 04:18:26PM +0200, Dmitry Vyukov wrote:
> On Mon, Oct 7, 2019 at 4:14 PM Andrea Parri wrote:
> >
> > > > > static struct taskstats *taskstats_tgid_alloc(struct task_struct
> > > > > *tsk)
> > > > > {
&
On Mon, Oct 07, 2019 at 05:41:10PM +, Dexuan Cui wrote:
> > From: linux-hyperv-ow...@vger.kernel.org
> > On Behalf Of Andrea Parri
> > Sent: Monday, October 7, 2019 9:31 AM
> >
> > Hi all,
> >
> > The patchset:
> >
> > - simplifi
On Tue, Oct 08, 2019 at 04:24:14PM +0200, Christian Brauner wrote:
> On Tue, Oct 08, 2019 at 04:20:35PM +0200, Andrea Parri wrote:
> > On Mon, Oct 07, 2019 at 04:18:26PM +0200, Dmitry Vyukov wrote:
> > > On Mon, Oct 7, 2019 at 4:14 PM Andrea Parri
> > > wrote:
&g
> Oh ups, yeah of course :)
> https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=taskstats_syzbot
You forgot to update the commit msg. It looks good to me modulo that.
Thanks,
Andrea
On Tue, Oct 08, 2019 at 10:41:42PM +, Dexuan Cui wrote:
> > From: Vitaly Kuznetsov
> > Sent: Tuesday, October 8, 2019 6:00 AM
> > ...
> > > Looking at the uses of VERSION_INVAL, I find one remaining occurrence
> > > of this macro in vmbus_bus_resume(), which does:
> > >
> > > if (vmbus_prot
"taskstats: cleanup ->signal->stats allocation")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christian Brauner
> Reviewed-by: Dmitry Vyukov
Reviewed-by: Andrea Parri
Thanks,
Andrea
> ---
> /* v1 */
> Link:
> https://lore.kernel.org/r/20191005112806.1
On Wed, Oct 09, 2019 at 09:45:50AM +0200, Dmitry Vyukov wrote:
> On Sat, Oct 5, 2019 at 6:16 AM Dmitry Vyukov wrote:
> >
> > On Sat, Oct 5, 2019 at 2:58 AM Eric Dumazet wrote:
> > > > This one is tricky. What I think we need to avoid is an onslaught of
> > > > patches adding READ_ONCE/WRITE_ONCE
Hi Christian,
On Mon, Oct 07, 2019 at 01:52:16AM +0200, Christian Brauner wrote:
> When assigning and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole
an.brau...@ubuntu.com
> ---
> /* v1 */
> Link:
> https://lore.kernel.org/r/20191005112806.13960-1-christian.brau...@ubuntu.com
>
> /* v2 */
> - Dmitry Vyukov , Marco Elver :
> - fix the original double-checked locking using memory barriers
>
> /* v3 */
> - Andrea Parri :
> > > static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
> > > {
> > > struct signal_struct *sig = tsk->signal;
> > > - struct taskstats *stats;
> > > + struct taskstats *stats_new, *stats;
> > >
> > > - if (sig->stats || thread_group_empty(tsk))
> > > -
Hyper-V has added VMBus protocol versions 5.1 and 5.2 in recent release
versions. Allow Linux guests to negotiate these new protocol versions
on versions of Hyper-V that support them.
Signed-off-by: Andrea Parri
---
drivers/hv/connection.c | 12 +++-
include/linux/hyperv.h | 4
As an alternative, introduce a table with the version numbers listed
in order (from the most recent to the oldest). vmbus_connect() loops
through the versions listed in the table until it gets an accepted
connection or gets to the end of the table (invalid version).
Suggested-by: Michael Kelley
Signed-off-by: An
Hi all,
The patchset:
- simplifies/refactors the VMBus negotiation code by introducing
the table of VMBus protocol versions (patch 1/2),
- enables VMBus protocol versions 5.1 and 5.2 (patch 2/2).
Thanks,
Andrea
Andrea Parri (2):
Drivers: hv: vmbus: Introduce table of VMBus protocol
On Fri, Sep 06, 2019 at 02:11:29PM -0400, Alan Stern wrote:
> Folks:
>
> I have spent some time writing up a section for
> tools/memory-model/Documentation/explanation.txt on plain accesses and
> data races. The initial version is below.
>
> I'm afraid it's rather long and perhaps gets too bog
On Mon, Sep 23, 2019 at 04:49:31PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 19, 2019 at 02:59:06PM +0100, David Howells wrote:
>
> > But I don't agree with this. You're missing half the barriers. There
> > should
> > be *four* barriers. The document mandates only 3 barriers, and uses
> > REA
On Mon, Oct 21, 2019 at 01:33:27PM +0200, Christian Brauner wrote:
> When assigning and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole thread-group ge
parameter to cap the VMBus version (Dexuan Cui)
[1] https://lkml.kernel.org/r/20191007163115.26197-1-parri.and...@gmail.com
Andrea Parri (3):
Drivers: hv: vmbus: Introduce table of VMBus protocol versions
Drivers: hv: vmbus: Enable VMBus protocol versions 4.1, 5.1 and 5.2
Drivers: hv: vmbus: Add
Signed-off-by: Andrea Parri
---
drivers/hv/connection.c | 15 +--
drivers/net/hyperv/netvsc.c | 6 +++---
include/linux/hyperv.h | 8 +++-
net/vmw_vsock/hyperv_transport.c | 4 ++--
4 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/drivers/hv
.
Add the module parameter "max_version", to upper-bound the VMBus
versions guests can negotiate.
Suggested-by: Dexuan Cui
Signed-off-by: Andrea Parri
---
drivers/hv/connection.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/drivers/hv/connection.c b/drivers/hv/co
> > @@ -244,20 +232,18 @@ int vmbus_connect(void)
> > * version.
> > */
> >
> > - version = VERSION_CURRENT;
> > + for (i = 0; i < ARRAY_SIZE(vmbus_versions); i++) {
> > + version = vmbus_versions[i];
> >
> > - do {
> > ret = vmbus_negotiate_version(msginfo,
> > @@ -182,15 +182,21 @@ static inline u32 hv_get_avail_to_write_percent(
> > * 2 . 4 (Windows 8)
> > * 3 . 0 (Windows 8 R2)
> > * 4 . 0 (Windows 10)
> > + * 4 . 1 (Windows 10 RS3)
> > * 5 . 0 (Newer Windows 10)
> > + * 5 . 1 (Windows 10 RS4)
> > + * 5 . 2 (Windows Server 2019, RS5)
dmesg read:
[0.000138] Booting paravirtualized kernel on Hyper-V
Reported-by: Michael Kelley
Signed-off-by: Andrea Parri
---
arch/x86/kernel/cpu/mshyperv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index 105844d542e5..c7
> > @@ -154,6 +154,8 @@ static uint32_t __init ms_hyperv_platform(void)
>
> This function is for platform detection only.
>
> > if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
> > return 0;
> >
> > + pv_info.name = "Hyper-V";
> > +
>
> At this point we're not sure if Li
dmesg read:
[0.000172] Booting paravirtualized kernel on Hyper-V
Reported-by: Michael Kelley
Signed-off-by: Andrea Parri
---
Changes since v1 ([1]):
- move the setting of pv_info.name to ms_hyperv_init_platform() (Wei Liu)
[1] https://lkml.kernel.org/r/20191015092937.11244-1-parri.and...@gmail.
/20191007163115.26197-1-parri.and...@gmail.com
Andrea Parri (3):
Drivers: hv: vmbus: Introduce table of VMBus protocol versions
Drivers: hv: vmbus: Enable VMBus protocol versions 4.1, 5.1 and 5.2
Drivers: hv: vmbus: Add module parameter to cap the VMBus version
drivers/hv/connection.c | 72
Signed-off-by: Andrea Parri
---
drivers/hv/connection.c | 13 -
include/linux/hyperv.h | 8 +++-
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 8dc48f53c1ac4..cadfb34b38d80 100644
--- a/drivers/hv/connection.c
+++ b
You mean pr_info?
> pv_info.name = "Hyper-V";
> ^~~
Ouch, sorry for this...
>
> Wrap it into a #ifdef to fix this.
>
> Fixes: 628270ef628a ("x86/hyperv: Set pv_info.name to "Hyper-V"")
> Signed-off-by: YueHaibing
Reviewed-by: Andrea Parri
> Fixes: 235b62176712 ("mm/swap: add cluster lock")
> Signed-off-by: "Huang, Ying"
> Not-Nacked-by: Hugh Dickins
> Cc: Paul E. McKenney
> Cc: Minchan Kim
> Cc: Johannes Weiner
> Cc: Tim Chen
> Cc: Mel Gorman
> Cc: Jérôme Glisse
> Cc: Mich
Fixes to inline comments, documentation, script usage.
Cc: Alan Stern
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Nicholas Piggin
Cc: David Howells
Cc: Jade Alglave
Cc: Luc Maranget
Cc: "Paul E. McKenney"
Cc: Akira Yokosawa
Cc: Daniel Lustig
Andrea Parri (2):
to
Use "herd7" in each such reference.
Signed-off-by: Andrea Parri
Cc: Alan Stern
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Nicholas Piggin
Cc: David Howells
Cc: Jade Alglave
Cc: Luc Maranget
Cc: "Paul E. McKenney"
Cc: Akira Yokosawa
Cc: Daniel Lustig
---
The comment should say "Sometimes" for the result.
Signed-off-by: Andrea Parri
Cc: Alan Stern
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Nicholas Piggin
Cc: David Howells
Cc: Jade Alglave
Cc: Luc Maranget
Cc: "Paul E. McKenney"
Cc: Akira Yokosawa
WRITE_ONCE(*x, 1);
smp_store_release(y, 1);
}
P1(int *x, int *y, int *z)
{
int r0;
int r1;
int r2;
r0 = READ_ONCE(*y);
WRITE_ONCE(*z, r0);
r1 = smp_load_acquire(z);
r2 = READ_ONCE(*x);
}
exists (1:r0=1 /\ 1:r2=0)
Signed-off-by: Andrea Parri
Cc: Alan
On Mon, Feb 11, 2019 at 03:38:59PM +0100, Petr Mladek wrote:
> On Mon 2019-02-11 13:50:35, Andrea Parri wrote:
> > Hi Michael,
> >
> >
> > On Thu, Feb 07, 2019 at 11:46:29PM +1100, Michael Ellerman wrote:
> > > Arch code can set a "dump stack arch des
On Wed, Feb 20, 2019 at 10:26:04AM +0100, Peter Zijlstra wrote:
> On Tue, Feb 19, 2019 at 06:01:17PM -0800, Paul E. McKenney wrote:
> > On Tue, Feb 19, 2019 at 11:57:37PM +0100, Andrea Parri wrote:
> > > Remove this subtle (and, AFAICT, unused) ordering: we can add it back,
>
On Wed, Feb 20, 2019 at 09:57:00AM +, Will Deacon wrote:
> On Wed, Feb 20, 2019 at 10:26:04AM +0100, Peter Zijlstra wrote:
> > On Tue, Feb 19, 2019 at 06:01:17PM -0800, Paul E. McKenney wrote:
> > > On Tue, Feb 19, 2019 at 11:57:37PM +0100, Andrea Parri wrote:
> > >
> >> > > + * Order the stores above in vsnprintf() vs the store
> >> > > of the
> >> > > + * space below which joins the two strings. Note this
> >> > > doesn't
> >> > > + * make the code truly race free because there is no
> >> > > barrier on
> >> > > +
On Mon, Jan 21, 2019 at 01:25:26PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 21, 2019 at 11:51:21AM +0100, Andrea Parri wrote:
> > On Wed, Jan 16, 2019 at 07:42:18PM +0100, Andrea Parri wrote:
> > > The smp_wmb() in move_queued_task() (c.f., __set_task_cpu()) pairs with
> &
)) to honor
this address dependency. Also, mark the accesses to ->cpu and ->on_rq
with READ_ONCE()/WRITE_ONCE() to comply with the LKMM.
Signed-off-by: Andrea Parri
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: "Paul E. McKenney"
Cc: Alan Stern
Cc: Will Deacon
---
Changes in v
> @@ -131,7 +159,7 @@ let rec rcu-fence = rcu-gp | srcu-gp |
> (rcu-fence ; rcu-link ; rcu-fence)
>
> (* rb orders instructions just as pb does *)
> -let rb = prop ; po ; rcu-fence ; po? ; hb* ; pb*
> +let rb = prop ; po ; rcu-fence ; po? ; hb* ; pb* ; [marked]
Testing has revealed some s
> So, you are saying that ACQUIRE does not guarantee that "po-later stores
> on the same CPU and all propagated stores from other CPUs
> must propagate to all other CPUs after the acquire operation "?
> I was reading about acquire before posting this and trying to understand,
> and this was my con
versions of these functions.
>
> Co-developed-by: Peter Zijlstra (Intel)
> Signed-off-by: Elena Reshetova
Reviewed-by: Andrea Parri
Andrea
> ---
> Documentation/core-api/refcount-vs-atomic.rst | 24 +---
> arch/x86/include/asm
the ACQUIRE itself.
Use READ_ONCE() to load ->cpu in task_rq() (c.f., task_cpu()) to honour
this address dependency between loads; also, mark the store to ->cpu in
__set_task_cpu() by using WRITE_ONCE() in order to tell the compiler to
not mess/tear this (synchronizing) memory access.
Signed-off
[...]
> The difficulty with incorporating plain accesses in the memory model
> is that the compiler has very few constraints on how it treats plain
> accesses. It can eliminate them, duplicate them, rearrange them,
> merge them, split them up, and goodness knows what else. To make some
> sense o
On Wed, Jan 16, 2019 at 10:36:58PM +0100, Andrea Parri wrote:
> [...]
>
> > The difficulty with incorporating plain accesses in the memory model
> > is that the compiler has very few constraints on how it treats plain
> > accesses. It can eliminate them, duplicate them, r
On Mon, Nov 26, 2018 at 05:44:12PM +0100, Andrea Parri wrote:
> As the comments for wake_up_bit() and waitqueue_active() point out,
> the barriers are needed to order the clearing of the _FL_NOT_READY
> bit and the waitqueue_active() load; match the implicit barrier in
> pre
or lock accesses). This work is based on an
> initial proposal created by Andrea Parri back in December 2017,
> although it has grown a lot since then.
>
> The adaptation involves two main aspects: recognizing the ordering
> induced by plain accesses and detecting data races. They ar
Hi Elena,
[...]
> **Important note for maintainers:
>
> Some functions from refcount_t API defined in lib/refcount.c
> have different memory ordering guarantees than their atomic
> counterparts.
> The full comparison can be seen in
> https://lkml.org/lkml/2017/11/15/57 and it is hopefully soon
>
uct.stack_refcount to refcount_t
For the series, please feel free to add:
Reviewed-by: Andrea Parri
(You may still want to update the references to the 'refcount-vs-atomic'
doc. in the commit messages.)
Andrea
>
> fs/exec.c| 4 ++--
> fs/proc/task
On Fri, Jan 18, 2019 at 10:10:22AM -0500, Alan Stern wrote:
> On Thu, 17 Jan 2019, Andrea Parri wrote:
>
> > > Can the compiler (maybe, it does?) transform, at the C or at the "asm"
> > > level, LB1's P0 in LB2's P0 (LB1 and LB2 are reported below)?
&
On Wed, Jan 16, 2019 at 07:42:18PM +0100, Andrea Parri wrote:
> The smp_wmb() in move_queued_task() (c.f., __set_task_cpu()) pairs with
> the composition of the dependency and the ACQUIRE in task_rq_lock():
>
> move_queued_task() task_rq_lock()
>
>
On Mon, Jan 21, 2019 at 10:52:37AM +0100, Dmitry Vyukov wrote:
[...]
> > Am I missing something or refcount_dec_and_test does not in fact
> > provide ACQUIRE ordering?
> >
> > +case 5) - decrement-based RMW ops that return a value
> > +-
> > +
>
unterpart
>
> Suggested-by: Kees Cook
> Reviewed-by: David Windsor
> Reviewed-by: Hans Liljestrand
> Signed-off-by: Elena Reshetova
Reviewed-by: Andrea Parri
(Same remark about the reference in the commit message. ;-) )
Andrea
> ---
> kernel/kcov.c | 9 +
>
On Mon, Jan 21, 2019 at 01:29:11PM +0100, Dmitry Vyukov wrote:
> On Mon, Jan 21, 2019 at 12:45 PM Andrea Parri
> wrote:
> >
> > On Mon, Jan 21, 2019 at 10:52:37AM +0100, Dmitry Vyukov wrote:
> >
> > [...]
> >
> > > > Am I missing som
On Tue, Apr 30, 2019 at 05:08:43PM +0800, Yan, Zheng wrote:
> On Tue, Apr 30, 2019 at 4:26 PM Peter Zijlstra wrote:
> >
> > On Mon, Apr 29, 2019 at 10:15:00PM +0200, Andrea Parri wrote:
> > > This barrier only applies to the read-modify-write operations; in
> > >
On Tue, Apr 30, 2019 at 01:16:57AM +0200, Andrea Parri wrote:
> Hi Mike,
>
> > >This barrier only applies to the read-modify-write operations; in
> > >particular, it does not apply to the atomic_read() primitive.
> > >
> > >Replace the barrier with an smp
On Thu, May 09, 2019 at 10:36:54AM -0700, Paul E. McKenney wrote:
> On Tue, May 07, 2019 at 03:16:13PM -0700, Paul E. McKenney wrote:
> > On Wed, May 01, 2019 at 01:27:13PM -0700, Paul E. McKenney wrote:
> > > On Wed, May 01, 2019 at 03:16:55PM -0400, Steven Rostedt wrote:
> > > > On Wed, 1 May 201
On Thu, May 09, 2019 at 11:40:25PM +0200, Andrea Parri wrote:
> On Thu, May 09, 2019 at 10:36:54AM -0700, Paul E. McKenney wrote:
> > On Tue, May 07, 2019 at 03:16:13PM -0700, Paul E. McKenney wrote:
> > > On Wed, May 01, 2019 at 01:27:13PM -0700, Paul E. McKenney wrote:
>
> > > Adding some "sched" folks in Cc: hopefully, they can shed some light
> > > about this.
> >
> > +Thomas, +Sebastian
> >
> > Thread starts here:
> >
> > http://lkml.kernel.org/r/20190427180246.ga15...@linux.ibm.com
>
> Peter Zijlstra kindly volunteered over IRC to look at this more closely
Hi Mark,
On Wed, May 22, 2019 at 02:22:32PM +0100, Mark Rutland wrote:
> Currently architectures return inconsistent types for atomic64 ops. Some
> return
> long (e.g. powerpc), some return long long (e.g. arc), and some return s64
> (e.g. x86).
(only partially related, but probably worth askin