On Wed, May 12, 2021 at 10:08:21PM +0530, Kajol Jain wrote:
> +static void nvdimm_pmu_read(struct perf_event *event)
> +{
> + struct nvdimm_pmu *nd_pmu = to_nvdimm_pmu(event->pmu);
> +
> + /* jump to arch/platform specific callbacks if any */
> + if (nd_pmu && nd_pmu->read)
> +
On Thu, May 13, 2021 at 05:56:14PM +0530, kajoljain wrote:
> But yes the current read/add/del functions are not adding value. We
> could add an arch/platform specific function which could handle the
> capturing of the counter data and do the rest of the operation here,
> is this approach better?
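The callback-dispatch idea in this thread can be sketched in plain userspace C. All names mirror the quoted patch, but the struct layout, the callback signature, and the value returned by the fake platform hook are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch only: not taken from the merged driver. */
struct nvdimm_pmu {
	void (*read)(struct nvdimm_pmu *pmu, long *count);
};

/* Generic read: jump to the arch/platform-specific callback, if any. */
static void nvdimm_pmu_read(struct nvdimm_pmu *pmu, long *count)
{
	if (pmu && pmu->read)
		pmu->read(pmu, count);
}

/* Fake platform hook standing in for a real hardware counter read. */
static void platform_read(struct nvdimm_pmu *pmu, long *count)
{
	(void)pmu;
	*count = 42;
}
```

With this shape the generic layer stays a thin dispatcher and the arch code owns the actual counter capture, which is the split being proposed above.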
On Tue, Jul 14, 2020 at 12:02:09AM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> The PKRS MSR is defined as a per-core register. This isolates memory
> access by CPU. Unfortunately, the MSR is not preserved by XSAVE.
> Therefore, we must preserve the protections for individual tasks e
On Tue, Jul 14, 2020 at 12:02:16AM -0700, ira.we...@intel.com wrote:
> +static pgprot_t dev_protection_enable_get(struct dev_pagemap *pgmap,
> pgprot_t prot)
> +{
> + if (pgmap->flags & PGMAP_PROT_ENABLED && dev_page_pkey != PKEY_INVALID)
> {
> + pgprotval_t val = pgprot_val(prot
On Tue, Jul 14, 2020 at 12:02:17AM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> Device managed pages may have additional protections. These protections
> need to be removed prior to valid use by kernel users.
>
> Check for special treatment of device managed pages in kmap and take
>
On Tue, Jul 14, 2020 at 11:53:22AM -0700, Ira Weiny wrote:
> On Tue, Jul 14, 2020 at 10:27:01AM +0200, Peter Zijlstra wrote:
> > On Tue, Jul 14, 2020 at 12:02:09AM -0700, ira.we...@intel.com wrote:
> > > From: Ira Weiny
> > >
> > > The PKRS MSR is defined as
On Tue, Jul 14, 2020 at 12:06:16PM -0700, Ira Weiny wrote:
> On Tue, Jul 14, 2020 at 10:44:51AM +0200, Peter Zijlstra wrote:
> > So, if I followed along correctly, you're proposing to do a WRMSR per
> > k{,un}map{_atomic}(), sounds like excellent performance all-round :-(
>
On Tue, Jul 14, 2020 at 12:10:47PM -0700, Ira Weiny wrote:
> On Tue, Jul 14, 2020 at 10:40:57AM +0200, Peter Zijlstra wrote:
> > That's an anti-pattern vs static_keys, I'm thinking you actually want
> > static_key_slow_{inc,dec}() instead of {enable,disable}().
>
>
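Peter's point is that static_key_enable()/disable() are boolean and do not nest, while static_key_slow_inc()/dec() reference count, so independent users compose. A toy userspace model of the difference (the real API lives in linux/jump_label.h and patches code at runtime; this plain counter is only an illustration of the semantics):

```c
#include <assert.h>

/* Toy model, not the real jump_label API: a counted key, so the key
 * only goes false again when every user that incremented it has
 * decremented it. */
static int key_count;

static void static_key_slow_inc(void) { key_count++; }
static void static_key_slow_dec(void) { key_count--; }
static int  static_key_enabled(void)  { return key_count > 0; }
```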
On Tue, Jul 14, 2020 at 12:42:11PM -0700, Dave Hansen wrote:
> On 7/14/20 12:29 PM, Peter Zijlstra wrote:
> > On Tue, Jul 14, 2020 at 12:06:16PM -0700, Ira Weiny wrote:
> >> On Tue, Jul 14, 2020 at 10:44:51AM +0200, Peter Zijlstra wrote:
> >>> So, if I followed along
On Fri, Jul 17, 2020 at 12:20:43AM -0700, ira.we...@intel.com wrote:
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index f362ce0d5ac0..d69250a7c1bf 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -42,6 +42,7 @@
> #include
> #include
> #
On Fri, Jul 17, 2020 at 12:20:41AM -0700, ira.we...@intel.com wrote:
> +/*
> + * Get a new pkey register value from the user values specified.
> + *
> + * Kernel users use the same flags as user space:
> + * PKEY_DISABLE_ACCESS
> + * PKEY_DISABLE_WRITE
> + */
> +u32 get_new_pkr(u32 old_pkr,
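The helper under discussion packs two disable bits per pkey into the 32-bit register image. A hedged userspace reconstruction of that bit math (the PKEY_DISABLE_* constants match the uapi values; the function body is inferred from the quoted comment, not copied from the merged code):

```c
#include <assert.h>
#include <stdint.h>

#define PKEY_DISABLE_ACCESS 0x1	/* uapi flag values */
#define PKEY_DISABLE_WRITE  0x2
#define PKR_BITS_PER_PKEY   2
#define PKR_AD_BIT          0x1	/* access-disable bit within a slot */
#define PKR_WD_BIT          0x2	/* write-disable bit within a slot */

/* Clear @pkey's two bits in the old register image, then set them
 * from the PKEY_DISABLE_* flags the caller passed in. */
static uint32_t update_pkey_val(uint32_t old_val, int pkey, unsigned long flags)
{
	int shift = pkey * PKR_BITS_PER_PKEY;
	uint32_t new_val = old_val & ~((uint32_t)(PKR_AD_BIT | PKR_WD_BIT) << shift);

	if (flags & PKEY_DISABLE_ACCESS)
		new_val |= (uint32_t)PKR_AD_BIT << shift;
	if (flags & PKEY_DISABLE_WRITE)
		new_val |= (uint32_t)PKR_WD_BIT << shift;
	return new_val;
}
```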
On Fri, Jul 17, 2020 at 12:20:43AM -0700, ira.we...@intel.com wrote:
> +/*
> + * Write the PKey Register Supervisor. This must be run with preemption
> + * disabled as it does not guarantee the atomicity of updating the pkrs_cache
> + * and MSR on its own.
> + */
> +void write_pkrs(u32 pkrs_val)
>
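write_pkrs() pairs a software cache with the MSR so redundant writes can be skipped, which is why the comment insists preemption be off while the two are updated together. A userspace sketch with the WRMSR replaced by a counter (the per-CPU pkrs_cache becomes a plain global here; purely illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: cache the last value "written to the MSR" and skip the
 * expensive write when nothing changed. In the kernel, cache and MSR
 * must be updated without preemption in between or they can diverge. */
static uint32_t pkrs_cache;
static int wrmsr_count;

static void write_pkrs(uint32_t pkrs_val)
{
	if (pkrs_cache != pkrs_val) {
		pkrs_cache = pkrs_val;
		wrmsr_count++;	/* stand-in for wrmsrl(MSR_IA32_PKRS, ...) */
	}
}
```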
On Fri, Jul 17, 2020 at 12:20:51AM -0700, ira.we...@intel.com wrote:
> +static pgprot_t dev_protection_enable_get(struct dev_pagemap *pgmap,
> pgprot_t prot)
> +{
> + if (pgmap->flags & PGMAP_PROT_ENABLED && dev_page_pkey != PKEY_INVALID)
> {
> + pgprotval_t val = pgprot_val(prot)
On Fri, Jul 17, 2020 at 12:20:51AM -0700, ira.we...@intel.com wrote:
> +void dev_access_disable(void)
> +{
> + unsigned long flags;
> +
> + if (!static_branch_unlikely(&dev_protection_static_key))
> + return;
> +
> + local_irq_save(flags);
> + current->dev_page_access_re
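The enable/disable pair is reference counted per task so sections can nest; only the outermost call actually flips the protection. A minimal single-threaded model of just that nesting semantic (no static key, no IRQ masking, and the per-task counter is a global here; names are shortened from the quoted patch):

```c
#include <assert.h>

/* Single-threaded model of the nesting only. */
static int dev_page_access_ref;
static int protection_armed = 1;	/* 1 = pkey protection in force */

static void dev_access_enable(void)
{
	if (dev_page_access_ref++ == 0)
		protection_armed = 0;	/* outermost enable opens the window */
}

static void dev_access_disable(void)
{
	if (--dev_page_access_ref == 0)
		protection_armed = 1;	/* outermost disable re-arms it */
}
```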
On Fri, Jul 17, 2020 at 12:20:51AM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> Device managed memory exposes itself to the kernel direct map which
> allows stray pointers to access these device memories.
>
> Stray pointers to normal memory may result in a crash or other
> undesirable
On Fri, Jul 17, 2020 at 12:20:52AM -0700, ira.we...@intel.com wrote:
> @@ -31,6 +32,20 @@ static inline void invalidate_kernel_vmap_range(void
> *vaddr, int size)
>
> #include
>
> +static inline void enable_access(struct page *page)
> +{
> + if (!page_is_access_protected(page))
> +
On Fri, Jul 17, 2020 at 12:20:53AM -0700, ira.we...@intel.com wrote:
> --- a/drivers/dax/super.c
> +++ b/drivers/dax/super.c
> @@ -30,12 +30,14 @@ static DEFINE_SPINLOCK(dax_host_lock);
>
> int dax_read_lock(void)
> {
> + dev_access_enable();
> return srcu_read_lock(&dax_srcu);
> }
On Fri, Jul 17, 2020 at 12:20:55AM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> Protecting against stray writes is particularly important for PMEM
> because, unlike writes to anonymous memory, writes to PMEM persist
> across a reboot. Thus data corruption could result in permanent lo
On Fri, Jul 17, 2020 at 12:20:56AM -0700, ira.we...@intel.com wrote:
> +static void noinstr idt_save_pkrs(idtentry_state_t state)
noinstr goes in the same place you would normally put inline: before
the return type, not after it.
___
Linux-nvdimm
On Fri, Jul 17, 2020 at 12:20:56AM -0700, ira.we...@intel.com wrote:
> +/* Define as macros to prevent conflict of inline and noinstr */
> +#define idt_save_pkrs(state)
> +#define idt_restore_pkrs(state)
Use __always_inline
On Fri, Jul 17, 2020 at 12:20:56AM -0700, ira.we...@intel.com wrote:
> First I'm not sure if adding this state to idtentry_state and having
> that state copied is the right way to go. It seems like we should start
> passing this by reference instead of value. But for now this works as
> an RFC.
On Fri, Jul 17, 2020 at 01:52:55PM -0700, Ira Weiny wrote:
> On Fri, Jul 17, 2020 at 10:54:42AM +0200, Peter Zijlstra wrote:
> > Then we at least have a little clue wtf the thing does.. Yes I started
> > with a rename and then got annoyed at the implementation too.
>
> On
On Fri, Jul 17, 2020 at 03:36:12PM -0700, Dave Hansen wrote:
> On 7/17/20 1:54 AM, Peter Zijlstra wrote:
> > This is unbelievable junk...
>
> Ouch!
>
> This is from the original user pkeys implementation.
The thing I fell over most was new in this patch; the naming of that
On Fri, Jul 17, 2020 at 03:34:07PM -0700, Ira Weiny wrote:
> On Fri, Jul 17, 2020 at 10:59:54AM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 17, 2020 at 12:20:43AM -0700, ira.we...@intel.com wrote:
> > > +/*
> > > + * Write the PKey Register Supervisor. This
On Fri, Jul 17, 2020 at 10:06:50PM -0700, Ira Weiny wrote:
> On Fri, Jul 17, 2020 at 11:10:53AM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 17, 2020 at 12:20:51AM -0700, ira.we...@intel.com wrote:
> > > +static pgprot_t dev_protection_enable_get(struct dev_pagemap *pgmap,
>
On Sat, Jul 18, 2020 at 09:13:19PM -0700, Ira Weiny wrote:
> On Fri, Jul 17, 2020 at 11:21:39AM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 17, 2020 at 12:20:52AM -0700, ira.we...@intel.com wrote:
> > > @@ -31,6 +32,20 @@ static inline void invalidate_kernel_vmap_range(void
On Tue, Jul 21, 2020 at 11:01:34AM -0700, Ira Weiny wrote:
> On Fri, Jul 17, 2020 at 11:30:41AM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 17, 2020 at 12:20:56AM -0700, ira.we...@intel.com wrote:
> > > +static void noinstr idt_save_pkrs(idtentry_state_t state)
> >
>
On Tue, Jul 21, 2020 at 10:27:09PM -0700, Ira Weiny wrote:
> I've been really digging into this today and I'm very concerned that I'm
> completely missing something WRT idtentry_enter() and idtentry_exit().
>
> I've instrumented idt_{save,restore}_pkrs(), and __dev_access_{en,dis}able()
> with tr
On Fri, Mar 09, 2018 at 10:55:32PM -0800, Dan Williams wrote:
> Add a generic facility for awaiting an atomic_t to reach a value of 1.
>
> Page reference counts typically need to reach 0 to be considered a
> free / inactive page. However, ZONE_DEVICE pages allocated via
> devm_memremap_pages() are
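Stripped of waitqueues, the contract being proposed is just "block until an atomic counter reaches a target value" (1 for these ZONE_DEVICE pages rather than the usual 0). A sketch of that contract (the kernel version sleeps via wait_var_event() rather than spinning; spinning keeps the sketch small):

```c
#include <assert.h>
#include <stdatomic.h>

/* Wait until *v reaches @target. In-kernel this would be
 * wait_var_event(v, atomic_read(v) == target), which sleeps. */
static void atomic_wait_for(atomic_int *v, int target)
{
	while (atomic_load(v) != target)
		;	/* busy-wait: illustration only */
}
```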
On Sun, Mar 11, 2018 at 10:15:55AM -0700, Dan Williams wrote:
> On Sun, Mar 11, 2018 at 4:27 AM, Peter Zijlstra wrote:
> > On Fri, Mar 09, 2018 at 10:55:32PM -0800, Dan Williams wrote:
> >> Add a generic facility for awaiting an atomic_t to reach a value of 1.
> >>
On Thu, Mar 15, 2018 at 09:58:42AM +, David Howells wrote:
> Peter Zijlstra wrote:
>
> > > > Argh, no no no.. That whole wait_for_atomic_t thing is a giant
> > > > trainwreck already and now you're making it worse still.
>
> Your patch description
On Thu, Mar 15, 2018 at 02:45:20PM +, David Howells wrote:
> Peter Zijlstra wrote:
>
> > Does the below address things sufficiently clear?
>
> Yep.
Thanks!
> > +wait_queue_head_t *__var_waitqueue(void *p)
> > +{
> > + if (BITS_PER_LONG == 6
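__var_waitqueue() hashes an arbitrary kernel address into a small shared table of waitqueues, so waiting on a variable needs no storage in the object itself. A userspace sketch of the idea (table size and the multiplicative hash are stand-ins, not the kernel's hash_ptr(); colliding addresses merely share spurious wakeups):

```c
#include <assert.h>
#include <stdint.h>

#define WAIT_TABLE_BITS 8
#define WAIT_TABLE_SIZE (1 << WAIT_TABLE_BITS)

struct wait_queue_head { int unused; };
static struct wait_queue_head bit_wait_table[WAIT_TABLE_SIZE];

/* Fibonacci hashing on the address: the top WAIT_TABLE_BITS bits of
 * the product pick one of the shared queues. */
static struct wait_queue_head *var_waitqueue(void *p)
{
	uint64_t h = (uint64_t)(uintptr_t)p * 0x9E3779B97F4A7C15ULL;

	return bit_wait_table + (h >> (64 - WAIT_TABLE_BITS));
}
```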
On Thu, Sep 24, 2020 at 04:29:03PM +0300, Mike Rapoport wrote:
> From: Mike Rapoport
>
> Removing a PAGE_SIZE page from the direct map every time such page is
> allocated for a secret memory mapping will cause severe fragmentation of
> the direct map. This fragmentation can be reduced by using PM
On Fri, Sep 25, 2020 at 11:00:30AM +0200, David Hildenbrand wrote:
> On 25.09.20 09:41, Peter Zijlstra wrote:
> > On Thu, Sep 24, 2020 at 04:29:03PM +0300, Mike Rapoport wrote:
> >> From: Mike Rapoport
> >>
> >> Removing a PAGE_SIZE page from the direct map ev
On Tue, Sep 29, 2020 at 04:05:29PM +0300, Mike Rapoport wrote:
> On Fri, Sep 25, 2020 at 09:41:25AM +0200, Peter Zijlstra wrote:
> > On Thu, Sep 24, 2020 at 04:29:03PM +0300, Mike Rapoport wrote:
> > > From: Mike Rapoport
> > >
> > > Removing a PAGE_SIZE page
On Tue, Sep 29, 2020 at 05:58:13PM +0300, Mike Rapoport wrote:
> On Tue, Sep 29, 2020 at 04:12:16PM +0200, Peter Zijlstra wrote:
> > It will drop them down to 4k pages. Given enough inodes, and allocating
> > only a single sekrit page per pmd, we'll shatter the directm
On Wed, Sep 30, 2020 at 01:20:31PM +0300, Mike Rapoport wrote:
> I tried to dig the regression report in the mailing list, and the best I
> could find is
>
> https://lore.kernel.org/lkml/20190823052335.572133-1-songliubrav...@fb.com/
>
> which does not mention the actual performance regression b
On Fri, Oct 09, 2020 at 12:42:51PM -0700, ira.we...@intel.com wrote:
> From: Fenghua Yu
>
> Define a helper, update_pkey_val(), which will be used to support both
> Protection Key User (PKU) and the new Protection Key for Supervisor
> (PKS) in subsequent patches.
>
> Co-developed-by: Ira Weiny
On Fri, Oct 09, 2020 at 12:42:53PM -0700, ira.we...@intel.com wrote:
> @@ -644,6 +663,8 @@ void __switch_to_xtra(struct task_struct *prev_p, struct
> task_struct *next_p)
>
> if ((tifp ^ tifn) & _TIF_SLD)
> switch_to_sld(tifn);
> +
> + pks_sched_in();
> }
>
You seem
On Fri, Oct 09, 2020 at 12:42:54PM -0700, ira.we...@intel.com wrote:
> +static inline void pks_update_protection(int pkey, unsigned long protection)
> +{
> + current->thread.saved_pkrs = update_pkey_val(current->thread.saved_pkrs,
> + pkey, prote
On Tue, Oct 13, 2020 at 11:31:45AM -0700, Dave Hansen wrote:
> > +/**
> > + * It should also be noted that the underlying WRMSR(MSR_IA32_PKRS) is not
> > + * serializing but still maintains ordering properties similar to WRPKRU.
> > + * The current SDM section on PKRS needs updating but should be t
On Fri, Oct 09, 2020 at 12:42:55PM -0700, ira.we...@intel.com wrote:
> -noinstr bool idtentry_enter_nmi(struct pt_regs *regs)
> +noinstr void idtentry_enter_nmi(struct pt_regs *regs, irqentry_state_t
> *irq_state)
> {
> - bool irq_state = lockdep_hardirqs_enabled();
> + irq_state->exit_rc
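The hunk changes idtentry_enter_nmi() from returning a bool to filling a caller-supplied state struct, which is what lets later patches carry extra saved state (such as a PKRS value) without churning every call site again. An illustrative userspace model (field and stub names are guesses from the truncated quote):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical shape of the state struct after the refactor. */
typedef struct {
	bool exit_rcu;
	unsigned int pkrs;	/* hypothetical later addition */
} irqentry_state_t;

/* Stub so the sketch is self-contained. */
static bool lockdep_hardirqs_enabled(void) { return false; }

/* Old form returned one bool; the new form fills the struct, so more
 * saved state can ride along behind the same interface. */
static void idtentry_enter_nmi(irqentry_state_t *irq_state)
{
	irq_state->exit_rcu = lockdep_hardirqs_enabled();
	irq_state->pkrs = 0;
}
```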
On Fri, Oct 16, 2020 at 08:32:03PM -0700, Ira Weiny wrote:
> On Fri, Oct 16, 2020 at 12:57:43PM +0200, Peter Zijlstra wrote:
> > On Fri, Oct 09, 2020 at 12:42:51PM -0700, ira.we...@intel.com wrote:
> > > From: Fenghua Yu
> > >
> > > Define a helper, upda
On Fri, Oct 16, 2020 at 10:14:10PM -0700, Ira Weiny wrote:
> > so it either needs to
> > explicitly do so, or have an assertion that preemption is indeed
> > disabled.
>
> However, I don't think I understand clearly. Doesn't [get|put]_cpu_ptr()
> handle the preempt_disable() for us?
It does.
>
On Wed, Dec 02, 2020 at 10:28:12PM -0800, Dan Williams wrote:
> pmd_free() is close, but it is a messy fit due to requiring an @mm arg.
Hurpm, only parisc and s390 actually use that argument. And s390
_really_ needs it, because they're doing runtime folding per mm.
On Mon, Dec 07, 2020 at 04:54:21PM -0800, Dan Williams wrote:
> [ add perf maintainers ]
>
> On Sun, Nov 8, 2020 at 1:16 PM Vaibhav Jain wrote:
> >
> > Implement support for exposing generic nvdimm statistics via newly
> > introduced dimm-command ND_CMD_GET_STAT that can be handled by nvdimm
> >
On Thu, Dec 17, 2020 at 02:07:01PM +0100, Thomas Gleixner wrote:
> On Fri, Dec 11 2020 at 14:14, Andy Lutomirski wrote:
> > On Mon, Nov 23, 2020 at 10:10 PM wrote:
> > After contemplating this for a bit, I think this isn't really the
> > right approach. It *works*, but we've mostly just created a
On Wed, Nov 16, 2016 at 12:50:21AM +0300, Kirill A. Shutemov wrote:
> On Fri, Nov 04, 2016 at 05:24:57AM +0100, Jan Kara wrote:
> > Currently we have two different structures for passing fault information
> > around - struct vm_fault and struct fault_env. DAX will need more
> > information in struc
On Wed, Nov 16, 2016 at 12:01:01PM +0100, Jan Kara wrote:
> On Wed 16-11-16 11:51:32, Peter Zijlstra wrote:
> > Now, I'm entirely out of touch wrt DAX, so I've no idea what that
> > needs/wants.
>
> Yeah, DAX does not have 'struct page' for its pages s
On Mon, Aug 14, 2017 at 02:40:59PM +0200, Jan Kara wrote:
> Hum, this proposal (and the problems you are trying to deal with) seem very
> similar to Peter Zijlstra's mpin() proposal from 2014 [1], just moved to
> the DAX area (and so additionally complicated by the fact that filesystems
> now have
On Sat, Jul 09, 2016 at 08:25:54PM -0700, Dan Williams wrote:
> The pcommit instruction is being deprecated in favor of either ADR
> (asynchronous DRAM refresh: flush-on-power-fail) at the platform level, or
> posted-write-queue flush addresses as defined by the ACPI 6.x NFIT (NVDIMM
> Firmware Int
On Mon, May 13, 2019 at 10:42:42PM -0700, Brendan Higgins wrote:
> This fixes the following warning seen on GCC 7.3:
> kunit/test-test.o: warning: objtool: kunit_test_unsuccessful_try() falls
> through to next function kunit_test_catch()
>
What is that file and function? No kernel tree near me
On Tue, May 14, 2019 at 01:12:23AM -0700, Brendan Higgins wrote:
> On Tue, May 14, 2019 at 08:56:43AM +0200, Peter Zijlstra wrote:
> > On Mon, May 13, 2019 at 10:42:42PM -0700, Brendan Higgins wrote:
> > > This fixes the following warning seen on GCC 7.3:
> > >
54 matches