Dan Williams wrote:
> [ add Boris ]
[ actually add Boris ]
Boris, see below, thoughts on deprecating acpi_extlog...
> Bjorn Helgaas wrote:
> > On Mon, May 27, 2024 at 04:43:41PM +0200, Fabio M. De Francesco wrote:
> > > Currently, extlog_print() (ELOG) only reports CP
ghes_do_proc() (GHES) prints to the kernel log and calls
> > pci_print_aer() to report via the ftrace infrastructure.
> >
> > Add support to report the CPER PCIe Error section also via the ftrace
> > infrastructure by calling pci_print_aer() to make ELOG act consistently
> &g
k Architecture events may signal failing PCIe
components or links. The AER event contains details on what was
happening on the wire when the error was signaled.
>
> Cc: Dan Williams
> Signed-off-by: Fabio M. De Francesco
> ---
> drivers/acpi/acpi_extlog.c | 30 +
t() via the IOMCA (I/O Machine Check
Architecture) mechanism. Bring parity to the extlog_print() path by
including a similar trace_non_standard_event().
---
>
> Cc: Dan Williams
> Signed-off-by: Fabio M. De Francesco
> ---
> drivers/acpi/acpi_extlog.c | 6 ++
> 1 file ch
Dan Williams wrote:
> Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > Hi,
> >
> > Following the discussion about handling of CXL fixed memory windows on
> > arm64 [1] I decided to bite the bullet and move numa_memblks from x86
move these
> functions from x86 to mm/numa_memblks.c and select
> CONFIG_NUMA_KEEP_MEMINFO when CONFIG_NUMA_MEMBLKS=y for dax and cxl.
>
> Signed-off-by: Mike Rapoport (Microsoft)
> Reviewed-by: Jonathan Cameron
> Tested-by: Zi Yan # for x86_64 and arm64
Looks good to me:
Reviewed-by: Dan Williams
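The Kconfig wiring the cover letter describes might look roughly like the sketch below (option names come from the cover letter; the exact stanzas and placement are assumptions):

```
# Sketch only: dax/cxl pulling in NUMA_KEEP_MEMINFO when the
# generic numa_memblks code is built.
config NUMA_MEMBLKS
	bool

config NUMA_KEEP_MEMINFO
	bool

# In the dax (and similarly cxl) Kconfig entry:
config DEV_DAX
	tristate "Device DAX: direct access mapping device"
	select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
```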
Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> numa_cleanup_meminfo() moves blocks outside system RAM to
> numa_reserved_meminfo and it uses 0 and PFN_PHYS(max_pfn) to determine
> the memory boundaries.
>
> Replace the memory range boundaries with more portable
> memblock_start_of_
Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> Move numa_emulation codfrom arch/x86 to mm/numa_emulation.c
s/codfrom/code from/
I am surprised that numa-emulation stayed x86-only for so long. I think
it is a useful facility for debugging NUMA scaling and heterogeneous memory
topologi
Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> Instead of looping over numa_meminfo array to detect node's start and
> end addresses use get_pfn_range_for_init().
>
> This is shorter and makes it easier to lift numa_memblks to generic code.
>
> Signed-off-by: Mike Rapoport (Microso
Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> Hi,
>
> Following the discussion about handling of CXL fixed memory windows on
> arm64 [1] I decided to bite the bullet and move numa_memblks from x86 to
> the generic code so they will be available on arm64/riscv and maybe on
> loonga
Alistair Popple wrote:
>
> Dan Williams writes:
>
> > Alistair Popple wrote:
> >> FS DAX pages have always maintained their own page reference counts
> >> without following the normal rules for page reference counting. In
> >> particular pages are
Alistair Popple wrote:
> FS DAX pages have always maintained their own page reference counts
> without following the normal rules for page reference counting. In
> particular pages are considered free when the refcount hits one rather
> than zero and refcounts are not added when mapping the page.
>
Alistair Popple wrote:
> PCI P2PDMA pages are not mapped with pXX_devmap PTEs therefore the
> check in __gup_device_huge() is redundant. Remove it.
>
> Signed-off-by: Alistair Popple
> Reviewed-by: Jason Gunthorpe
> Acked-by: David Hildenbrand
Acked-by: Dan Williams
Wang, Qingshun wrote:
> Fetch and store the data of 3 more registers: "Link Status", "Device
> Control 2", and "Advanced Error Capabilities and Control". This data is
> needed for external observation to better understand ANFE.
>
> Signed-off-by: "Wang, Qingshun"
> ---
> drivers/acpi/apei/ghes.c
Dan Williams wrote:
> Bjorn Helgaas wrote:
> > On Tue, Oct 31, 2023 at 04:35:23AM +0800, kernel test robot wrote:
> > > tree/branch:
> > > https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> > > branch HEAD: c503e3eec382ac708ee7
Bjorn Helgaas wrote:
> On Tue, Oct 31, 2023 at 04:35:23AM +0800, kernel test robot wrote:
> > tree/branch:
> > https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> > branch HEAD: c503e3eec382ac708ee7adf874add37b77c5d312 Add linux-next
> > specific files for 20231030
> >
Terry Bowman wrote:
> Hi Dan,
>
> On 8/31/23 15:35, Dan Williams wrote:
> > Terry Bowman wrote:
> >> From: Robert Richter
> >>
> >> In Restricted CXL Device (RCD) mode a CXL device is exposed as an
> >> RCiEP, but CXL downstream and upstream p
Terry Bowman wrote:
> From: Robert Richter
>
> In Restricted CXL Device (RCD) mode a CXL device is exposed as an
> RCiEP, but CXL downstream and upstream ports are not enumerated and
> not visible in the PCIe hierarchy. [1] Protocol and link errors from
> these non-enumerated ports are signaled a
Terry Bowman wrote:
> From: Robert Richter
>
> In Restricted CXL Device (RCD) mode a CXL device is exposed as an
> RCiEP, but CXL downstream and upstream ports are not enumerated and
> not visible in the PCIe hierarchy. [1] Protocol and link errors from
> these non-enumerated ports are signaled a
Terry Bowman wrote:
> From: Robert Richter
>
> RCEC AER corrected and uncorrectable internal errors (CIE/UIE) are
> disabled by default. [1][2] Enable them to receive CXL downstream port
> errors of a Restricted CXL Host (RCH).
>
> [1] CXL 3.0 Spec, 12.2.1.1 - RCH Downstream Port Detected Errors
Terry Bowman wrote:
> From: Robert Richter
>
> In Restricted CXL Device (RCD) mode a CXL device is exposed as an
> RCiEP, but CXL downstream and upstream ports are not enumerated and
> not visible in the PCIe hierarchy. Protocol and link errors are sent
> to an RCEC.
>
> Restricted CXL host (RCH
'cxl'
trace system, however, it is unlikely that a single platform will ever
load both drivers simultaneously.
Cc: Steven Rostedt
Signed-off-by: Dan Williams
---
This patch is targeting v6.3. I am sending it out now to enable the
in-flight Event and Poison list patch sets to build upon. I
errors, not all may be
> logged in this way.
>
> Signed-off-by: Tony Luck
Just some minor comments below, but you can add:
Reviewed-by: Dan Williams
>
> ---
> Changes in V2:
> Naoya Horiguchi:
> 1) Use -EHWPOISON error code instead of minus one.
>
Alistair Popple wrote:
>
> Dan Williams writes:
>
> > Alistair Popple wrote:
> >>
> >> Jason Gunthorpe writes:
> >>
> >> > On Mon, Sep 26, 2022 at 04:03:06PM +1000, Alistair Popple wrote:
> >> >> Since 27674ef6c73f (&q
Alistair Popple wrote:
>
> Jason Gunthorpe writes:
>
> > On Mon, Sep 26, 2022 at 04:03:06PM +1000, Alistair Popple wrote:
> >> Since 27674ef6c73f ("mm: remove the extra ZONE_DEVICE struct page
> >> refcount") device private pages have no longer had an extra reference
> >> count when the page is
Michael Ellerman wrote:
> Sachin Sant writes:
> > Linux-next (5.19.0-rc8-next-20220728) fails to build on powerpc with
> > following error:
> >
> > ERROR: modpost: "memory_add_physaddr_to_nid" [drivers/cxl/cxl_pmem.ko]
> > undefined!
> > make[1]: *** [scripts/Makefile.modpost:128: modules-only.sy
Shivaprasad G Bhat wrote:
> With the nd_namespace_blk and nd_blk_region infrastructures being removed,
> the ndtest still has some references to the old code. So the
> compilation fails as below,
>
> ../tools/testing/nvdimm/test/ndtest.c:204:25: error:
> ‘ND_DEVICE_NAMESPACE_BLK’ undeclared here
support for nvdimm events, initially only for 'papr_scm'
devices.
- Deprecate the 'block aperture' support in libnvdimm; it only ever
existed in the specification, not in shipping product.
--------
Dan Williams (6):
On Wed, Mar 23, 2022 at 3:05 AM Michael Ellerman wrote:
>
> Dan Williams writes:
> > On Tue, Mar 22, 2022 at 7:30 AM kajoljain wrote:
> >> On 3/22/22 03:09, Dan Williams wrote:
> >> > On Fri, Mar 18, 2022 at 4:42 AM Kajol Jain wrote:
> >> >>
On Tue, Mar 22, 2022 at 7:30 AM kajoljain wrote:
>
>
>
> On 3/22/22 03:09, Dan Williams wrote:
> > On Fri, Mar 18, 2022 at 4:42 AM Kajol Jain wrote:
> >>
> >> The following build failure occurs when CONFIG_PERF_EVENTS is not set
> >> as generic p
On Fri, Mar 18, 2022 at 4:42 AM Kajol Jain wrote:
>
> The following build failure occurs when CONFIG_PERF_EVENTS is not set
> as generic pmu functions are not visible in that scenario.
>
> |-- s390-randconfig-r044-20220313
> | |-- nd_perf.c:(.text):undefined-reference-to-perf_pmu_migrate_contex
On Mon, Mar 21, 2022 at 2:39 PM Dan Williams wrote:
>
> On Fri, Mar 18, 2022 at 4:42 AM Kajol Jain wrote:
> >
> > The following build failure occurs when CONFIG_PERF_EVENTS is not set
> > as generic pmu functions are not visible in that scenario.
> >
>
On Fri, Mar 18, 2022 at 4:42 AM Kajol Jain wrote:
>
> The following build failure occurs when CONFIG_PERF_EVENTS is not set
> as generic pmu functions are not visible in that scenario.
>
> arch/powerpc/platforms/pseries/papr_scm.c:372:35: error: ‘struct perf_event’
> has no member named ‘attr’
>
On Tue, Mar 15, 2022 at 4:21 AM Michael Ellerman wrote:
>
> Stephen Rothwell writes:
> > Hi all,
> >
> > Today's linux-next merge of the nvdimm tree got a conflict in:
> >
> > arch/powerpc/platforms/pseries/papr_scm.c
> >
> > between commit:
> >
> > bbbca72352bb ("powerpc/papr_scm: Implement
On Mon, Mar 7, 2022 at 9:27 PM kajoljain wrote:
>
> Hi Dan,
> Can you take this patch-set if it looks fine to you.
>
Pushed out to my libnvdimm-pending branch for a 0day confirmation
before heading over to linux-next.
On Wed, Feb 23, 2022 at 11:07 AM Dan Williams wrote:
>
> On Fri, Feb 18, 2022 at 10:06 AM Dan Williams
> wrote:
> >
> > On Thu, Feb 17, 2022 at 8:34 AM Kajol Jain wrote:
> > >
> > > Patchset adds performance stats reporting support for nvdimm.
> &g
On Fri, Feb 18, 2022 at 10:06 AM Dan Williams wrote:
>
> On Thu, Feb 17, 2022 at 8:34 AM Kajol Jain wrote:
> >
> > Patchset adds performance stats reporting support for nvdimm.
> > Added interface includes support for pmu register/unregister
> > functions. A struct
5 -> Resend v5
> - Resend the patchset
>
> - Link to the patchset v5: https://lkml.org/lkml/2021/9/28/643
>
> v4 -> v5:
> - Remove multiple variables defined in nvdimm_pmu structure include
> name and pmu functions(event_int/add/del/read) as they are just
> used to
On Tue, Nov 2, 2021 at 5:10 PM Luis Chamberlain wrote:
>
> On Fri, Oct 15, 2021 at 05:13:48PM -0700, Dan Williams wrote:
> > On Fri, Oct 15, 2021 at 4:53 PM Luis Chamberlain wrote:
> > >
> > > If nd_integrity_init() fails we'd get del_gendisk() called,
> &
hing to unwind.
The rest looks good to me. After dropping "goto out;" you can add:
Reviewed-by: Dan Williams
vice has
not been through device_add()."
Fixes: 41cd8b70c37a ("libnvdimm, btt: add support for blk integrity")
With that you can add:
Reviewed-by: Dan Williams
On Fri, Oct 15, 2021 at 4:53 PM Luis Chamberlain wrote:
>
> If nd_integrity_init() fails we'd get del_gendisk() called,
> but that's not correct as we should only call that if we're
> done with device_add_disk(). Fix this by providing unwinding
> prior to the devm call being registered and moving
that change is less trivial it is
reserved for later.
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Andrew Donnellan
Acked-by: Frederic Barrat (v1)
Signed-off-by: Ben Widawsky
Reviewed-by: Andrew Donnellan
Signed-off-by: Dan Williams
---
arch/powerpc/platforms/powernv/ocxl.c |3 ++-
drivers/misc
ied to the Vendor ID
of the PCI component. Where the DVSEC Vendor may be a standards body
like CXL.
Cc: David E. Box
Cc: Jonathan Cameron
Cc: Bjorn Helgaas
Cc: Dan Williams
Cc: linux-...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Andrew Donnellan
Cc: Lu Baolu
Reviewed-by: Frederic Barr
pci: Split cxl_pci_setup_regs()
PCI: Add pci_find_dvsec_capability to find designated VSEC
cxl/pci: Use pci core's DVSEC functionality
ocxl: Use pci core's DVSEC functionality
Dan Williams (2):
cxl/pci: Fix NULL vs ERR_PTR confusion
cxl/pci: Add @base to cxl_registe
On Thu, Sep 23, 2021 at 10:27 AM Ben Widawsky wrote:
>
> Reduce maintenance burden of DVSEC query implementation by using the
> centralized PCI core implementation.
>
> Cc: io...@lists.linux-foundation.org
> Cc: David Woodhouse
> Cc: Lu Baolu
> Signed-off-by: Ben Widawsky
> ---
> drivers/iommu
On Thu, Sep 23, 2021 at 10:27 AM Ben Widawsky wrote:
>
> Reduce maintenance burden of DVSEC query implementation by using the
> centralized PCI core implementation.
>
> Signed-off-by: Ben Widawsky
> ---
> drivers/cxl/pci.c | 20 +---
> 1 file changed, 1 insertion(+), 19 deletions
On Thu, Sep 23, 2021 at 10:27 AM Ben Widawsky wrote:
>
> The structure exists to pass around information about register mapping.
> Using it more extensively cleans up many existing functions.
I would have liked to have seen "add @base to cxl_register_map" and
"use @map for @bar and @offset argume
On Thu, Sep 23, 2021 at 10:27 AM Ben Widawsky wrote:
>
> In preparation for moving parts of register mapping to cxl_core, the
> cxl_pci driver is refactored to utilize a new helper to find register
> blocks by type.
>
> cxl_pci scanned through all register blocks and mapping the ones that
> the dr
> only comes when the registers are mapped for their final usage, and that
> will have more precision in the request."
Looks good to me:
Reviewed-by: Dan Williams
>
> Recommended-by: Dan Williams
This isn't one of the canonical tags:
Documentation/process/submitting-p
helps reduce the LOC in a subsequent patch to refactor some
> of cxl_pci register mapping.
Looks good to me:
Reviewed-by: Dan Williams
Please spell out "register block indicator" in the subject so that the
shortlog remains somewhat readable.
On Thu, Sep 23, 2021 at 10:27 AM Ben Widawsky wrote:
>
> In preparation for passing around the Register Block Indicator (RBI) as
> a parameter, it is desirable to convert the type to an enum
On Tue, Sep 21, 2021 at 3:05 PM Ben Widawsky wrote:
>
> Reduce maintenance burden of DVSEC query implementation by using the
> centralized PCI core implementation.
>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Frederic Barrat
> Cc: Andrew Donnellan
> Signed-off-by: Ben Widawsky
> ---
> drivers/m
On Tue, Sep 14, 2021 at 9:08 PM Dan Williams wrote:
>
> On Thu, Sep 9, 2021 at 12:56 AM kajoljain wrote:
> >
> >
> >
> > On 9/8/21 3:29 AM, Dan Williams wrote:
> > > Hi Kajol,
> > >
> > > Apologies for the delay in responding to this serie
On Thu, Sep 9, 2021 at 12:56 AM kajoljain wrote:
>
>
>
> On 9/8/21 3:29 AM, Dan Williams wrote:
> > Hi Kajol,
> >
> > Apologies for the delay in responding to this series, some comments below:
>
> Hi Dan,
> No issues, thanks for reviewing the patches.
&g
On Thu, Sep 2, 2021 at 10:11 PM Kajol Jain wrote:
>
> Details is added for the event, cpumask and format attributes
> in the ABI documentation.
>
> Acked-by: Peter Zijlstra (Intel)
> Reviewed-by: Madhavan Srinivasan
> Tested-by: Nageswara R Sastry
> Signed-off-by: Kajol Jain
> ---
> Documenta
Hi Kajol,
Apologies for the delay in responding to this series, some comments below:
On Thu, Sep 2, 2021 at 10:10 PM Kajol Jain wrote:
>
> A structure is added, called nvdimm_pmu, for performance
> stats reporting support of nvdimm devices. It can be used to add
> nvdimm pmu data such as support
On Tue, Aug 31, 2021 at 6:53 AM Paul Moore wrote:
>
> On Tue, Aug 31, 2021 at 5:09 AM Ondrej Mosnacek wrote:
> > On Sat, Jun 19, 2021 at 12:18 AM Dan Williams
> > wrote:
> > > On Wed, Jun 16, 2021 at 1:51 AM Ondrej Mosnacek
> > > wrote:
>
> ...
> implemented buses return an ignored error code and so don't anticipate
> wrong expectations for driver authors.
>
> drivers/cxl/core.c| 3 +--
> drivers/dax/bus.c | 4 +---
> drivers/nvdimm/bus.c | 3 +--
For CXL, DAX, and NVDIMM
Acked-by: Dan Williams
rences:
> [1] https://pmem.io/documents/Dirty_Shutdown_Handling-V1.0.pdf
>
> Signed-off-by: Vaibhav Jain
> Reviewed-by: Aneesh Kumar K.V
Belated:
Acked-by: Dan Williams
It's looking like CXL will add one of these as well. Might be time to
add a unified location when that happens and deprecate these
bus-specific locations.
x: implement SELinux lockdown")
> Signed-off-by: Ondrej Mosnacek
[..]
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index 2acc6173da36..c1747b6555c7 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -568,7 +568,7 @@ static bool cxl_mem_raw_command_allowed(u16 opcode
son
Cc: Jens Axboe
Signed-off-by: Dan Williams
---
Changes in v2: Improve the changelog.
drivers/nvdimm/pmem.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 31f3c4bd6f72..fc6b78dd2d24 100644
--- a/drivers/nvdimm
The queue_to_disk() helper cannot be used after del_gendisk()
communicate @disk via the pgmap->owner.
Reported-by: Sachin Sant
Fixes: 87eb73b2ca7c ("nvdimm-pmem: convert to blk_alloc_disk/blk_cleanup_disk")
Cc: Christoph Hellwig
Cc: Ulf Hansson
Cc: Jens Axboe
Signed-off-by:
[ add Sachin who reported this commit in -next ]
On Thu, May 20, 2021 at 10:52 PM Christoph Hellwig wrote:
>
> Convert the nvdimm-pmem driver to use the blk_alloc_disk and
> blk_cleanup_disk helpers to simplify gendisk and request_queue
> allocation.
>
> Signed-off-by: Christoph Hellwig
> ---
>
;
> int rc;
>
> @@ -1179,6 +1184,14 @@ static int papr_scm_probe(struct platform_device *pdev)
> p->res.name = pdev->name;
> p->res.flags = IORESOURCE_MEM;
>
> + /* Try retriving the stat buffer and see if its supported */
s/retriving/retrieving/
> + stat_size = drc_pmem_query_stats(p, NULL, 0);
> + if (stat_size > 0) {
> + p->stat_buffer_len = stat_size;
> + dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
> + p->stat_buffer_len);
> + }
> +
> rc = papr_scm_nvdimm_init(p);
> if (rc)
> goto err2;
> --
> 2.31.1
>
After the minor fixups above you can add:
Reviewed-by: Dan Williams
...I assume this will go through the PPC tree.
On Thu, Apr 15, 2021 at 4:44 AM Vaibhav Jain wrote:
>
> Thanks for looking into this Dan,
>
> Dan Williams writes:
>
> > On Wed, Apr 14, 2021 at 5:40 AM Vaibhav Jain wrote:
> >>
> >> Currently drc_pmem_query_stats() generates a dev_err in case
> >
On Wed, Apr 14, 2021 at 5:40 AM Vaibhav Jain wrote:
>
> Currently drc_pmem_query_stats() generates a dev_err in case
> "Enable Performance Information Collection" feature is disabled from
> HMC. The error is of the form below:
>
> papr_scm ibm,persistent-memory:ibm,pmemory@44104001: Failed to quer
olled
> disable of THP and prevent a huge fault if the hardware lacks hugepage
> support.
Looks good to me.
Reviewed-by: Dan Williams
I assume this will go through Andrew.
[ add perf maintainers ]
On Sun, Nov 8, 2020 at 1:16 PM Vaibhav Jain wrote:
>
> Implement support for exposing generic nvdimm statistics via newly
> introduced dimm-command ND_CMD_GET_STAT that can be handled by nvdimm
> command handler function and provide values for these statistics back
> to l
include of linux/mmzone.h
is not sufficient.
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Andrew Morton
Reported-by: kernel test robot
Signed-off-by: Dan Williams
---
arch/powerpc/include/asm/mmzone.h |7 +--
arch/powerpc/mm/mem.c |1 +
2 fil
On Mon, Sep 21, 2020 at 11:35 AM Nick Desaulniers
wrote:
>
> Hello DAX maintainers,
> I noticed our PPC64LE builds failing last night:
> https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/jobs/388047043
> https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/jobs/388
On Tue, Jul 7, 2020 at 11:07 AM Randy Dunlap wrote:
>
> Drop the doubled word "have".
>
> Signed-off-by: Randy Dunlap
> Cc: Jonathan Corbet
> Cc: linux-...@vger.kernel.org
> Cc: Dan Williams
> ---
> Documentation/maintainer/maintainer-entry-profile.rst |
| 5 +-
> drivers/nvdimm/pmem.c | 5 +-
For drivers/nvdimm
Acked-by: Dan Williams
On Tue, Jun 30, 2020 at 8:09 PM Aneesh Kumar K.V
wrote:
>
> On 7/1/20 1:15 AM, Dan Williams wrote:
> > On Tue, Jun 30, 2020 at 2:21 AM Aneesh Kumar K.V
> > wrote:
> > [..]
> >>>> The bio argument isn't for range based flushing, it is for f
On Tue, Jun 30, 2020 at 2:21 AM Aneesh Kumar K.V
wrote:
[..]
> >> The bio argument isn't for range based flushing, it is for flush
> >> operations that need to complete asynchronously.
> > How does the block layer determine that the pmem device needs
> > asynchronous flushing?
> >
>
> set_b
turally visible for
> the platform buffer flush.
Looks good, after a few minor fixups below you can add:
Reviewed-by: Dan Williams
I'm expecting that these will be merged through the powerpc tree since
they mostly impact powerpc with only minor touches to libnvdimm.
> Si
On Mon, Jun 29, 2020 at 10:05 PM Aneesh Kumar K.V
wrote:
>
> Dan Williams writes:
>
> > On Mon, Jun 29, 2020 at 6:58 AM Aneesh Kumar K.V
> > wrote:
> >>
> >> of_pmem on POWER10 can now use phwsync instead of hwsync to ensure
> >> all previous wri
On Mon, Jun 29, 2020 at 10:02 PM Aneesh Kumar K.V
wrote:
>
> Dan Williams writes:
>
> > On Mon, Jun 29, 2020 at 1:29 PM Aneesh Kumar K.V
> > wrote:
> >>
> >> Architectures like ppc64 provide persistent memory specific barriers
> >> that will en
On Mon, Jun 29, 2020 at 6:58 AM Aneesh Kumar K.V
wrote:
>
> We only support persistent memory on P8 and above. This is enforced by the
> firmware and further checked on virtualized platforms during platform init.
> Add WARN_ONCE in pmem flush routines to catch the wrong usage of these.
>
> Signed-o
On Mon, Jun 29, 2020 at 1:41 PM Aneesh Kumar K.V
wrote:
>
> Michal Suchánek writes:
>
> > Hello,
> >
> > On Mon, Jun 29, 2020 at 07:27:20PM +0530, Aneesh Kumar K.V wrote:
> >> nvdimm expects the flush routines to just mark the cache clean. The barrier
> >> that marks the store globally visible is d
On Mon, Jun 29, 2020 at 6:58 AM Aneesh Kumar K.V
wrote:
>
> of_pmem on POWER10 can now use phwsync instead of hwsync to ensure
> all previous writes are architecturally visible for the platform
> buffer flush.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/cacheflush.h | 7
On Mon, Jun 29, 2020 at 1:29 PM Aneesh Kumar K.V
wrote:
>
> Architectures like ppc64 provide persistent memory specific barriers
> that will ensure that all stores for which the modifications are
> written to persistent storage by preceding dcbfps and dcbstps
> instructions have updated persistent
On Mon, Jun 15, 2020 at 5:56 AM Borislav Petkov wrote:
>
> On Mon, Jun 15, 2020 at 06:14:03PM +0530, Vaibhav Jain wrote:
> > 'seq_buf' provides a very useful abstraction for writing to a string
> > buffer without needing to worry about it over-flowing. However even
> > though the API has been stab
On Wed, Jun 10, 2020 at 5:10 AM Vaibhav Jain wrote:
>
> Dan Williams writes:
>
> > On Tue, Jun 9, 2020 at 10:54 AM Vaibhav Jain wrote:
> >>
> >> Thanks Dan for the consideration and taking time to look into this.
> >>
> >> My responses below:
On Tue, Jun 9, 2020 at 10:54 AM Vaibhav Jain wrote:
>
> Thanks Dan for the consideration and taking time to look into this.
>
> My responses below:
>
> Dan Williams writes:
>
> > On Mon, Jun 8, 2020 at 5:16 PM kernel test robot wrote:
> >>
> >>
On Mon, Jun 8, 2020 at 5:16 PM kernel test robot wrote:
>
> Hi Vaibhav,
>
> Thank you for the patch! Perhaps something to improve:
>
> [auto build test WARNING on powerpc/next]
> [also build test WARNING on linus/master v5.7 next-20200605]
> [cannot apply to linux-nvdimm/libnvdimm-for-next scottwo
> > papr_scm_ndctl() in case a PDSM request is received via ND_CMD_CALL
> > command from libnvdimm.
> >
> > Cc: "Aneesh Kumar K . V"
> > Cc: Dan Williams
> > Cc: Michael Ellerman
> > Cc: Ira Weiny
> > Signed-off-by: Vaibhav Jain
> > -
> > 'return' statement thereby ensuring that the value of
> > 'cmd_rc' is always logged when papr_scm_ndctl() returns.
> >
> > Cc: "Aneesh Kumar K . V"
> > Cc: Dan Williams
> > Cc: Michael Ellerman
> > Cc: Ira Weiny
> > Signed-off-by: Vaibh
On Fri, Jun 5, 2020 at 8:22 AM Vaibhav Jain wrote:
[..]
> > Oh, why not define a maximal health payload with all the attributes
> > you know about today, leave some room for future expansion, and then
> > report a validity flag for each attribute? This is how the "intel"
> > smart-health payload w
On Sat, May 30, 2020 at 12:18 AM Aneesh Kumar K.V
wrote:
>
> On 5/30/20 12:52 AM, Dan Williams wrote:
> > On Fri, May 29, 2020 at 3:55 AM Aneesh Kumar K.V
> > wrote:
> >>
> >> On 5/29/20 3:22 PM, Jan Kara wrote:
> >>> Hi!
> >>
On Fri, May 29, 2020 at 3:55 AM Aneesh Kumar K.V
wrote:
>
> On 5/29/20 3:22 PM, Jan Kara wrote:
> > Hi!
> >
> > On Fri 29-05-20 15:07:31, Aneesh Kumar K.V wrote:
> >> Thanks Michal. I also missed Jeff in this email thread.
> >
> > And I think you'll also need some of the sched maintainers for the
rtunate that
we already have 2 ways to describe persistent memory devices, let's
not perpetuate a third so that "grep" has a chance to find
interrelated code across architectures. Other than that this looks
good to me.
> Cc: "Aneesh Kumar K . V"
> Cc: Dan Willi
On Thu, May 21, 2020 at 7:39 AM Jeff Moyer wrote:
>
> Dan Williams writes:
>
> >> But I agree with your concern that if we have older kernel/applications
> >> that continue to use `dcbf` on future hardware we will end up
> >> having issues w.r.t powerfa
On Thu, May 21, 2020 at 10:03 AM Aneesh Kumar K.V
wrote:
>
> On 5/21/20 8:08 PM, Jeff Moyer wrote:
> > Dan Williams writes:
> >
> >>> But I agree with your concern that if we have older kernel/applications
> >>> that continue to use `dcbf` on future h
On Tue, May 19, 2020 at 6:53 AM Aneesh Kumar K.V
wrote:
>
> Dan Williams writes:
>
> > On Mon, May 18, 2020 at 10:30 PM Aneesh Kumar K.V
> > wrote:
>
> ...
>
> >> Applications using new instructions will behave as expected when running
> >> on P8
On Mon, May 18, 2020 at 10:30 PM Aneesh Kumar K.V
wrote:
>
>
> Hi Dan,
>
> Apologies for the delay in response. I was waiting for feedback from
> hardware team before responding to this email.
>
>
> Dan Williams writes:
>
> > On Tue, May 12, 2020 at
On Tue, May 12, 2020 at 1:08 AM Christoph Hellwig wrote:
>
> On Sat, May 09, 2020 at 08:07:14AM -0700, Dan Williams wrote:
> > > which are all used in the I/O submission path (generic_make_request /
> > > generic_make_request_checks). This is mostly a prep cleanup patch
On Tue, May 12, 2020 at 8:47 PM Aneesh Kumar K.V
wrote:
>
> Architectures like ppc64 provide persistent memory specific barriers
> that will ensure that all stores for which the modifications are
> written to persistent storage by preceding dcbfps and dcbstps
> instructions have updated persistent
On Sat, May 9, 2020 at 1:24 AM Christoph Hellwig wrote:
>
> On Fri, May 08, 2020 at 11:04:45AM -0700, Dan Williams wrote:
> > On Fri, May 8, 2020 at 9:16 AM Christoph Hellwig wrote:
> > >
> > > Hi all,
> > >
> > > various bio based drivers use queu
On Fri, May 8, 2020 at 9:16 AM Christoph Hellwig wrote:
>
> Hi all,
>
> various bio based drivers use queue->queuedata despite already having
> set up disk->private_data, which can be used just as easily. This
> series cleans them up to only use a single private data pointer.
...but isn't the qu