Hi, Dan,

On 9/14/2021 9:44 PM, Dan Williams wrote:
On Tue, Sep 14, 2021 at 4:32 PM Jane Chu <jane....@oracle.com> wrote:

If pwrite(2) encounters poison in a pmem range, it fails with EIO.
This is unnecessary if the hardware is capable of clearing the poison.

Not all dax backend hardware is capable of clearing poison on the
fly, but dax backed by Intel DCPMEM does have that capability, and
it's desirable to use it: first, to speed up repair; second, to
maintain backend continuity instead of fragmenting the range in
search of clean blocks.

Jane Chu (3):
   dax: introduce dax_operation dax_clear_poison

The problem with new dax operations is that they need to be plumbed
not only through fsdax and pmem, but also through device-mapper.

In this case I think we're already covered by dax_zero_page_range().
That will ultimately trigger pmem_clear_poison() and it is routed
through device-mapper properly.

Can you clarify why the existing dax_zero_page_range() is not sufficient?
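The routing described above can be mocked up in miniature to see its shape. This is a hedged userspace sketch, not the kernel code: all the struct names, fields, and handlers below are simplified stand-ins for the real dax_operations / pmem_clear_poison() machinery.

```c
/* Userspace mock of the routing: dax_zero_page_range() dispatches
 * through a dax_operations table, so a device-mapper dax device can
 * forward the call to the underlying pmem device, whose handler
 * clears poison as a side effect of zeroing.  All types here are
 * simplified stand-ins, not the kernel definitions. */
#include <stddef.h>

struct dax_device;

struct dax_operations {
    /* zero nr_pages starting at pgoff; returns 0 or -errno */
    int (*zero_page_range)(struct dax_device *dax_dev,
                           unsigned long pgoff, size_t nr_pages);
};

struct dax_device {
    const struct dax_operations *ops;
    struct dax_device *passthrough; /* set for the DM layer */
    int poison_cleared;             /* observable side effect for the mock */
};

/* pmem-level handler: zeroing media also clears any poison in range */
static int pmem_mock_zero_page_range(struct dax_device *dax_dev,
                                     unsigned long pgoff, size_t nr_pages)
{
    (void)pgoff; (void)nr_pages;
    dax_dev->poison_cleared = 1;  /* stands in for pmem_clear_poison() */
    return 0;
}

/* DM-level handler: route to the underlying device's operation */
static int dm_mock_zero_page_range(struct dax_device *dax_dev,
                                   unsigned long pgoff, size_t nr_pages)
{
    struct dax_device *lower = dax_dev->passthrough;
    return lower->ops->zero_page_range(lower, pgoff, nr_pages);
}

/* entry point analogous to the kernel's dax_zero_page_range() */
int dax_zero_page_range_mock(struct dax_device *dax_dev,
                             unsigned long pgoff, size_t nr_pages)
{
    return dax_dev->ops->zero_page_range(dax_dev, pgoff, nr_pages);
}

static const struct dax_operations pmem_mock_ops = {
    .zero_page_range = pmem_mock_zero_page_range,
};
static const struct dax_operations dm_mock_ops = {
    .zero_page_range = dm_mock_zero_page_range,
};
```

The point of the mock is that the DM layer needs no poison-specific code: it only forwards the existing op, and the poison clearing stays contained in the bottom (pmem) layer.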

fallocate ZERO_RANGE is in itself a functionality that, applied to dax,
should zero out the media range.  So one may argue it is part of the
block operations, not something explicitly aimed at clearing poison.
I'm also thinking about the MOVDIR64B instruction and how it might be
used to clear poison on the fly with a single store.  Of course, that
means we need to figure out how to narrow down the error blast radius
first.
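From userspace, the ZERO_RANGE path mentioned above is reached via fallocate(2) with FALLOC_FL_ZERO_RANGE; on an fsdax mount that is what ultimately funnels into dax_zero_page_range(). A small hedged demo, runnable on an ordinary file (the helper name and the pwrite fallback for filesystems lacking ZERO_RANGE are this sketch's own, not from the patch set):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <linux/falloc.h>
#include <string.h>
#include <unistd.h>

/* Zero out [offset, offset+len) of fd.  Prefer FALLOC_FL_ZERO_RANGE,
 * which on an fsdax file is the request that reaches
 * dax_zero_page_range(); fall back to pwrite(2) of zeros on
 * filesystems that don't support ZERO_RANGE. */
int zero_file_range(int fd, off_t offset, off_t len)
{
    if (fallocate(fd, FALLOC_FL_ZERO_RANGE, offset, len) == 0)
        return 0;
    if (errno != EOPNOTSUPP && errno != ENOSYS)
        return -1;

    /* fallback: write explicit zeros in page-sized chunks */
    char buf[4096];
    memset(buf, 0, sizeof(buf));
    while (len > 0) {
        size_t chunk = len < (off_t)sizeof(buf) ? (size_t)len : sizeof(buf);
        ssize_t n = pwrite(fd, buf, chunk, offset);
        if (n < 0)
            return -1;
        offset += n;
        len -= n;
    }
    return 0;
}
```

On real poisoned pmem the fallback leg would of course hit the same EIO this series is about; it is only there so the demo works on any filesystem.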

With respect to plumbing through device-mapper, I thought about that
but wasn't sure.  The clear-poison work will eventually fall on the
pmem driver, so how does the call play out through the DM layers?
BTW, our customer doesn't care about creating a dax volume thru DM.


   dax: introduce dax_clear_poison to dax pwrite operation
   libnvdimm/pmem: Provide pmem_dax_clear_poison for dax operation

  drivers/dax/super.c   | 13 +++++++++++++
  drivers/nvdimm/pmem.c | 17 +++++++++++++++++
  fs/dax.c              |  9 +++++++++
  include/linux/dax.h   |  6 ++++++
  4 files changed, 45 insertions(+)

