The nvdimm_flush() mechanism helps to reduce the impact of an ADR
(asynchronous-dimm-refresh) failure. The ADR mechanism handles flushing
platform WPQ (write-pending-queue) buffers when power is removed. The
nvdimm_flush() mechanism performs that same function on-demand.
When a pmem namespace is [...]
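As a rough userspace illustration of that on-demand path, here is a minimal
sketch; the "deep_flush" sysfs attribute name and the region0 path are
assumptions based on the current libnvdimm sysfs layout, not something stated
in the excerpt above:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* sketch only: region0 is a placeholder for the region backing
         * the namespace of interest */
        const char *path = "/sys/bus/nd/devices/region0/deep_flush";
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open deep_flush");
                return 1;
        }
        /* writing '1' asks the kernel to flush the WPQ buffers now */
        if (write(fd, "1", 1) != 1) {
                perror("write");
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}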
On Thu, 2017-04-13 at 13:31 +0200, Borislav Petkov wrote:
> On Thu, Apr 13, 2017 at 12:29:25AM +0200, Borislav Petkov wrote:
> > On Wed, Apr 12, 2017 at 03:26:19PM -0700, Luck, Tony wrote:
> > > We can futz with that and have them specify which chain (or both)
> > > they want to be added to.
On Fri, Apr 21, 2017 at 01:27:41PM -0700, Luck, Tony wrote:
> Boris: you coded up a "static bool memory_error(struct mce *m)"
> function inside the patches for the corrected error thingy.
>
> Perhaps when it goes upstream it should be available for other
> users too?
I don't see why not. struct [...]
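For reference, a simplified sketch of the kind of helper under discussion (the
real one in mce.c is vendor-aware and differs in detail; the mask below is the
one quoted later in this thread):

#include <linux/types.h>
#include <linux/bitops.h>
#include <asm/mce.h>

/* Sketch only: per the Intel SDM compound error codes, a memory-
 * controller error has MCACOD bit 7 set while bits 8-11 and 13-15 are
 * clear (bit 12 is the "filter" bit and is left out of the mask). */
static bool memory_error(struct mce *m)
{
        return (m->status & 0xef80) == BIT(7);
}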
On Fri, Apr 21, 2017 at 02:35:51PM -0600, Vishal Verma wrote:
> On 04/21, Luck, Tony wrote:
> > Needs extra parentheses to make it right. Vishal, sorry I led you astray.
> >
> > if (!((mce->status & 0xef80) == BIT(7)))
>
> Is this still right though? Anything AND'ed with 0xef80 will never [...]
On 04/21, Luck, Tony wrote:
> >> > + if (!(mce->status & 0xef80) == BIT(7))
> >>
> >> Can we get a define for this, or a comment explaining all the magic
> >> that's happening on that one line?
> >
> > Yes - also like lkp pointed out, the check isn't correct at all. Let me
> > figure out [...]
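To make the precedence problem concrete, here is a small standalone sketch
(not from the patch; the status values are made up): in C, `!` binds tighter
than `==`, so the unparenthesized test compares 0-or-1 against 0x80 and can
never be true, which matches the "check isn't correct at all" observation
above. Recent compilers also warn about this pattern
(-Wlogical-not-parentheses).

#include <stdint.h>
#include <stdio.h>

#define BIT(n)  (1ULL << (n))

/* original form: parsed as ((!(status & 0xef80)) == BIT(7)), i.e. 0 or 1
 * compared against 0x80 -- false for every possible status */
static int skip_buggy(uint64_t status)
{
        return !(status & 0xef80) == BIT(7);
}

/* with the extra parentheses suggested in the thread: skip the event
 * unless the masked MCACOD bits say "memory-controller error" */
static int skip_fixed(uint64_t status)
{
        return !((status & 0xef80) == BIT(7));
}

int main(void)
{
        /* made-up MCi_STATUS values: a memory error (bit 7 set) and a
         * cache-hierarchy error (bit 8 set), for demonstration only */
        uint64_t samples[] = { 0x9f, 0x17a };

        for (int i = 0; i < 2; i++)
                printf("status=0x%llx  buggy=%d  fixed=%d\n",
                       (unsigned long long)samples[i],
                       skip_buggy(samples[i]), skip_fixed(samples[i]));
        return 0;
}

The buggy form reports 0 for both samples, while the parenthesized form only
skips the non-memory error.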
On Fri, Apr 21, 2017 at 12:12 AM, Oliver O'Halloran wrote:
> Read the default alignment from the hpage_pmd_size in sysfs. On PPC the
> PMD size depends on the MMU being used. When the traditional hash MMU is
> used (P9 and earlier) the PMD size is 16MB, while the newer radix MMU
> uses a 2MB PMD size. The choice of MMU is made at runtime depending on
> what the [...]
These are needed on powerpc since 64K is the default page size and 16MB
is the PMD size when using the hash MMU.
Signed-off-by: Oliver O'Halloran
---
ndctl/builtin-xaction-namespace.c | 2 ++
util/size.h | 1 +
2 files changed, 3 insertions(+)
diff --git [...]
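A rough sketch of the approach (not the actual ndctl change; the fallback
constant and the default_align() helper name are illustrative): read the PMD
size the kernel reports via
/sys/kernel/mm/transparent_hugepage/hpage_pmd_size and fall back to 2MB when
the attribute is absent.

#include <stdio.h>

#define SZ_2M   (2UL << 20)

/* illustrative helper: return the platform's PMD-mappable huge page
 * size, falling back to the historical 2MB default */
static unsigned long default_align(void)
{
        unsigned long size = SZ_2M;
        FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");

        if (f) {
                if (fscanf(f, "%lu", &size) != 1)
                        size = SZ_2M;
                fclose(f);
        }
        return size;
}

int main(void)
{
        printf("default namespace alignment: %lu bytes\n", default_align());
        return 0;
}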