Re: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-05-28 Thread Dan Williams
On Thu, May 28, 2015 at 1:51 PM, Linda Knippers  wrote:
> On 5/28/2015 3:59 PM, Dan Williams wrote:
>> On Thu, May 28, 2015 at 11:36 AM, Toshi Kani  wrote:
>>> On Sat, 2015-05-09 at 16:55 -0700, Dan Williams wrote:
>>>> On Mon, May 4, 2015 at 1:26 PM, Toshi Kani  wrote:
>>>>> On Tue, 2015-04-28 at 14:24 -0400, Dan Williams wrote:
>>>>>>  :
>>>>>
>>>>> libnd does not support memdev->flags, which contains the "Memory Device
>>>>> State Flags" defined in Table 5-129 of ACPI 6.0.  In case of major
>>>>> errors, we should only allow a failed NVDIMM to be accessed read-only
>>>>> for possible data recovery (or not allow any access when the data is
>>>>> completely lost), and should not let users operate normally on the
>>>>> corrupted data until the error is dealt with properly.
>>>>
>>>> I agree with setting read-only access when these flags show that the
>>>> battery is not ready to persist new writes, but I don't think we
>>>> should block access in the case where the restore from flash failed.
>>>> If the data is potentially corrupted we should log that fact, but
>>>> otherwise enable access.  I.e. potentially corrupt data is better than
>>>> unavailable data.  It's up to the filesystem or application to maintain
>>>> its own checksums to catch data corruption.
>>>>
>>>>> Can you set memdev->flags to nd_region(_desc) so that the pmem driver
>>>>> can check the status in nd_pmem_probe()?  nd_pmem_probe() can then set
>>>>> the disk read-only or fail probing, and log errors accordingly.
>>>>
>>>> Will do.
>>>
>>> I do not see this change in v4.  Is this part of the pending changes
>>> behind this release?
>>
>> Yes, I was holding it off until we had an upstream acceptance baseline
>> set.  That is on hold pending Christoph's review.  He's looking to
>> come back next Wednesday with deeper review comments.  The runway to
>> land this in v4.2 is running short...
>
> Hi Dan,
>
> Do you have a short list of pending changes, especially if some weren't
> discussed on the list?  That might help reviewers.
>
> I know we're still looking at and trying a number of things, like using
> the BTT on today's NVDIMMs and adding another example DSM, so we will
> have more feedback and patches and may need to adapt some of the
> structure to do that.  This can happen after the initial patches are
> pulled in but I just wanted to let you know where we are.  Not sure
> about others.
>

It seems it's just Christoph who has asserted there are things he'd
like changed, so I don't see much potential for confusion in sending
out the pending backlog.  I'll see to it.


Re: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-05-28 Thread Linda Knippers
On 5/28/2015 3:59 PM, Dan Williams wrote:
> On Thu, May 28, 2015 at 11:36 AM, Toshi Kani  wrote:
>> On Sat, 2015-05-09 at 16:55 -0700, Dan Williams wrote:
>>> On Mon, May 4, 2015 at 1:26 PM, Toshi Kani  wrote:
>>>> On Tue, 2015-04-28 at 14:24 -0400, Dan Williams wrote:
>>>>>  :
>>>>
>>>> libnd does not support memdev->flags, which contains the "Memory Device
>>>> State Flags" defined in Table 5-129 of ACPI 6.0.  In case of major
>>>> errors, we should only allow a failed NVDIMM to be accessed read-only
>>>> for possible data recovery (or not allow any access when the data is
>>>> completely lost), and should not let users operate normally on the
>>>> corrupted data until the error is dealt with properly.
>>>
>>> I agree with setting read-only access when these flags show that the
>>> battery is not ready to persist new writes, but I don't think we
>>> should block access in the case where the restore from flash failed.
>>> If the data is potentially corrupted we should log that fact, but
>>> otherwise enable access.  I.e. potentially corrupt data is better than
>>> unavailable data.  It's up to the filesystem or application to maintain
>>> its own checksums to catch data corruption.
>>>
>>>> Can you set memdev->flags to nd_region(_desc) so that the pmem driver
>>>> can check the status in nd_pmem_probe()?  nd_pmem_probe() can then set
>>>> the disk read-only or fail probing, and log errors accordingly.
>>>
>>> Will do.
>>
>> I do not see this change in v4.  Is this part of the pending changes
>> behind this release?
> 
> Yes, I was holding it off until we had an upstream acceptance baseline
> set.  That is on hold pending Christoph's review.  He's looking to
> come back next Wednesday with deeper review comments.  The runway to
> land this in v4.2 is running short...

Hi Dan,

Do you have a short list of pending changes, especially if some weren't
discussed on the list?  That might help reviewers.

I know we're still looking at and trying a number of things, like using
the BTT on today's NVDIMMs and adding another example DSM, so we will
have more feedback and patches and may need to adapt some of the
structure to do that.  This can happen after the initial patches are
pulled in but I just wanted to let you know where we are.  Not sure
about others.

-- ljk


Re: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-05-28 Thread Dan Williams
On Thu, May 28, 2015 at 11:36 AM, Toshi Kani  wrote:
> On Sat, 2015-05-09 at 16:55 -0700, Dan Williams wrote:
>> On Mon, May 4, 2015 at 1:26 PM, Toshi Kani  wrote:
>> > On Tue, 2015-04-28 at 14:24 -0400, Dan Williams wrote:
>  :
>> >
>> > libnd does not support memdev->flags, which contains the "Memory Device
>> > State Flags" defined in Table 5-129 of ACPI 6.0.  In case of major
>> > errors, we should only allow a failed NVDIMM to be accessed read-only
>> > for possible data recovery (or not allow any access when the data is
>> > completely lost), and should not let users operate normally on the
>> > corrupted data until the error is dealt with properly.
>>
>> I agree with setting read-only access when these flags show that the
>> battery is not ready to persist new writes, but I don't think we
>> should block access in the case where the restore from flash failed.
>> If the data is potentially corrupted we should log that fact, but
>> otherwise enable access.  I.e. potentially corrupt data is better than
>> unavailable data.  It's up to the filesystem or application to maintain
>> its own checksums to catch data corruption.
>>
>> > Can you set memdev->flags to nd_region(_desc) so that the pmem driver
>> > can check the status in nd_pmem_probe()?  nd_pmem_probe() can then set
>> > the disk read-only or fail probing, and log errors accordingly.
>>
>> Will do.
>
> I do not see this change in v4.  Is this part of the pending changes
> behind this release?

Yes, I was holding it off until we had an upstream acceptance baseline
set.  That is on hold pending Christoph's review.  He's looking to
come back next Wednesday with deeper review comments.  The runway to
land this in v4.2 is running short...


Re: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-05-28 Thread Toshi Kani
On Sat, 2015-05-09 at 16:55 -0700, Dan Williams wrote:
> On Mon, May 4, 2015 at 1:26 PM, Toshi Kani  wrote:
> > On Tue, 2015-04-28 at 14:24 -0400, Dan Williams wrote:
 :
> >
> > libnd does not support memdev->flags, which contains the "Memory Device
> > State Flags" defined in Table 5-129 of ACPI 6.0.  In case of major
> > errors, we should only allow a failed NVDIMM to be accessed read-only
> > for possible data recovery (or not allow any access when the data is
> > completely lost), and should not let users operate normally on the
> > corrupted data until the error is dealt with properly.
> 
> I agree with setting read-only access when these flags show that the
> battery is not ready to persist new writes, but I don't think we
> should block access in the case where the restore from flash failed.
> If the data is potentially corrupted we should log that fact, but
> otherwise enable access.  I.e. potentially corrupt data is better than
> unavailable data.  It's up to the filesystem or application to maintain
> its own checksums to catch data corruption.
> 
> > Can you set memdev->flags to nd_region(_desc) so that the pmem driver
> > can check the status in nd_pmem_probe()?  nd_pmem_probe() can then set
> > the disk read-only or fail probing, and log errors accordingly.
> 
> Will do.

I do not see this change in v4.  Is this part of the pending changes
behind this release?

Thanks,
-Toshi



Re: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-05-09 Thread Dan Williams
On Mon, May 4, 2015 at 1:26 PM, Toshi Kani  wrote:
> On Tue, 2015-04-28 at 14:24 -0400, Dan Williams wrote:
>  :
>> +
>> +static int nd_acpi_register_region(struct acpi_nfit_desc *acpi_desc,
>> + struct nfit_spa *nfit_spa)
>> +{
>> + static struct nd_mapping nd_mappings[ND_MAX_MAPPINGS];
>> + struct acpi_nfit_spa *spa = nfit_spa->spa;
>> + struct nfit_memdev *nfit_memdev;
>> + struct nd_region_desc ndr_desc;
>> + int spa_type, count = 0;
>> + struct resource res;
>> + u16 spa_index;
>> +
>> + spa_type = nfit_spa_type(spa);
>> + spa_index = spa->spa_index;
>> + if (spa_index == 0) {
>> + dev_dbg(acpi_desc->dev, "%s: detected invalid spa index\n",
>> + __func__);
>> + return 0;
>> + }
>> +
>> + memset(&res, 0, sizeof(res));
>> + memset(&nd_mappings, 0, sizeof(nd_mappings));
>> + memset(&ndr_desc, 0, sizeof(ndr_desc));
>> + res.start = spa->spa_base;
>> + res.end = res.start + spa->spa_length - 1;
>> + ndr_desc.res = &res;
>> + ndr_desc.provider_data = nfit_spa;
>> + ndr_desc.attr_groups = nd_acpi_region_attribute_groups;
>> + list_for_each_entry(nfit_memdev, &acpi_desc->memdevs, list) {
>> + struct acpi_nfit_memdev *memdev = nfit_memdev->memdev;
>> + struct nd_mapping *nd_mapping;
>> + struct nd_dimm *nd_dimm;
>> +
>> + if (memdev->spa_index != spa_index)
>> + continue;
>
> libnd does not support memdev->flags, which contains the "Memory Device
> State Flags" defined in Table 5-129 of ACPI 6.0.  In case of major
> errors, we should only allow a failed NVDIMM to be accessed read-only
> for possible data recovery (or not allow any access when the data is
> completely lost), and should not let users operate normally on the
> corrupted data until the error is dealt with properly.

I agree with setting read-only access when these flags show that the
battery is not ready to persist new writes, but I don't think we
should block access in the case where the restore from flash failed.
If the data is potentially corrupted we should log that fact, but
otherwise enable access.  I.e. potentially corrupt data is better than
unavailable data.  It's up to the filesystem or application to maintain
its own checksums to catch data corruption.

> Can you set memdev->flags to nd_region(_desc) so that the pmem driver
> can check the status in nd_pmem_probe()?  nd_pmem_probe() can then set
> the disk read-only or fail probing, and log errors accordingly.

Will do.
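
For context, here is a minimal sketch of the probe-time policy being
discussed above, assuming the memdev flags get plumbed through
nd_region_desc to the pmem driver.  The flag macros and the helper name
are illustrative only (the bit positions follow the "Memory Device State
Flags" in Table 5-129 of ACPI 6.0); this is not the actual patch:

/*
 * Illustrative sketch, not the actual patch: flag and helper names are
 * hypothetical; bit positions follow ACPI 6.0 Table 5-129.
 */
#include <linux/device.h>
#include <linux/types.h>

#define ND_MEMDEV_SAVE_FAILED		(1 << 0)	/* previous SAVE to flash failed */
#define ND_MEMDEV_RESTORE_FAILED	(1 << 1)	/* last RESTORE from flash failed */
#define ND_MEMDEV_FLUSH_FAILED		(1 << 2)	/* platform flush failed */
#define ND_MEMDEV_NOT_ARMED		(1 << 3)	/* energy source not armed */

/* decide whether a pmem probe routine should expose the disk read-only */
static bool nd_pmem_want_read_only(struct device *dev, unsigned long flags)
{
	if (flags & ND_MEMDEV_NOT_ARMED) {
		/* new writes may not persist: allow read-only data recovery */
		dev_warn(dev, "NVDIMM not armed, marking disk read-only\n");
		return true;
	}

	if (flags & (ND_MEMDEV_SAVE_FAILED | ND_MEMDEV_RESTORE_FAILED |
			ND_MEMDEV_FLUSH_FAILED))
		/* data may be stale or corrupt: log it, but keep access enabled */
		dev_warn(dev, "save/restore/flush failure, data may be corrupt\n");

	return false;
}

A probe routine could then call set_disk_ro(disk, 1) when this returns
true, which matches the read-only behavior agreed on in this exchange.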


Re: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-05-04 Thread Toshi Kani
On Tue, 2015-04-28 at 14:24 -0400, Dan Williams wrote:
 :
> +
> +static int nd_acpi_register_region(struct acpi_nfit_desc *acpi_desc,
> + struct nfit_spa *nfit_spa)
> +{
> + static struct nd_mapping nd_mappings[ND_MAX_MAPPINGS];
> + struct acpi_nfit_spa *spa = nfit_spa->spa;
> + struct nfit_memdev *nfit_memdev;
> + struct nd_region_desc ndr_desc;
> + int spa_type, count = 0;
> + struct resource res;
> + u16 spa_index;
> +
> + spa_type = nfit_spa_type(spa);
> + spa_index = spa->spa_index;
> + if (spa_index == 0) {
> + dev_dbg(acpi_desc->dev, "%s: detected invalid spa index\n",
> + __func__);
> + return 0;
> + }
> +
> + memset(&res, 0, sizeof(res));
> + memset(&nd_mappings, 0, sizeof(nd_mappings));
> + memset(&ndr_desc, 0, sizeof(ndr_desc));
> + res.start = spa->spa_base;
> + res.end = res.start + spa->spa_length - 1;
> + ndr_desc.res = &res;
> + ndr_desc.provider_data = nfit_spa;
> + ndr_desc.attr_groups = nd_acpi_region_attribute_groups;
> + list_for_each_entry(nfit_memdev, &acpi_desc->memdevs, list) {
> + struct acpi_nfit_memdev *memdev = nfit_memdev->memdev;
> + struct nd_mapping *nd_mapping;
> + struct nd_dimm *nd_dimm;
> +
> + if (memdev->spa_index != spa_index)
> + continue;

libnd does not support memdev->flags, which contains the "Memory Device
State Flags" defined in Table 5-129 of ACPI 6.0.  In case of major
errors, we should only allow a failed NVDIMM to be accessed read-only
for possible data recovery (or not allow any access when the data is
completely lost), and should not let users operate normally on the
corrupted data until the error is dealt with properly.

Can you set memdev->flags to nd_region(_desc) so that the pmem driver
can check the status in nd_pmem_probe()?  nd_pmem_probe() can then set
the disk read-only or fail probing, and log errors accordingly.

Thanks,
-Toshi


Re: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-04-29 Thread Dan Williams
On Wed, Apr 29, 2015 at 8:53 AM, Elliott, Robert (Server Storage)
 wrote:
>> -Original Message-
>> From: Linux-nvdimm [mailto:linux-nvdimm-boun...@lists.01.org] On Behalf Of
>> Dan Williams
>> Sent: Tuesday, April 28, 2015 1:25 PM
>> Subject: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-
>> data-window, persistent memory, volatile memory)
>>
>> A "region" device represents the maximum capacity of a BLK range (mmio
>> block-data-window(s)), or a PMEM range (DAX-capable persistent memory or
>> volatile memory), without regard for aliasing.  Aliasing, in the
>> dimm-local address space (DPA), is resolved by metadata on a dimm to
>> designate which exclusive interface will access the aliased DPA ranges.
>> Support for the per-dimm metadata/label arrives in a subsequent
>> patch.
>>
>> The name format of "region" devices is "regionN" where, like dimms, N is
>> a global ida index assigned at discovery time.  This id is not reliable
>> across reboots nor in the presence of hotplug.  Look to attributes of
>> the region or static id-data of the sub-namespace to generate a
>> persistent name.
> ...
>> +++ b/drivers/block/nd/region_devs.c
> ...
>> +static noinline struct nd_region *nd_region_create(struct nd_bus *nd_bus,
>> + struct nd_region_desc *ndr_desc, struct device_type *dev_type)
>> +{
>> + struct nd_region *nd_region;
>> + struct device *dev;
>> + u16 i;
>> +
>> + for (i = 0; i < ndr_desc->num_mappings; i++) {
>> + struct nd_mapping *nd_mapping = &ndr_desc->nd_mapping[i];
>> + struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
>> +
>> + if ((nd_mapping->start | nd_mapping->size) % SZ_4K) {
>> + dev_err(&nd_bus->dev, "%pf: %s mapping%d is not 4K
>> aligned\n",
>> + __builtin_return_address(0),
>
> Please use "KiB" rather than the unclear "K".

Ok.

> Same comment for a dev_dbg print in patch 14.

It's a debug statement, but ok.

[..]
>
> Could this include "nd" in the name, like "ndregion%d"?
>
> The other dev_set_name calls in this patch set use:
> btt%d
> ndbus%d
> nmem%d
> namespace%d.%d
>
> which are a bit more distinctive.

They sit on an "nd" bus and don't have global device nodes, so I don't
see a need to make them any more distinctive.
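
As an aside on the alignment test quoted in this exchange: the
expression (nd_mapping->start | nd_mapping->size) % SZ_4K folds two
alignment checks into one, because any low-order bits set in either
value survive the OR.  A small stand-alone illustration, with SZ_4K
defined locally for the example (the kernel gets it from linux/sizes.h):

#include <stdio.h>

#define SZ_4K 0x1000UL

/* equivalent to (start % SZ_4K) || (size % SZ_4K) */
static int misaligned(unsigned long start, unsigned long size)
{
	return (start | size) % SZ_4K != 0;
}

int main(void)
{
	printf("%d\n", misaligned(0x10000, 0x4000));	/* 0: both 4 KiB aligned */
	printf("%d\n", misaligned(0x10200, 0x4000));	/* 1: start misaligned */
	printf("%d\n", misaligned(0x10000, 0x4200));	/* 1: size misaligned */
	return 0;
}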


RE: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-data-window, persistent memory, volatile memory)

2015-04-29 Thread Elliott, Robert (Server Storage)
> -Original Message-
> From: Linux-nvdimm [mailto:linux-nvdimm-boun...@lists.01.org] On Behalf Of
> Dan Williams
> Sent: Tuesday, April 28, 2015 1:25 PM
> Subject: [Linux-nvdimm] [PATCH v2 08/20] libnd, nd_acpi: regions (block-
> data-window, persistent memory, volatile memory)
> 
> A "region" device represents the maximum capacity of a BLK range (mmio
> block-data-window(s)), or a PMEM range (DAX-capable persistent memory or
> volatile memory), without regard for aliasing.  Aliasing, in the
> dimm-local address space (DPA), is resolved by metadata on a dimm to
> designate which exclusive interface will access the aliased DPA ranges.
> Support for the per-dimm metadata/label arrives in a subsequent
> patch.
> 
> The name format of "region" devices is "regionN" where, like dimms, N is
> a global ida index assigned at discovery time.  This id is not reliable
> across reboots nor in the presence of hotplug.  Look to attributes of
> the region or static id-data of the sub-namespace to generate a
> persistent name.
...
> +++ b/drivers/block/nd/region_devs.c
...
> +static noinline struct nd_region *nd_region_create(struct nd_bus *nd_bus,
> + struct nd_region_desc *ndr_desc, struct device_type *dev_type)
> +{
> + struct nd_region *nd_region;
> + struct device *dev;
> + u16 i;
> +
> + for (i = 0; i < ndr_desc->num_mappings; i++) {
> + struct nd_mapping *nd_mapping = &ndr_desc->nd_mapping[i];
> + struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
> +
> + if ((nd_mapping->start | nd_mapping->size) % SZ_4K) {
> + dev_err(&nd_bus->dev, "%pf: %s mapping%d is not 4K
> aligned\n",
> + __builtin_return_address(0),

Please use "KiB" rather than the unclear "K".

Same comment for a dev_dbg print in patch 14.

> + dev_name(&nd_dimm->dev), i);
> +
> + return NULL;
> + }
> + }
> +
> + nd_region = kzalloc(sizeof(struct nd_region)
> + + sizeof(struct nd_mapping) * ndr_desc->num_mappings,
> + GFP_KERNEL);
> + if (!nd_region)
> + return NULL;
> + nd_region->id = ida_simple_get(&region_ida, 0, 0, GFP_KERNEL);
> + if (nd_region->id < 0) {
> + kfree(nd_region);
> + return NULL;
> + }
> +
> + memcpy(nd_region->mapping, ndr_desc->nd_mapping,
> + sizeof(struct nd_mapping) * ndr_desc->num_mappings);
> + for (i = 0; i < ndr_desc->num_mappings; i++) {
> + struct nd_mapping *nd_mapping = &ndr_desc->nd_mapping[i];
> + struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
> +
> + get_device(&nd_dimm->dev);
> + }
> + nd_region->ndr_mappings = ndr_desc->num_mappings;
> + nd_region->provider_data = ndr_desc->provider_data;
> + dev = &nd_region->dev;
> + dev_set_name(dev, "region%d", nd_region->id);

Could this include "nd" in the name, like "ndregion%d"?

The other dev_set_name calls in this patch set use:
btt%d
ndbus%d
nmem%d
namespace%d.%d

which are a bit more distinctive.

---
Robert Elliott, HP Server Storage
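
On the commit-message point that regionN numbering is not stable across
reboots or hotplug: a hedged userspace sketch of keying off static
namespace id-data instead.  The sysfs path and the "uuid" attribute name
below are assumptions made for illustration, not something defined by
this patch set:

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical example: read an assumed "uuid" attribute of a namespace
 * device on the "nd" bus instead of relying on discovery-order names
 * such as region0 or namespace0.0.
 */
static int read_ns_uuid(const char *ns, char *buf, int len)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/bus/nd/devices/%s/uuid", ns);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	buf[strcspn(buf, "\n")] = '\0';	/* strip trailing newline */
	fclose(f);
	return 0;
}

int main(void)
{
	char uuid[64];

	if (read_ns_uuid("namespace0.0", uuid, sizeof(uuid)) == 0)
		printf("stable id for namespace0.0: %s\n", uuid);
	return 0;
}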