RE: [PATCH v3 0/3] fsdax: Factor helper functions to simplify the code
Hi Dan,

Do you have any comments on this?

--
Thanks,
Ruan Shiyang.

> -----Original Message-----
> From: Shiyang Ruan
> Sent: Thursday, April 22, 2021 9:45 PM
> Subject: [PATCH v3 0/3] fsdax: Factor helper functions to simplify the code
>
> From: Shiyang Ruan
>
> The page fault part of the fsdax code is a little complex. In order to add
> the CoW feature and make it easy to understand, it was suggested that I
> factor out some helper functions to simplify the current dax code.
>
> This is separated from the previous patchset called "V3 fsdax,xfs: Add
> reflink&dedupe support for fsdax"; the previous comments are here[1].
>
> [1]: https://patchwork.kernel.org/project/linux-nvdimm/patch/20210319015237.993880-3-ruansy.f...@fujitsu.com/
>
> Changes from V2:
>  - Fix the type of 'major' in patch 2
>  - Rebased on v5.12-rc8
>
> Changes from V1:
>  - Fix Ritesh's email address
>  - Simplify return logic in dax_fault_cow_page()
>
> (Rebased on v5.12-rc8)
> ==
>
> Shiyang Ruan (3):
>   fsdax: Factor helpers to simplify dax fault code
>   fsdax: Factor helper: dax_fault_actor()
>   fsdax: Output address in dax_iomap_pfn() and rename it
>
>  fs/dax.c | 443 +--
>  1 file changed, 234 insertions(+), 209 deletions(-)
>
> --
> 2.31.1
[PATCH v3] powerpc/papr_scm: Reduce error severity if nvdimm stats inaccessible
Currently drc_pmem_query_stats() generates a dev_err in case the "Enable
Performance Information Collection" feature is disabled from the HMC or
performance stats are not available for an nvdimm. The error is of the
form below:

    papr_scm ibm,persistent-memory:ibm,pmemory@44104001: Failed to query
    performance stats, Err:-10

This error message confuses users, as it implies a possible problem with
the nvdimm even though it's due to a disabled/unavailable feature. Fix
this by explicitly handling the H_AUTHORITY and H_UNSUPPORTED errors
from the H_SCM_PERFORMANCE_STATS hcall.

In case of an H_AUTHORITY error, an info message is logged instead of an
error, saying "Permission denied while accessing performance stats",
and an EPERM error is returned. In case of an H_UNSUPPORTED error, an
EOPNOTSUPP error is returned from drc_pmem_query_stats(), indicating
that the performance stats-query operation is not supported on this
nvdimm.

Fixes: 2d02bf835e57 ("powerpc/papr_scm: Fetch nvdimm performance stats from PHYP")
Signed-off-by: Vaibhav Jain
---
Changelog

v3:
* Return EOPNOTSUPP error in case of H_UNSUPPORTED [ Ira ]
* Return EPERM in case of H_AUTHORITY [ Ira ]
* Updated patch description

v2:
* Updated the message logged in case of H_AUTHORITY error [ Ira ]
* Switched from dev_warn to dev_info in case of H_AUTHORITY error.
* Instead of -EPERM return -EACCES for H_AUTHORITY error.
* Added explicit handling of H_UNSUPPORTED error.
---
 arch/powerpc/platforms/pseries/papr_scm.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
index ef26fe40efb0..e2b69cc3beaf 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -310,6 +310,13 @@ static ssize_t drc_pmem_query_stats(struct papr_scm_priv *p,
 		dev_err(&p->pdev->dev,
 			"Unknown performance stats, Err:0x%016lX\n", ret[0]);
 		return -ENOENT;
+	} else if (rc == H_AUTHORITY) {
+		dev_info(&p->pdev->dev,
+			 "Permission denied while accessing performance stats");
+		return -EPERM;
+	} else if (rc == H_UNSUPPORTED) {
+		dev_dbg(&p->pdev->dev, "Performance stats unsupported\n");
+		return -EOPNOTSUPP;
 	} else if (rc != H_SUCCESS) {
 		dev_err(&p->pdev->dev,
 			"Failed to query performance stats, Err:%lld\n", rc);
--
2.31.1
Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
On Thu, May 06, 2021 at 11:47:47AM -0700, James Bottomley wrote:
> On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> > On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
> [...]
> > > > I think that a very complete description of the threats which
> > > > this feature addresses would be helpful.
> > >
> > > It's designed to protect against three different threats:
> > >
> > >    1. Detection of user secret memory mismanagement
> >
> > I would say "cross-process secret userspace memory exposures" (via a
> > number of common interfaces by blocking it at the GUP level).
> >
> > >    2. significant protection against privilege escalation
> >
> > I don't see how this series protects against privilege escalation.
> > (It protects against exfiltration.) Maybe you mean include this in
> > the first bullet point (i.e. "cross-process secret userspace memory
> > exposures, even in the face of privileged processes")?
>
> It doesn't prevent privilege escalation from happening in the first
> place, but once the escalation has happened it protects against
> exfiltration by the newly minted root attacker.

So, after thinking a bit more about this, I don't think there is
protection here against privileged execution. This feature kind of
helps against cross-process read/write attempts, but it doesn't help
with sufficiently privileged (i.e. ptraced) execution, since we can
just ask the process itself to do the reading:

$ gdb ./memfd_secret
...
ready: 0x77ffb000

Breakpoint 1, ...
(gdb) compile code unsigned long addr = 0x77ffb000UL; printf("%016lx\n", *((unsigned long *)addr));
5

And since process_vm_readv() requires PTRACE_ATTACH, there's very
little difference in effort between process_vm_readv() and the above.

So, what other paths through GUP exist that aren't covered by
PTRACE_ATTACH? And if none, then should this actually just be done by
setting the process undumpable? (This is already what things like
gnupg do.)

So, the user-space side of this doesn't seem to really help.

The kernel side protection is interesting for kernel read/write flaws,
though, in the sense that the process is likely not being attacked
from "current", so a kernel-side attack would need to either walk the
page tables and create new ones, or spawn a new userspace process to
do the ptracing.

So, while I like the idea of this stuff, and I see how it provides
certain coverages, I'm curious to learn more about the threat model to
make sure it's actually providing meaningful hurdles to attacks.

--
Kees Cook
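[A minimal userspace sketch of the "undumpable" approach mentioned
above — not from the original thread. PR_SET_DUMPABLE is the prctl
that gnupg uses; once the dumpable flag is cleared, PTRACE_ATTACH from
a same-UID process fails with EPERM (CAP_SYS_PTRACE holders can still
attach):]

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	/* Opt out of ptrace attach and core dumps, gnupg-style. */
	if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) != 0) {
		perror("prctl(PR_SET_DUMPABLE)");
		return 1;
	}
	/* ... allocate and handle secrets here ... */
	return 0;
}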
Re: [PATCH] ACPI: NFIT: Fix support for variable 'SPA' structure size
On Fri, May 7, 2021 at 7:49 AM Rafael J. Wysocki wrote:
>
> On Fri, May 7, 2021 at 4:12 PM Dan Williams wrote:
> >
> > On Fri, May 7, 2021 at 2:47 AM Rafael J. Wysocki wrote:
> > >
> > > Hi Dan,
> > >
> > > On Fri, May 7, 2021 at 9:33 AM Dan Williams wrote:
> > > >
> > > > ACPI 6.4 introduced the "SpaLocationCookie" to the NFIT "System Physical
> > > > Address (SPA) Range Structure". The presence of that new field is
> > > > indicated by the ACPI_NFIT_LOCATION_COOKIE_VALID flag. Pre-ACPI-6.4
> > > > firmware implementations omit the flag and maintain the original size of
> > > > the structure.
> > > >
> > > > Update the implementation to check that flag to determine the size
> > > > rather than the ACPI 6.4 compliant definition of 'struct
> > > > acpi_nfit_system_address' from the Linux ACPICA definitions.
> > > >
> > > > Update the test infrastructure for the new expectations as well, i.e.
> > > > continue to emulate the ACPI 6.3 definition of that structure.
> > > >
> > > > Without this fix the kernel fails to validate 'SPA' structures and this
> > > > leads to a crash in nfit_get_smbios_id() since that routine assumes that
> > > > SPAs are valid if it finds valid SMBIOS tables.
> > > >
> > > >     BUG: unable to handle page fault for address: ffffffffffffffa8
> > > >     [..]
> > > >     Call Trace:
> > > >      skx_get_nvdimm_info+0x56/0x130 [skx_edac]
> > > >      skx_get_dimm_config+0x1f5/0x213 [skx_edac]
> > > >      skx_register_mci+0x132/0x1c0 [skx_edac]
> > > >
> > > > Cc: Bob Moore
> > > > Cc: Erik Kaneda
> > > > Fixes: cf16b05c607b ("ACPICA: ACPI 6.4: NFIT: add Location Cookie field")
> > >
> > > Do you want me to apply this (as the commit being fixed went in
> > > through the ACPI tree)?
> >
> > Yes, I would need to wait for a signed tag so if you're sending urgent
> > fixes in the next few days please take this one, otherwise I'll circle
> > back next week after -rc1.
>
> I'll be doing my next push after -rc1 either way, so I guess it doesn't
> matter time-wise.

Ok, I got it, thanks for the offer.
Re: [PATCH v3 2/2] secretmem: optimize page_is_secretmem()
On Tue, Apr 20, 2021 at 06:00:49PM +0300, Mike Rapoport wrote:
> +	mapping = (struct address_space *)
> +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
> +
> +	if (mapping != page->mapping)
> +		return false;
> +
> +	return page->mapping->a_ops == &secretmem_aops;

... why do you go back to page->mapping here?

	return mapping->a_ops == &secretmem_aops;
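[A sketch of page_is_secretmem() with the suggested simplification
applied — reconstructed for illustration, not taken verbatim from the
series:]

static bool page_is_secretmem(struct page *page)
{
	struct address_space *mapping;

	/* Strip the flag bits that anon/movable pages encode in
	 * page->mapping before treating it as an address_space. */
	mapping = (struct address_space *)
		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);

	if (mapping != page->mapping)
		return false;	/* flag bits were set: not a plain file page */

	/* Reuse the already-masked pointer, per the review comment. */
	return mapping->a_ops == &secretmem_aops;
}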
Re: [PATCH] ACPI: NFIT: Fix support for variable 'SPA' structure size
On Fri, May 7, 2021 at 4:12 PM Dan Williams wrote:
>
> On Fri, May 7, 2021 at 2:47 AM Rafael J. Wysocki wrote:
> >
> > Hi Dan,
> >
> > On Fri, May 7, 2021 at 9:33 AM Dan Williams wrote:
> > >
> > > ACPI 6.4 introduced the "SpaLocationCookie" to the NFIT "System Physical
> > > Address (SPA) Range Structure". The presence of that new field is
> > > indicated by the ACPI_NFIT_LOCATION_COOKIE_VALID flag. Pre-ACPI-6.4
> > > firmware implementations omit the flag and maintain the original size of
> > > the structure.
> > >
> > > Update the implementation to check that flag to determine the size
> > > rather than the ACPI 6.4 compliant definition of 'struct
> > > acpi_nfit_system_address' from the Linux ACPICA definitions.
> > >
> > > Update the test infrastructure for the new expectations as well, i.e.
> > > continue to emulate the ACPI 6.3 definition of that structure.
> > >
> > > Without this fix the kernel fails to validate 'SPA' structures and this
> > > leads to a crash in nfit_get_smbios_id() since that routine assumes that
> > > SPAs are valid if it finds valid SMBIOS tables.
> > >
> > >     BUG: unable to handle page fault for address: ffffffffffffffa8
> > >     [..]
> > >     Call Trace:
> > >      skx_get_nvdimm_info+0x56/0x130 [skx_edac]
> > >      skx_get_dimm_config+0x1f5/0x213 [skx_edac]
> > >      skx_register_mci+0x132/0x1c0 [skx_edac]
> > >
> > > Cc: Bob Moore
> > > Cc: Erik Kaneda
> > > Fixes: cf16b05c607b ("ACPICA: ACPI 6.4: NFIT: add Location Cookie field")
> >
> > Do you want me to apply this (as the commit being fixed went in
> > through the ACPI tree)?
>
> Yes, I would need to wait for a signed tag so if you're sending urgent
> fixes in the next few days please take this one, otherwise I'll circle
> back next week after -rc1.

I'll be doing my next push after -rc1 either way, so I guess it doesn't
matter time-wise.
Re: [PATCH] ACPI: NFIT: Fix support for variable 'SPA' structure size
On Fri, May 7, 2021 at 2:47 AM Rafael J. Wysocki wrote:
>
> Hi Dan,
>
> On Fri, May 7, 2021 at 9:33 AM Dan Williams wrote:
> >
> > ACPI 6.4 introduced the "SpaLocationCookie" to the NFIT "System Physical
> > Address (SPA) Range Structure". The presence of that new field is
> > indicated by the ACPI_NFIT_LOCATION_COOKIE_VALID flag. Pre-ACPI-6.4
> > firmware implementations omit the flag and maintain the original size of
> > the structure.
> >
> > Update the implementation to check that flag to determine the size
> > rather than the ACPI 6.4 compliant definition of 'struct
> > acpi_nfit_system_address' from the Linux ACPICA definitions.
> >
> > Update the test infrastructure for the new expectations as well, i.e.
> > continue to emulate the ACPI 6.3 definition of that structure.
> >
> > Without this fix the kernel fails to validate 'SPA' structures and this
> > leads to a crash in nfit_get_smbios_id() since that routine assumes that
> > SPAs are valid if it finds valid SMBIOS tables.
> >
> >     BUG: unable to handle page fault for address: ffffffffffffffa8
> >     [..]
> >     Call Trace:
> >      skx_get_nvdimm_info+0x56/0x130 [skx_edac]
> >      skx_get_dimm_config+0x1f5/0x213 [skx_edac]
> >      skx_register_mci+0x132/0x1c0 [skx_edac]
> >
> > Cc: Bob Moore
> > Cc: Erik Kaneda
> > Fixes: cf16b05c607b ("ACPICA: ACPI 6.4: NFIT: add Location Cookie field")
>
> Do you want me to apply this (as the commit being fixed went in
> through the ACPI tree)?

Yes, I would need to wait for a signed tag so if you're sending urgent
fixes in the next few days please take this one, otherwise I'll circle
back next week after -rc1.

>
> If you'd rather take care of it yourself:
>
> Reviewed-by: Rafael J. Wysocki

Thanks!
Re: [PATCH v2] powerpc/papr_scm: Reduce error severity if nvdimm stats inaccessible
Hi Ira,

Thanks for looking into this patch.

Ira Weiny writes:

> On Thu, May 06, 2021 at 12:46:06AM +0530, Vaibhav Jain wrote:
>> Currently drc_pmem_query_stats() generates a dev_err in case the
>> "Enable Performance Information Collection" feature is disabled from
>> the HMC or performance stats are not available for an nvdimm. The
>> error is of the form below:
>>
>> papr_scm ibm,persistent-memory:ibm,pmemory@44104001: Failed to query
>> performance stats, Err:-10
>>
>> This error message confuses users as it implies a possible problem
>> with the nvdimm even though it's due to a disabled/unavailable
>> feature. We fix this by explicitly handling the H_AUTHORITY and
>> H_UNSUPPORTED errors from the H_SCM_PERFORMANCE_STATS hcall.
>>
>> In case of H_AUTHORITY error an info message is logged instead of an
>> error, saying that "Permission denied while accessing performance
>> stats". Also a '-EACCES' error is returned instead of -EPERM.
>
> I thought you clarified before that this was a permission issue. So why
> change the error to EACCES?

EACCES ("Permission denied") felt like a more accurate error code for
this case than EPERM ("Operation not permitted"), so I switched to it
and repurposed the EPERM error code to handle the case where this hcall
is not supported for an nvdimm.

>>
>> In case of H_UNSUPPORTED error we return a -EPERM error back from
>> drc_pmem_query_stats() indicating that the performance stats-query
>> operation is not supported on this nvdimm.
>
> EPERM seems wrong here too... ENOTSUP?

Yes, will change it to EOPNOTSUPP in v3.

> Ira

--
Cheers
~ Vaibhav
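[A compact sketch of the errno mapping settled on in this exchange —
illustration only; hcall_to_errno() is a hypothetical helper, the
driver open-codes these branches inside drc_pmem_query_stats(), as the
v3 patch earlier in this digest shows:]

/* Hypothetical helper summarizing the agreed v3 mapping. */
static int hcall_to_errno(long rc)
{
	switch (rc) {
	case H_SUCCESS:
		return 0;
	case H_AUTHORITY:	/* perf-info collection disabled from the HMC */
		return -EPERM;
	case H_UNSUPPORTED:	/* stats query not supported on this nvdimm */
		return -EOPNOTSUPP;
	default:		/* assumption: generic failure for this sketch */
		return -ENXIO;
	}
}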
Re: [PATCH] powerpc/papr_scm: Make 'perf_stats' invisible if perf-stats unavailable
"Aneesh Kumar K.V" writes: > Vaibhav Jain writes: > >> In case performance stats for an nvdimm are not available, reading the >> 'perf_stats' sysfs file returns an -ENOENT error. A better approach is >> to make the 'perf_stats' file entirely invisible to indicate that >> performance stats for an nvdimm are unavailable. >> >> So this patch updates 'papr_nd_attribute_group' to add a 'is_visible' >> callback implemented as newly introduced 'papr_nd_attribute_visible()' >> that returns an appropriate mode in case performance stats aren't >> supported in a given nvdimm. >> >> Also the initialization of 'papr_scm_priv.stat_buffer_len' is moved >> from papr_scm_nvdimm_init() to papr_scm_probe() so that it value is >> available when 'papr_nd_attribute_visible()' is called during nvdimm >> initialization. >> >> Fixes: 2d02bf835e57('powerpc/papr_scm: Fetch nvdimm performance stats from >> PHYP') >> Signed-off-by: Vaibhav Jain >> --- >> arch/powerpc/platforms/pseries/papr_scm.c | 37 --- >> 1 file changed, 26 insertions(+), 11 deletions(-) >> >> diff --git a/arch/powerpc/platforms/pseries/papr_scm.c >> b/arch/powerpc/platforms/pseries/papr_scm.c >> index 12f1513f0fca..90f0af8fefe8 100644 >> --- a/arch/powerpc/platforms/pseries/papr_scm.c >> +++ b/arch/powerpc/platforms/pseries/papr_scm.c >> @@ -907,6 +907,20 @@ static ssize_t flags_show(struct device *dev, >> } >> DEVICE_ATTR_RO(flags); >> >> +umode_t papr_nd_attribute_visible(struct kobject *kobj, struct attribute >> *attr, >> + int n) >> +{ >> +struct device *dev = container_of(kobj, typeof(*dev), kobj); >> +struct nvdimm *nvdimm = to_nvdimm(dev); >> +struct papr_scm_priv *p = nvdimm_provider_data(nvdimm); >> + >> +/* For if perf-stats not available remove perf_stats sysfs */ >> +if (attr == &dev_attr_perf_stats.attr && p->stat_buffer_len == 0) >> +return 0; >> + >> +return attr->mode; >> +} >> + >> /* papr_scm specific dimm attributes */ >> static struct attribute *papr_nd_attributes[] = { >> &dev_attr_flags.attr, >> @@ -916,6 +930,7 @@ static struct attribute *papr_nd_attributes[] = { >> >> static struct attribute_group papr_nd_attribute_group = { >> .name = "papr", >> +.is_visible = papr_nd_attribute_visible, >> .attrs = papr_nd_attributes, >> }; >> >> @@ -931,7 +946,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p) >> struct nd_region_desc ndr_desc; >> unsigned long dimm_flags; >> int target_nid, online_nid; >> -ssize_t stat_size; >> >> p->bus_desc.ndctl = papr_scm_ndctl; >> p->bus_desc.module = THIS_MODULE; >> @@ -1016,16 +1030,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv >> *p) >> list_add_tail(&p->region_list, &papr_nd_regions); >> mutex_unlock(&papr_ndr_lock); >> >> -/* Try retriving the stat buffer and see if its supported */ >> -stat_size = drc_pmem_query_stats(p, NULL, 0); >> -if (stat_size > 0) { >> -p->stat_buffer_len = stat_size; >> -dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n", >> -p->stat_buffer_len); >> -} else { >> -dev_info(&p->pdev->dev, "Dimm performance stats unavailable\n"); >> -} >> - >> return 0; >> >> err:nvdimm_bus_unregister(p->bus); >> @@ -1102,6 +1106,7 @@ static int papr_scm_probe(struct platform_device *pdev) >> u64 blocks, block_size; >> struct papr_scm_priv *p; >> const char *uuid_str; >> +ssize_t stat_size; >> u64 uuid[2]; >> int rc; >> >> @@ -1179,6 +1184,16 @@ static int papr_scm_probe(struct platform_device >> *pdev) >> p->res.name = pdev->name; >> p->res.flags = IORESOURCE_MEM; >> >> +/* Try retriving the stat buffer and see if its supported */ >> +stat_size = 
drc_pmem_query_stats(p, NULL, 0); >> +if (stat_size > 0) { >> +p->stat_buffer_len = stat_size; >> +dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n", >> +p->stat_buffer_len); >> +} else { >> +dev_info(&p->pdev->dev, "Dimm performance stats unavailable\n"); >> +} > > With this patch > https://lore.kernel.org/linuxppc-dev/20210505191606.51666-1-vaib...@linux.ibm.com > We are adding details of whyy performance stat query hcall failed. Do we > need to print again here? Are we being more verbose here? > Yes agree this looks more verbose with the other patch you mentioned. I have sent out a v2 of this patch with this dev_info removed. > -aneesh > ___ > Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org > To unsubscribe send an email to linux-nvdimm-le...@lists.01.org -- Cheers ~ Vaibhav ___ Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org To unsubscribe send an email to linux-nvdimm-le...@lists.01.org
[PATCH v2] powerpc/papr_scm: Make 'perf_stats' invisible if perf-stats unavailable
In case performance stats for an nvdimm are not available, reading the
'perf_stats' sysfs file returns an -ENOENT error. A better approach is
to make the 'perf_stats' file entirely invisible to indicate that
performance stats for an nvdimm are unavailable.

So this patch updates 'papr_nd_attribute_group' to add an 'is_visible'
callback, implemented as the newly introduced 'papr_nd_attribute_visible()',
that returns an appropriate mode in case performance stats aren't
supported on a given nvdimm.

Also the initialization of 'papr_scm_priv.stat_buffer_len' is moved
from papr_scm_nvdimm_init() to papr_scm_probe() so that its value is
available when 'papr_nd_attribute_visible()' is called during nvdimm
initialization.

Fixes: 2d02bf835e57 ("powerpc/papr_scm: Fetch nvdimm performance stats from PHYP")
Signed-off-by: Vaibhav Jain
---
Changelog:

v2:
* Removed a redundant dev_info() from papr_scm_nvdimm_init() [ Aneesh ]
* Marked papr_nd_attribute_visible() as static, which also fixed the
  build warning reported by the kernel build robot
---
 arch/powerpc/platforms/pseries/papr_scm.c | 35 ++++++++++++++++++++++++-----------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
index e2b69cc3beaf..11e7b90a3360 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -907,6 +907,20 @@ static ssize_t flags_show(struct device *dev,
 }
 DEVICE_ATTR_RO(flags);

+static umode_t papr_nd_attribute_visible(struct kobject *kobj,
+					 struct attribute *attr, int n)
+{
+	struct device *dev = container_of(kobj, typeof(*dev), kobj);
+	struct nvdimm *nvdimm = to_nvdimm(dev);
+	struct papr_scm_priv *p = nvdimm_provider_data(nvdimm);
+
+	/* For if perf-stats not available remove perf_stats sysfs */
+	if (attr == &dev_attr_perf_stats.attr && p->stat_buffer_len == 0)
+		return 0;
+
+	return attr->mode;
+}
+
 /* papr_scm specific dimm attributes */
 static struct attribute *papr_nd_attributes[] = {
 	&dev_attr_flags.attr,
@@ -916,6 +930,7 @@ static struct attribute *papr_nd_attributes[] = {

 static struct attribute_group papr_nd_attribute_group = {
 	.name = "papr",
+	.is_visible = papr_nd_attribute_visible,
 	.attrs = papr_nd_attributes,
 };

@@ -931,7 +946,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
 	struct nd_region_desc ndr_desc;
 	unsigned long dimm_flags;
 	int target_nid, online_nid;
-	ssize_t stat_size;

 	p->bus_desc.ndctl = papr_scm_ndctl;
 	p->bus_desc.module = THIS_MODULE;
@@ -1016,16 +1030,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
 	list_add_tail(&p->region_list, &papr_nd_regions);
 	mutex_unlock(&papr_ndr_lock);

-	/* Try retriving the stat buffer and see if its supported */
-	stat_size = drc_pmem_query_stats(p, NULL, 0);
-	if (stat_size > 0) {
-		p->stat_buffer_len = stat_size;
-		dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
-			p->stat_buffer_len);
-	} else {
-		dev_info(&p->pdev->dev, "Dimm performance stats unavailable\n");
-	}
-
 	return 0;

 err:	nvdimm_bus_unregister(p->bus);
@@ -1102,6 +1106,7 @@ static int papr_scm_probe(struct platform_device *pdev)
 	u64 blocks, block_size;
 	struct papr_scm_priv *p;
 	const char *uuid_str;
+	ssize_t stat_size;
 	u64 uuid[2];
 	int rc;

@@ -1179,6 +1184,14 @@ static int papr_scm_probe(struct platform_device *pdev)
 	p->res.name = pdev->name;
 	p->res.flags = IORESOURCE_MEM;

+	/* Try retriving the stat buffer and see if its supported */
+	stat_size = drc_pmem_query_stats(p, NULL, 0);
+	if (stat_size > 0) {
+		p->stat_buffer_len = stat_size;
+		dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
+			p->stat_buffer_len);
+	}
+
 	rc = papr_scm_nvdimm_init(p);
 	if (rc)
 		goto err2;
--
2.31.1
Re: [PATCH] ACPI: NFIT: Fix support for variable 'SPA' structure size
Hi Dan,

On Fri, May 7, 2021 at 9:33 AM Dan Williams wrote:
>
> ACPI 6.4 introduced the "SpaLocationCookie" to the NFIT "System Physical
> Address (SPA) Range Structure". The presence of that new field is
> indicated by the ACPI_NFIT_LOCATION_COOKIE_VALID flag. Pre-ACPI-6.4
> firmware implementations omit the flag and maintain the original size of
> the structure.
>
> Update the implementation to check that flag to determine the size
> rather than the ACPI 6.4 compliant definition of 'struct
> acpi_nfit_system_address' from the Linux ACPICA definitions.
>
> Update the test infrastructure for the new expectations as well, i.e.
> continue to emulate the ACPI 6.3 definition of that structure.
>
> Without this fix the kernel fails to validate 'SPA' structures and this
> leads to a crash in nfit_get_smbios_id() since that routine assumes that
> SPAs are valid if it finds valid SMBIOS tables.
>
>     BUG: unable to handle page fault for address: ffffffffffffffa8
>     [..]
>     Call Trace:
>      skx_get_nvdimm_info+0x56/0x130 [skx_edac]
>      skx_get_dimm_config+0x1f5/0x213 [skx_edac]
>      skx_register_mci+0x132/0x1c0 [skx_edac]
>
> Cc: Bob Moore
> Cc: Erik Kaneda
> Fixes: cf16b05c607b ("ACPICA: ACPI 6.4: NFIT: add Location Cookie field")

Do you want me to apply this (as the commit being fixed went in
through the ACPI tree)?

If you'd rather take care of it yourself:

Reviewed-by: Rafael J. Wysocki

> Reported-by: Yi Zhang
> Tested-by: Yi Zhang
> Signed-off-by: Dan Williams
> ---
>
> Rafael, I can take this through nvdimm.git after -rc1, but if you are
> sending any fixes to Linus based on your merge baseline between now and
> Monday, please pick up this one.
>
>  drivers/acpi/nfit/core.c         | 15 ++
>  tools/testing/nvdimm/test/nfit.c | 42 +++---
>  2 files changed, 36 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
> index 958aaac869e8..23d9a09d7060 100644
> --- a/drivers/acpi/nfit/core.c
> +++ b/drivers/acpi/nfit/core.c
> @@ -686,6 +686,13 @@ int nfit_spa_type(struct acpi_nfit_system_address *spa)
>  	return -1;
>  }
>
> +static size_t sizeof_spa(struct acpi_nfit_system_address *spa)
> +{
> +	if (spa->flags & ACPI_NFIT_LOCATION_COOKIE_VALID)
> +		return sizeof(*spa);
> +	return sizeof(*spa) - 8;
> +}
> +
>  static bool add_spa(struct acpi_nfit_desc *acpi_desc,
>  		struct nfit_table_prev *prev,
>  		struct acpi_nfit_system_address *spa)
> @@ -693,22 +700,22 @@ static bool add_spa(struct acpi_nfit_desc *acpi_desc,
>  	struct device *dev = acpi_desc->dev;
>  	struct nfit_spa *nfit_spa;
>
> -	if (spa->header.length != sizeof(*spa))
> +	if (spa->header.length != sizeof_spa(spa))
>  		return false;
>
>  	list_for_each_entry(nfit_spa, &prev->spas, list) {
> -		if (memcmp(nfit_spa->spa, spa, sizeof(*spa)) == 0) {
> +		if (memcmp(nfit_spa->spa, spa, sizeof_spa(spa)) == 0) {
>  			list_move_tail(&nfit_spa->list, &acpi_desc->spas);
>  			return true;
>  		}
>  	}
>
> -	nfit_spa = devm_kzalloc(dev, sizeof(*nfit_spa) + sizeof(*spa),
> +	nfit_spa = devm_kzalloc(dev, sizeof(*nfit_spa) + sizeof_spa(spa),
>  			GFP_KERNEL);
>  	if (!nfit_spa)
>  		return false;
>  	INIT_LIST_HEAD(&nfit_spa->list);
> -	memcpy(nfit_spa->spa, spa, sizeof(*spa));
> +	memcpy(nfit_spa->spa, spa, sizeof_spa(spa));
>  	list_add_tail(&nfit_spa->list, &acpi_desc->spas);
>  	dev_dbg(dev, "spa index: %d type: %s\n",
>  			spa->range_index,
> diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
> index 9b185bf82da8..54f367cbadae 100644
> --- a/tools/testing/nvdimm/test/nfit.c
> +++ b/tools/testing/nvdimm/test/nfit.c
> @@ -1871,9 +1871,16 @@ static void smart_init(struct nfit_test *t)
>  	}
>  }
>
> +static size_t sizeof_spa(struct acpi_nfit_system_address *spa)
> +{
> +	/* until spa location cookie support is added... */
> +	return sizeof(*spa) - 8;
> +}
> +
>  static int nfit_test0_alloc(struct nfit_test *t)
>  {
> -	size_t nfit_size = sizeof(struct acpi_nfit_system_address) * NUM_SPA
> +	struct acpi_nfit_system_address *spa = NULL;
> +	size_t nfit_size = sizeof_spa(spa) * NUM_SPA
>  		+ sizeof(struct acpi_nfit_memory_map) * NUM_MEM
>  		+ sizeof(struct acpi_nfit_control_region) * NUM_DCR
>  		+ offsetof(struct acpi_nfit_control_region,
> @@ -1937,7 +1944,8 @@ static int nfit_test0_alloc(struct nfit_test *t)
>
>  static int nfit_test1_alloc(struct nfit_test *t)
>  {
> -	size_t nfit_size = sizeof(struct acpi_nfit_system_address) * 2
> +	struct acpi_nfit_system_address *spa = NULL;
> +	size_t nfit_size = sizeof_
Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
On 07.05.21 01:16, Nick Kossifidis wrote:
> On 2021-05-06 20:05, James Bottomley wrote:
>> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>> Also, there is a way to still read that memory when root by
>>>
>>> 1. Having kdump active (which would often be the case, but maybe not
>>>    to dump user pages)
>>> 2. Triggering a kernel crash (easy via proc as root)
>>> 3. Waiting for the reboot after kdump created the dump and then
>>>    reading the content from disk.
>>
>> Anything that can leave physical memory intact but boot to a kernel
>> where the missing direct map entry is restored could theoretically
>> extract the secret. However, it's not exactly going to be a stealthy
>> extraction ...
>>
>>> Or, as an attacker, load a custom kexec() kernel and read memory from
>>> the new environment. Of course, the latter two are advanced
>>> mechanisms, but they are possible when root.
>>>
>>> We might be able to mitigate, for example, by zeroing out secretmem
>>> pages before booting into the kexec kernel, if we care :)
>>
>> I think we could handle it by marking the region, yes, and a zero on
>> shutdown might be useful ... it would prevent all warm reboot type
>> attacks.
>
> I had similar concerns about recovering secrets with kdump, and
> considered cleaning up keyrings before jumping to the new kernel. The
> problem is we can't provide guarantees in that case, once the kernel
> has crashed and we are on our way to run crashkernel, we can't be sure
> we can reliably zero-out anything, the more code we add to that path
> the

Well, I think it depends. Assume we do the following

1) Zero out any secretmem pages when handing them back to the buddy.
   (alternative: init_on_free=1) -- if not already done, I didn't check
   the code.

2) On kdump(), zero out all allocated secretmem. It'd be easier if we'd
   just allocated from a fixed physical memory area; otherwise we have
   to walk process page tables or use a PFN walker. And zeroing out
   secretmem pages without a direct mapping is a different challenge.

Now, during 2) it can happen that

a) We crash in our clearing code (e.g., something is seriously messed
   up) and fail to start the kdump kernel. That's actually good;
   instead of leaking data we fail hard.

b) We don't find all secretmem pages, for example, because process page
   tables are messed up or something messed up our memmap (if we'd use
   that to identify secretmem pages via a PFN walker somehow).

But for the simple cases (e.g., malicious root tries to crash the
kernel via /proc/sysrq-trigger) both a) and b) wouldn't apply.

Obviously, if an admin wanted to mitigate right now, he would want to
disable kdump completely, meaning any attempt to load a crashkernel
would fail and could not be enabled again for that kernel (also not via
a cmdline an attacker could modify to reboot into a system with the
option for a crashkernel). Disabling kdump in the kernel when secretmem
pages are allocated is one approach, although sub-optimal.

> more risky it gets. However during reboot/normal kexec() we should do
> some cleanup, it makes sense and secretmem can indeed be useful in
> that case.
>
> Regarding loading custom kexec() kernels, we mitigate this with the
> kexec file-based API where we can verify the signature of the loaded
> kimage (assuming the system runs a kernel provided by a trusted 3rd
> party and we've maintained a chain of trust since booting).

For example in VMs (like QEMU), we often don't clear physical memory
during a reboot. So if an attacker manages to load a kernel that you
can trick into reading random physical memory areas, we can leak
secretmem data I think.

And there might be ways to achieve that just using the cmdline, not
necessarily loading a different kernel. For example, if you limit the
kernel footprint ("mem=256M") and relax the strict iomem checks
("strict_iomem_checks=relaxed"), you can just extract that memory via
/dev/mem if I am not wrong.

So as an attacker, modify the (grub) cmdline to "mem=256M
strict_iomem_checks=relaxed", reboot, and read all memory via /dev/mem.
Or load a signed kexec kernel with that cmdline and boot into it.

Interesting problem :)

--
Thanks,

David / dhildenb
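[A minimal sketch of the /dev/mem extraction step described above —
illustration only, not from the thread; it assumes a kernel booted
with "mem=256M strict_iomem_checks=relaxed" so RAM above the mem=
limit is readable, and the physical address used is hypothetical:]

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	off_t phys = 0x10000000;	/* hypothetical physical address */
	uint64_t val;
	int fd = open("/dev/mem", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}
	/* pread() at the physical offset; with relaxed checks this
	 * succeeds even for ordinary RAM. */
	if (pread(fd, &val, sizeof(val), phys) != (ssize_t)sizeof(val)) {
		perror("pread");
		close(fd);
		return 1;
	}
	printf("%016llx\n", (unsigned long long)val);
	close(fd);
	return 0;
}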
[PATCH] ACPI: NFIT: Fix support for variable 'SPA' structure size
ACPI 6.4 introduced the "SpaLocationCookie" to the NFIT "System Physical
Address (SPA) Range Structure". The presence of that new field is
indicated by the ACPI_NFIT_LOCATION_COOKIE_VALID flag. Pre-ACPI-6.4
firmware implementations omit the flag and maintain the original size of
the structure.

Update the implementation to check that flag to determine the size
rather than the ACPI 6.4 compliant definition of 'struct
acpi_nfit_system_address' from the Linux ACPICA definitions.

Update the test infrastructure for the new expectations as well, i.e.
continue to emulate the ACPI 6.3 definition of that structure.

Without this fix the kernel fails to validate 'SPA' structures and this
leads to a crash in nfit_get_smbios_id() since that routine assumes that
SPAs are valid if it finds valid SMBIOS tables.

    BUG: unable to handle page fault for address: ffffffffffffffa8
    [..]
    Call Trace:
     skx_get_nvdimm_info+0x56/0x130 [skx_edac]
     skx_get_dimm_config+0x1f5/0x213 [skx_edac]
     skx_register_mci+0x132/0x1c0 [skx_edac]

Cc: Bob Moore
Cc: Erik Kaneda
Fixes: cf16b05c607b ("ACPICA: ACPI 6.4: NFIT: add Location Cookie field")
Reported-by: Yi Zhang
Tested-by: Yi Zhang
Signed-off-by: Dan Williams
---

Rafael, I can take this through nvdimm.git after -rc1, but if you are
sending any fixes to Linus based on your merge baseline between now and
Monday, please pick up this one.

 drivers/acpi/nfit/core.c         | 15 ++
 tools/testing/nvdimm/test/nfit.c | 42 +++---
 2 files changed, 36 insertions(+), 21 deletions(-)

diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 958aaac869e8..23d9a09d7060 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -686,6 +686,13 @@ int nfit_spa_type(struct acpi_nfit_system_address *spa)
 	return -1;
 }

+static size_t sizeof_spa(struct acpi_nfit_system_address *spa)
+{
+	if (spa->flags & ACPI_NFIT_LOCATION_COOKIE_VALID)
+		return sizeof(*spa);
+	return sizeof(*spa) - 8;
+}
+
 static bool add_spa(struct acpi_nfit_desc *acpi_desc,
 		struct nfit_table_prev *prev,
 		struct acpi_nfit_system_address *spa)
@@ -693,22 +700,22 @@ static bool add_spa(struct acpi_nfit_desc *acpi_desc,
 	struct device *dev = acpi_desc->dev;
 	struct nfit_spa *nfit_spa;

-	if (spa->header.length != sizeof(*spa))
+	if (spa->header.length != sizeof_spa(spa))
 		return false;

 	list_for_each_entry(nfit_spa, &prev->spas, list) {
-		if (memcmp(nfit_spa->spa, spa, sizeof(*spa)) == 0) {
+		if (memcmp(nfit_spa->spa, spa, sizeof_spa(spa)) == 0) {
 			list_move_tail(&nfit_spa->list, &acpi_desc->spas);
 			return true;
 		}
 	}

-	nfit_spa = devm_kzalloc(dev, sizeof(*nfit_spa) + sizeof(*spa),
+	nfit_spa = devm_kzalloc(dev, sizeof(*nfit_spa) + sizeof_spa(spa),
 			GFP_KERNEL);
 	if (!nfit_spa)
 		return false;
 	INIT_LIST_HEAD(&nfit_spa->list);
-	memcpy(nfit_spa->spa, spa, sizeof(*spa));
+	memcpy(nfit_spa->spa, spa, sizeof_spa(spa));
 	list_add_tail(&nfit_spa->list, &acpi_desc->spas);
 	dev_dbg(dev, "spa index: %d type: %s\n",
 			spa->range_index,
diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
index 9b185bf82da8..54f367cbadae 100644
--- a/tools/testing/nvdimm/test/nfit.c
+++ b/tools/testing/nvdimm/test/nfit.c
@@ -1871,9 +1871,16 @@ static void smart_init(struct nfit_test *t)
 	}
 }

+static size_t sizeof_spa(struct acpi_nfit_system_address *spa)
+{
+	/* until spa location cookie support is added... */
+	return sizeof(*spa) - 8;
+}
+
 static int nfit_test0_alloc(struct nfit_test *t)
 {
-	size_t nfit_size = sizeof(struct acpi_nfit_system_address) * NUM_SPA
+	struct acpi_nfit_system_address *spa = NULL;
+	size_t nfit_size = sizeof_spa(spa) * NUM_SPA
 		+ sizeof(struct acpi_nfit_memory_map) * NUM_MEM
 		+ sizeof(struct acpi_nfit_control_region) * NUM_DCR
 		+ offsetof(struct acpi_nfit_control_region,
@@ -1937,7 +1944,8 @@ static int nfit_test0_alloc(struct nfit_test *t)

 static int nfit_test1_alloc(struct nfit_test *t)
 {
-	size_t nfit_size = sizeof(struct acpi_nfit_system_address) * 2
+	struct acpi_nfit_system_address *spa = NULL;
+	size_t nfit_size = sizeof_spa(spa) * 2
 		+ sizeof(struct acpi_nfit_memory_map) * 2
 		+ offsetof(struct acpi_nfit_control_region, window_size) * 2;
 	int i;
@@ -2000,7 +2008,7 @@ static void nfit_test0_setup(struct nfit_test *t)
 	 */
 	spa = nfit_buf;
 	spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS;
-	spa->header.length = sizeof(*spa);
+	spa->header.length = sizeof_spa(spa);
 	memcpy(spa->rang