Re: [PATCH 1/4] drivers core: Introduce CPU type sysfs interface

2020-11-18 Thread Brice Goglin
On 17/11/2020 at 16:55, Brice Goglin wrote: > On 12/11/2020 at 11:49, Greg Kroah-Hartman wrote: >> On Thu, Nov 12, 2020 at 10:10:57AM +0100, Brice Goglin wrote: >>> On 12/11/2020 at 07:42, Greg Kroah-Hartman wrote: >>>> On Thu, Nov 12, 2020 at 07:19:

Re: [PATCH 1/4] drivers core: Introduce CPU type sysfs interface

2020-11-17 Thread Brice Goglin
On 12/11/2020 at 11:49, Greg Kroah-Hartman wrote: > On Thu, Nov 12, 2020 at 10:10:57AM +0100, Brice Goglin wrote: >> On 12/11/2020 at 07:42, Greg Kroah-Hartman wrote: >>> On Thu, Nov 12, 2020 at 07:19:48AM +0100, Brice Goglin wrote: >>>> >>>>

Re: [PATCH 1/4] drivers core: Introduce CPU type sysfs interface

2020-11-12 Thread Brice Goglin
On 12/11/2020 at 07:42, Greg Kroah-Hartman wrote: > On Thu, Nov 12, 2020 at 07:19:48AM +0100, Brice Goglin wrote: >> On 07/10/2020 at 07:15, Greg Kroah-Hartman wrote: >>> On Tue, Oct 06, 2020 at 08:14:47PM -0700, Ricardo Neri wrote: >>>> On Tue, Oct 06, 2020 a

Re: [PATCH 1/4] drivers core: Introduce CPU type sysfs interface

2020-11-11 Thread Brice Goglin
On 07/10/2020 at 07:15, Greg Kroah-Hartman wrote: > On Tue, Oct 06, 2020 at 08:14:47PM -0700, Ricardo Neri wrote: >> On Tue, Oct 06, 2020 at 09:37:44AM +0200, Greg Kroah-Hartman wrote: >>> On Mon, Oct 05, 2020 at 05:57:36PM -0700, Ricardo Neri wrote: On Sat, Oct 03, 2020 at 10:53:45AM

Re: [RFC PATCH] topology: Represent clusters of CPUs within a die.

2020-10-19 Thread Brice Goglin
On 19/10/2020 at 16:16, Morten Rasmussen wrote: > >>> If there is a provable benefit of having interconnect grouping >>> information, I think it would be better represented by a distance matrix >>> like we have for NUMA. >> There have been some discussions in various forums about how to >>

Re: [RFC PATCH] topology: Represent clusters of CPUs within a die.

2020-10-19 Thread Brice Goglin
On 19/10/2020 at 14:50, Peter Zijlstra wrote: > On Mon, Oct 19, 2020 at 01:32:26PM +0100, Jonathan Cameron wrote: >> On Mon, 19 Oct 2020 12:35:22 +0200 >> Peter Zijlstra wrote: >>> I'm confused by all of this. The core level is exactly what you seem to >>> want. >> It's the level above the

Re: [RFC PATCH] topology: Represent clusters of CPUs within a die.

2020-10-19 Thread Brice Goglin
On 16/10/2020 at 17:27, Jonathan Cameron wrote: > Both ACPI and DT provide the ability to describe additional layers of > topology between that of individual cores and higher level constructs > such as the level at which the last level cache is shared. > In ACPI this can be represented in PPTT

Re: [PATCHv2 1/2] hmat: Register memory-side cache after parsing

2019-07-01 Thread Brice Goglin
IMM node). Tested-by: Brice Goglin > --- > v1 -> v2: > > Fixed multi-level caches, and no caches. v1 incorrectly assumed only a level > 1 always existed (Brice). > > drivers/acpi/hmat/hmat.c | 70 > +--- > 1 file cha

Re: [PATCH 0/14] v2 multi-die/package topology support

2019-04-12 Thread Brice Goglin
On 12/04/2019 at 21:52, Len Brown wrote: I think I prefer 's/threads/cpus/g' on that. Threads makes me think SMT, and I don't think there's any guarantee the part in question will have SMT on. >>> I think 'threads' is a bit confusing as well. We seem to be using 'cpu' >>>

Re: [PATCH] hmat: Register attributes for memory hot add

2019-04-10 Thread Brice Goglin
-and-tested-by: Brice Goglin Just one minor typo below. On 09/04/2019 at 23:44, Keith Busch wrote: > Some types of memory nodes that HMAT describes may not be online at the > time we initially parse their nodes' tables. If the node should be set > to online later, as can happen when using PM

Re: [RFC PATCH 0/10] Another Approach to Use PMEM as NUMA Node

2019-03-25 Thread Brice Goglin
On 25/03/2019 at 20:29, Dan Williams wrote: > Perhaps "path" might be a suitable replacement identifier rather than > type. I.e. memory that originates from an ACPI.NFIT root device is > likely "pmem". Could work. What kind of "path" would we get for other types of memory? (DDR,

Re: [RFC PATCH 0/10] Another Approach to Use PMEM as NUMA Node

2019-03-25 Thread Brice Goglin
On 25/03/2019 at 17:56, Dan Williams wrote: > > I'm generally against the concept that a "pmem" or "type" flag should > indicate anything about the expected performance of the address range. > The kernel should explicitly look to the HMAT for performance data and > not otherwise make type-based

Re: [RFC PATCH 0/10] Another Approach to Use PMEM as NUMA Node

2019-03-25 Thread Brice Goglin
On 23/03/2019 at 05:44, Yang Shi wrote: > With Dave Hansen's patches merged into Linus's tree > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c221c0b0308fd01d9fb33a16f64d2fd95f8830a4 > > PMEM could be hot plugged as NUMA node now. But, how to use PMEM as NUMA

Re: [PATCHv8 00/10] Heterogenous memory node attributes

2019-03-11 Thread Brice Goglin
cellaneous typos, editorial clarifications, and whitespace fixups. > > Merged to most current linux-next. > > Added received review, test, and ack by's. Tested-by: Brice Goglin I tested this series with several manually-created HMATs. I already have user-space support

Re: [PATCHv6 07/10] acpi/hmat: Register processor domain to its memory

2019-03-07 Thread Brice Goglin
On 14/02/2019 at 18:10, Keith Busch wrote: > If the HMAT Subsystem Address Range provides a valid processor proximity > domain for a memory domain, or a processor domain matches the performance > access of the valid processor proximity domain, register the memory > target with that initiator so

Re: [PATCH 03/11] x86 topology: Add CPUID.1F multi-die/package support

2019-02-24 Thread Brice Goglin
On 19/02/2019 at 04:40, Len Brown wrote: > diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c > index ccd1f2a8e557..4250a87f57db 100644 > --- a/arch/x86/kernel/smpboot.c > +++ b/arch/x86/kernel/smpboot.c > @@ -393,6 +393,7 @@ static bool match_smt(struct cpuinfo_x86 *c, struct >

Re: [PATCHv6 06/10] node: Add memory-side caching attributes

2019-02-22 Thread Brice Goglin
On 14/02/2019 at 18:10, Keith Busch wrote: > System memory may have caches to help improve access speed to frequently > requested address ranges. While the system provided cache is transparent > to the software accessing these memory ranges, applications can optimize > their own access based on

Re: [PATCH 05/11] x86 topology: export die_siblings

2019-02-21 Thread Brice Goglin
On 21/02/2019 at 08:41, Len Brown wrote: > > Here is my list of applications that care about the new CPUID leaf > and the concepts of packages and die: > > cpuid > lscpu > x86_energy_perf_policy > turbostat You may add hwloc/lstopo which is used by most HPC runtimes (including your

Re: [PATCH 05/11] x86 topology: export die_siblings

2019-02-20 Thread Brice Goglin
On 19/02/2019 at 04:40, Len Brown wrote: > From: Len Brown > > like core_siblings, except it shows which die are in the same package. > > This is needed for lscpu(1) to correctly display die topology. > > Signed-off-by: Len Brown > Cc: linux-...@vger.kernel.org > Signed-off-by: Len Brown >

Re: [PATCHv6 00/10] Heterogenous memory node attributes

2019-02-18 Thread Brice Goglin
On 14/02/2019 at 18:10, Keith Busch wrote: > == Changes since v5 == > > Updated HMAT parsing to account for the recently released ACPI 6.3 > changes. > > HMAT attribute calculation overflow checks. > > Fixed memory leak if HMAT parse fails. > > Minor change to the patch order. All

Re: [PATCH 5/5] dax: "Hotplug" persistent memory for use like normal RAM

2019-02-13 Thread Brice Goglin
On 13/02/2019 at 09:43, Brice Goglin wrote: > On 13/02/2019 at 09:24, Dan Williams wrote: >> On Wed, Feb 13, 2019 at 12:12 AM Brice Goglin wrote: >>> On 13/02/2019 at 01:30, Dan Williams wrote: >>>> On Tue, Feb 12, 2019 at 11:59 AM Brice Goglin >>

Re: [PATCH 5/5] dax: "Hotplug" persistent memory for use like normal RAM

2019-02-13 Thread Brice Goglin
On 13/02/2019 at 09:24, Dan Williams wrote: > On Wed, Feb 13, 2019 at 12:12 AM Brice Goglin wrote: >> On 13/02/2019 at 01:30, Dan Williams wrote: >>> On Tue, Feb 12, 2019 at 11:59 AM Brice Goglin wrote: >>>> # ndctl disable-region all >>>> # ndctl

Re: [PATCH 5/5] dax: "Hotplug" persistent memory for use like normal RAM

2019-02-13 Thread Brice Goglin
On 13/02/2019 at 01:30, Dan Williams wrote: > On Tue, Feb 12, 2019 at 11:59 AM Brice Goglin wrote: >> # ndctl disable-region all >> # ndctl zero-labels all >> # ndctl enable-region region0 >> # ndctl create-namespace -r region0 -t pmem -m devdax >> { >

Re: [PATCH v2] device-dax: Auto-bind device after successful new_id

2019-02-13 Thread Brice Goglin
Alexander Duyck > Reported-by: Brice Goglin > Cc: Dave Hansen > Signed-off-by: Dan Williams > --- > Changes since v1: > * Fix the remove_id path since do_id_store() is shared with the new_id > path (Brice) > > Brice, this works for me. I'll push it out on libnvdimm-pending, or

Re: [PATCH 5/5] dax: "Hotplug" persistent memory for use like normal RAM

2019-02-12 Thread Brice Goglin
On 11/02/2019 at 17:22, Dave Hansen wrote: > On 2/9/19 3:00 AM, Brice Goglin wrote: >> I've used your patches on fake hardware (memmap=xx!yy) with an older >> nvdimm-pending branch (without Keith's patches). It worked fine. This >> time I am running on real Intel har

Re: [PATCHv4 10/13] node: Add memory caching attributes

2019-02-12 Thread Brice Goglin
On 11/02/2019 at 16:23, Keith Busch wrote: > On Sun, Feb 10, 2019 at 09:19:58AM -0800, Jonathan Cameron wrote: >> On Sat, 9 Feb 2019 09:20:53 +0100 >> Brice Goglin wrote: >> >>> Hello Keith >>> >>> Could we ever have a single side cache in

Re: [PATCHv4 10/13] node: Add memory caching attributes

2019-02-09 Thread Brice Goglin
Hello Keith Could we ever have a single side cache in front of two NUMA nodes? I don't see a way to find that out in the current implementation. Would we have an "id" and/or "nodemap" bitmask in the sidecache structure? Thanks Brice On 16/01/2019 at 18:58, Keith Busch wrote: > System
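In the merged version of this work, the memory-side cache attributes discussed here end up as per-node sysfs directories. Below is a minimal sketch of walking that layout; the directory and attribute names (memory_side_cache/indexN with size and line_size files) are assumed from the series, and the fake tree with invented values stands in for /sys/devices/system/node so the example is self-contained:

```shell
# Sketch of walking the per-node memory_side_cache sysfs layout.
# A fake tree under a temp dir stands in for /sys/devices/system/node,
# and the size/line_size values below are invented for illustration.
root=$(mktemp -d)
mkdir -p "$root/node0/memory_side_cache/index1"
echo 1073741824 > "$root/node0/memory_side_cache/index1/size"
echo 64 > "$root/node0/memory_side_cache/index1/line_size"

for idx in "$root"/node*/memory_side_cache/index*; do
  # Recover the node name two directory levels up from the index dir.
  node=$(basename "$(dirname "$(dirname "$idx")")")
  printf '%s %s: size=%s line_size=%s\n' \
    "$node" "$(basename "$idx")" "$(cat "$idx/size")" "$(cat "$idx/line_size")"
done
rm -rf "$root"
```

On a real system the same loop would run against /sys/devices/system/node directly, reporting one line per cache level of each node.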

Re: [PATCH 0/9] Allow persistent memory to be used like normal RAM

2018-12-03 Thread Brice Goglin
On 22/10/2018 at 22:13, Dave Hansen wrote: > Persistent memory is cool. But, currently, you have to rewrite > your applications to use it. Wouldn't it be cool if you could > just have it show up in your system like normal RAM and get to > it like a slow blob of memory? Well... have I got the

Re: [PATCH] ACPI/PPTT: Handle architecturally unknown cache types

2018-09-13 Thread Brice Goglin
On 13/09/2018 at 11:35, Sudeep Holla wrote: > On Thu, Sep 13, 2018 at 10:39:10AM +0100, James Morse wrote: >> Hi Brice, >> >> On 13/09/18 06:51, Brice Goglin wrote: >>> On 12/09/2018 at 11:49, Sudeep Holla wrote: >>>>> Yes. Without this change,

Re: [PATCH] ACPI/PPTT: Handle architecturally unknown cache types

2018-09-12 Thread Brice Goglin
On 12/09/2018 at 11:49, Sudeep Holla wrote: > >> Yes. Without this change, we hit the lscpu error in the commit message, >> and get zero output about the system. We don't even get information >> about the caches which are architecturally specified or how many cpus >> are present. With this

Re: [PATCH v7 13/13] arm64: topology: divorce MC scheduling domain from core_siblings

2018-03-08 Thread Brice Goglin
> Is there a good reason for diverging instead of adjusting the > core_sibling mask? On x86 the core_siblings mask is defined by the last > level cache span so they don't have this issue. No. core_siblings is defined as the list of cores that have the same physical_package_id (see the doc of
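As the reply above notes, core_siblings is defined by physical_package_id rather than by cache span. A self-contained sketch of that grouping follows; the per-CPU package ids are invented sample input, and on a real system they would be read from /sys/devices/system/cpu/cpu*/topology/physical_package_id:

```shell
# Group logical CPUs by physical_package_id to recover the core_siblings sets.
# The printf line below is sample input standing in for the sysfs reads.
printf 'cpu%d %d\n' 0 0 1 0 2 1 3 1 |
  awk '{ pkg[$2] = pkg[$2] ? pkg[$2] "," $1 : $1 }
       END { for (p in pkg) printf "package %s: %s\n", p, pkg[p] }'
```

With the sample input this prints one line per package, e.g. "package 0: cpu0,cpu1" and "package 1: cpu2,cpu3" (awk does not guarantee the order of the two lines).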

Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT

2017-12-30 Thread Brice Goglin
On 30/12/2017 at 07:58, Matthew Wilcox wrote: > On Wed, Dec 27, 2017 at 10:10:34AM +0100, Brice Goglin wrote: >>> Perhaps we can enlist /proc/iomem or a similar enumeration interface >>> to tell userspace the NUMA node and whether the kernel thinks it has >>

Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT

2017-12-27 Thread Brice Goglin
On 22/12/2017 at 23:53, Dan Williams wrote: > On Thu, Dec 21, 2017 at 12:31 PM, Brice Goglin <brice.gog...@gmail.com> wrote: >> On 20/12/2017 at 23:41, Ross Zwisler wrote: > [..] >> Hello >> >> I can confirm that HPC runtimes are going to use these patche

Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT

2017-12-21 Thread Brice Goglin
On 20/12/2017 at 23:41, Ross Zwisler wrote: > On Wed, Dec 20, 2017 at 02:29:56PM -0800, Dan Williams wrote: >> On Wed, Dec 20, 2017 at 1:24 PM, Ross Zwisler >> wrote: >>> On Wed, Dec 20, 2017 at 01:16:49PM -0800, Matthew Wilcox wrote: On Wed, Dec 20, 2017 at

Re: [PATCH 1/2] x86/CPU/AMD: Present package as die instead of socket

2017-06-27 Thread Brice Goglin
On 27/06/2017 16:21, Thomas Gleixner wrote: > On Tue, 27 Jun 2017, Suravee Suthikulpanit wrote: >> On 6/27/17 17:48, Borislav Petkov wrote: >>> On Tue, Jun 27, 2017 at 01:40:52AM -0500, Suravee Suthikulpanit wrote: However, this is not the case on AMD family17h multi-die processor

Re: [RFC PATCH 2/3] Implement sysfs based cpuinfo for x86 cpus.

2017-06-09 Thread Brice Goglin
On 09/06/2017 15:28, Thomas Renninger wrote: > On Thursday, June 08, 2017 08:24:01 PM Greg KH wrote: >> On Thu, Jun 08, 2017 at 06:56:14PM +0200, Felix Schnizlein wrote: >>> --- >>> arch/x86/kernel/Makefile| 1 + >>> arch/x86/kernel/cpuinfo_sysfs.c | 166 >

Re: AMD Bulldozer topology regression since 4.6

2017-01-03 Thread Brice Goglin
On 29/11/2016 22:02, Brice Goglin wrote: > On 29/11/2016 20:39, Borislav Petkov wrote: >> Does that fix it? >> >> Patch is against latest tip/master because we have some more changes in >> that area. > I tested the second patch on top of 4.8.11, it

Re: bnx2 breaks Dell R815 BMC IPMI since 4.8

2016-11-29 Thread Brice Goglin
On 30 November 2016 00:28:08 GMT+01:00, Gavin Shan <gws...@linux.vnet.ibm.com> wrote: >On Tue, Nov 29, 2016 at 07:57:51AM +0100, Brice Goglin wrote: >>Hello >> >>My Dell PowerEdge R815 doesn't have IPMI anymore when I boot a 4.8 >>kernel, the BMC doesn'

Re: AMD Bulldozer topology regression since 4.6

2016-11-29 Thread Brice Goglin
On 29/11/2016 20:39, Borislav Petkov wrote: > Does that fix it? > > Patch is against latest tip/master because we have some more changes in > that area. I tested the second patch on top of 4.8.11, it brings core_id back to where it was before 4.6, thanks. Reported-and-tested-by:

AMD Bulldozer topology regression since 4.6

2016-11-28 Thread Brice Goglin
Hello Since Linux 4.6 (and still in 4.9-rc5 at least), both AMD Bulldozer cores of a single dual-core compute unit report the same core_id: $ cat /sys/devices/system/cpu/cpu{?,??}/topology/core_id 0 0 1 1 2 2 3 0 3 [...] Before 4.5 (and for a very long time), the kernel reported different
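The duplicated core_id values in the report above can be flagged mechanically. Here is a sketch; the sample ids reproduce the buggy output quoted in the report, and a real system would instead read /sys/devices/system/cpu/cpu*/topology/core_id. Since these Bulldozer CPUs expose no SMT, any core_id reported by more than one logical CPU exhibits the regression:

```shell
# Count how many logical CPUs report each core_id; the printf line is
# sample data standing in for the per-CPU sysfs topology/core_id reads.
printf '%s\n' 0 0 1 1 2 2 3 3 |
  sort -n | uniq -c |
  awk '$1 > 1 { printf "core_id %s shared by %s CPUs\n", $2, $1 }'
```

With the sample data every core_id is reported twice, so the script prints four "shared by 2 CPUs" lines; on a kernel before 4.6 (or with the fix discussed below in the thread) it would print nothing.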

Re: Why is SECTOR_SIZE = 512 inside kernel ?

2015-08-18 Thread Brice Goglin
On 17/08/2015 15:54, Theodore Ts'o wrote: > > It's cast in stone. There are too many places all over the kernel, > especially in a huge number of file systems, which assume that the > sector size is 512 bytes. So above the block layer, the sector size > is always going to be 512. Could this

Re: [PATCH v3 3/3] libnvdimm: Add sysfs numa_node to NVDIMM devices

2015-06-19 Thread Brice Goglin
Do you have local_cpus and local_cpulist attributes as well? User-space tools such as hwloc use those for binding near I/O devices, although I guess we could have some CPU-less NVDIMM NUMA nodes? Brice On 19/06/2015 20:18, Toshi Kani wrote: > Add support of sysfs 'numa_node' to I/O-related

Re: Topology updates and NUMA-level sched domains

2015-04-08 Thread Brice Goglin
On 07/04/2015 21:41, Peter Zijlstra wrote: > No, that's very much not the same. Even if it were dealing with hotplug > it would still assume the cpu to return to the same node. > > But mostly people do not even bother to handle hotplug. > You said userspace assumes the cpu<->node relation is a

Re: [PATCH] x86: new topology for multi-NUMA-node CPUs

2014-09-21 Thread Brice Goglin
On 18/09/2014 21:33, Dave Hansen wrote: > After this set, there are only 2 sets of core siblings, which > is what we expect for a 2-socket system. > > # cat cpu*/topology/physical_package_id | sort | uniq -c > 18 0 > 18 1 > # cat cpu*/topology/core_siblings_list | sort | uniq -c >

Re: [PATCH] x86: Consider multiple nodes in a single socket to be "sane"

2014-09-16 Thread Brice Goglin
On 16/09/2014 05:29, Peter Zijlstra wrote: > >> This also fixes sysfs because CPUs with the same 'physical_package_id' >> in /sys/devices/system/cpu/cpu*/topology/ are not listed together >> in the same 'core_siblings_list'. This violates a statement from >>

Re: NUMA processor numbering

2013-10-03 Thread Brice Goglin
On 03/10/2013 12:46, Stephan von Krawczynski wrote: > Ok, let me re-phrase the question a bit. > Is it really possible what you see here: > > processor : 0 > vendor_id : GenuineIntel > cpu family : 6 > model : 45 > model name : Intel(R) Xeon(R) CPU E5-2660 0 @

Re: dmaengine: make dma_channel_rebalance() NUMA aware

2013-08-19 Thread Brice Goglin
goes to processor X and to its hyperthread sibling). Signed-off-by: Brice Goglin --- drivers/dma/dmaengine.c | 64 +++- 1 file changed, 37 insertions(+), 27 deletions(-) Index: linux-3.11-rc3/drivers/dma/dmaengine.c ===

Re: dmaengine: make dma_channel_rebalance() NUMA aware

2013-08-19 Thread Brice Goglin
used, so this won't hurt. On the above SuperMicro machine, channels are still allocated the same. On the Dells, there are no locality issues anymore (MEMCPY channel X goes to processor X and to its hyperthread sibling). Signed-off-by: Brice Goglin --- drivers/dma/dmaengine.c | 64 +

Re: ioatdma: add ioat_raid_enabled module parameter

2013-08-02 Thread Brice Goglin
ups such as the 64-byte alignment restriction on legacy DMA operations (introduced in commit f26df1a1 as a workaround for silicon errata). Signed-off-by: Brice Goglin --- drivers/dma/ioat/dma_v3.c | 24 +--- 1 file changed, 1 insertion(+), 23 deletions(-) Index: b/drivers

Re: ioatdma: add ioat_raid_enabled module parameter

2013-08-02 Thread Brice Goglin
ations (introduced in commit f26df1a1 as a workaround for silicon errata). Signed-off-by: Brice Goglin --- drivers/dma/ioat/dma_v3.c |5 + 1 file changed, 1 insertion(+), 4 deletions(-) Index: b/drivers/dma/ioat/dma_v3.c ===

Re: ioatdma: add ioat_raid_enabled module parameter

2013-08-02 Thread Brice Goglin
s now disabled by default on buggy 3.2 platforms. Passing ioat_raid_enabled=1 force-enables it on all platforms (previous behavior). Passing ioat_raid_enabled=0 force-disables it everywhere. When RAID offload is disabled, legacy operations (memcpy, etc.) can work again without alignment restrict

ioatdma: add ioat_raid_enabled module parameter

2013-07-31 Thread Brice Goglin
operations (memcpy, etc.) can work without alignment restrictions anymore. Signed-off-by: Brice Goglin --- drivers/dma/ioat/dma_v3.c |9 +++-- 1 file changed, 7 insertions(+), 2 deletions(-) Index: b/drivers/dma/ioat/dma_v3.c

dmaengine: make dma_channel_rebalance() NUMA aware

2013-07-31 Thread Brice Goglin
annels are still allocated the same. On the Dells, there are no locality issue anymore (each MEMCPY channel goes to both hyperthreads of a single core of the local socket). Signed-off-by: Brice Goglin --- drivers/dma/dmaengine.c | 64 +++- 1 file chang

Re: MTRR use in drivers

2013-06-23 Thread Brice Goglin
On 21/06/2013 07:00, H. Peter Anvin wrote: > An awful lot of drivers, mostly DRI drivers, are still mucking with > MTRRs directly as opposed to using ioremap_wc() or similar interfaces. > In addition to the architecture dependency, this is really undesirable > because MTRRs are a limited

Re: [PATCH v3 04/10] thp: do_huge_pmd_wp_page(): handle huge zero page

2012-10-02 Thread Brice Goglin
On 02/10/2012 17:19, Kirill A. Shutemov wrote: > From: "Kirill A. Shutemov" > > On right access to huge zero page we alloc a new page and clear it. > s/right/write/ ? Brice -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to

Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Brice Goglin
Andrew Morton wrote: > What is the status of getting infiniband to use this facility? > > How important is this feature to KVM? > > To xpmem? > > Which other potential clients have been identified and how important is it > to those? > As I said when Andrea posted the first patch series, I used

[PATCH][I/OAT]: Remove duplicate assignation in dma_skb_copy_datagram_iovec

2008-02-13 Thread Brice Goglin
[I/OAT]: Remove duplicate assignation in dma_skb_copy_datagram_iovec No need to compute copy twice in the frags loop in dma_skb_copy_datagram_iovec(). Signed-off-by: Brice Goglin <[EMAIL PROTECTED]> --- user_dma.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ne

Re: Purpose of numa_node?

2008-02-13 Thread Brice Goglin
Yinghai Lu wrote: >>> Have a look at the above link. I don't get -1. I get 0 everywhere, while >>> I should get 1 for some devices. And if I unplug/replug a device using >>> fakephp, numa_node becomes correct (1 instead of 0). This just looks >>> like the code is there but things are initialized

Re: Linux 2.6.25-rc1

2008-02-11 Thread Brice Goglin
Linus Torvalds wrote: > - Lots of cleanups from the x86 merge (making more and more use of common > files), but also the big page attribute stuff is in and caused a fair > amount of churn, and while most of the issues should have been very > obvious and all got fixed, this is

Re: Purpose of numa_node?

2008-01-31 Thread Brice Goglin
Yinghai Lu wrote: > On Jan 31, 2008 5:42 AM, Brice Goglin <[EMAIL PROTECTED]> wrote: > >> It works fine on regular machines such as dual opterons. However, I >> noticed recently that it was wrong on some quad-opteron machines (see >> http://marc.info/?l=linux-

Re: Purpose of numa_node?

2008-01-31 Thread Brice Goglin
Paul Mundt wrote: On Wed, Jan 30, 2008 at 07:48:13PM -0500, Chris Snook wrote: While pondering ways to optimize I/O and swapping on large NUMA machines, I noticed that the numa_node field in struct device isn't actually used anywhere. We just have a couple dozen lines of code to

Re: [patch] PCI: disable the MSI of AMD RS690

2008-01-24 Thread Brice Goglin
Shane Huang wrote: This patch recover Tejun's commit 4be8f906435a6af241821ab5b94b2b12cb7d57d8 because there is one MSI bug on RS690+SB600 board which will lead to boot failure. This bug is NOT same as the one in SB700 SATA controller, quirk_msi_intx_disable_bug does not work to SB600.

Re: [PATCH] mmu notifiers #v2

2008-01-16 Thread Brice Goglin
Andrea Arcangeli wrote: This patch is last version of a basic implementation of the mmu notifiers. In short when the linux VM decides to free a page, it will unmap it from the linux pagetables. However when a page is mapped not just by the regular linux ptes, but also from the shadow
