On 11/22/2018 11:38 PM, Dan Williams wrote:
> On Thu, Nov 22, 2018 at 3:52 AM Anshuman Khandual
> <anshuman.khand...@arm.com> wrote:
>>
>>
>>
>> On 11/19/2018 11:07 PM, Dave Hansen wrote:
>>> On 11/18/18 9:44 PM, Anshuman Khandual wrote:
>>>> IIUC NUMA re-work in principle involves these functional changes
>>>>
>>>> 1. Enumerating compute and memory nodes in heterogeneous environment 
>>>> (short/medium term)
>>>
>>> This patch set _does_ that, though.
>>>
>>>> 2. Enumerating memory node attributes as seen from the compute nodes 
>>>> (short/medium term)
>>>
>>> It does that as well (a subset at least).
>>>
>>> It sounds like the subset that's being exposed is insufficient for you.
>>> We did that because we think doing anything but a subset in sysfs will
>>> just blow up sysfs:  MAX_NUMNODES is as high as 1024, so if we have 4
>>> attributes, that's at _least_ 1024*1024*4 files if we expose *all*
>>> combinations.
>> Each permutation need not be a separate file inside every NODE X
>> (/sys/devices/system/node/nodeX) directory. It can be a single top level
>> file enumerating the attribute values for a given (X, Y) node pair based
>> on an offset, much like /proc/pid/pagemap.
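
To make that concrete, here is a rough userspace sketch of what I mean by
offset based access, assuming a hypothetical top level file (the path,
record size and indexing below are purely illustrative, not an existing or
proposed interface):

/*
 * Illustrative only: read a packed u64 attribute record for the node
 * pair (initiator X, target Y) from a hypothetical top level file,
 * indexed by offset the same way /proc/pid/pagemap is indexed by PFN.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_NUMNODES	1024	/* worst case cited above */

static int read_node_attrs(int x, int y, uint64_t *out)
{
	/* Hypothetical file name, not an existing ABI */
	int fd = open("/sys/devices/system/node/node_attributes", O_RDONLY);
	off_t off = ((off_t)x * MAX_NUMNODES + y) * sizeof(uint64_t);
	ssize_t ret;

	if (fd < 0)
		return -1;

	ret = pread(fd, out, sizeof(*out), off);
	close(fd);
	return ret == (ssize_t)sizeof(*out) ? 0 : -1;
}
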
>>
>>>
>>> Do we agree that sysfs is unsuitable for exposing attributes in this manner?
>>>
>>
>> Yes, for individual files. But this can be worked around with offset
>> based access from a single top level global attributes file as mentioned
>> above. Is there any particular advantage in using individual files for
>> each given attribute? I was wondering whether a single unsigned long (u64)
>> could pack 8 different attributes, with each individual attribute value
>> abstracted out in 8 bits.
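
For illustration only, this is the kind of packing I am thinking of; the
attribute names and bit positions below are placeholders, not a proposed
ABI:

/*
 * Illustrative: pack 8 node-pair attributes into one u64, each
 * attribute abstracted to an 8-bit value.
 */
#include <stdint.h>

enum node_attr {
	ATTR_LATENCY,		/* placeholder attribute names */
	ATTR_BANDWIDTH,
	ATTR_PERSISTENCE,
	ATTR_RELIABILITY,
	ATTR_CACHE_LEVELS,
	ATTR_POWER,
	ATTR_RESERVED_0,
	ATTR_RESERVED_1,	/* 8 attributes x 8 bits = 64 bits */
};

static inline uint64_t attr_pack(uint64_t attrs, enum node_attr a, uint8_t val)
{
	attrs &= ~((uint64_t)0xff << (a * 8));
	return attrs | ((uint64_t)val << (a * 8));
}

static inline uint8_t attr_unpack(uint64_t attrs, enum node_attr a)
{
	return (attrs >> (a * 8)) & 0xff;
}
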
> 
> sysfs has a 4K limit, and in general I don't think there is much
> incremental value to go describe the entirety of the system from sysfs
> or anywhere else in the kernel for that matter. It's simply too much
> information to reasonably consume. Instead the kernel can describe the

I agree that it may be a fair amount of information to parse, but it is
crucial for any task on a heterogeneous system to evaluate (and probably
re-evaluate, if the task moves around) its memory and CPU binding at
runtime to make sure it has got the right one.
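
To be concrete, such a runtime check only needs existing interfaces,
roughly along these lines (this sketch assumes a glibc that provides the
getcpu() wrapper and libnuma's numaif.h; link with -lnuma):

/*
 * Sketch: a task checking which CPU/node it currently runs on and what
 * its memory policy is, so it can re-evaluate its binding after being
 * migrated. Uses existing syscalls only.
 */
#define _GNU_SOURCE
#include <numaif.h>	/* get_mempolicy() */
#include <sched.h>	/* getcpu() */
#include <stdio.h>

int main(void)
{
	unsigned int cpu, node;
	unsigned long nodemask = 0;
	int mode;

	if (getcpu(&cpu, &node))
		return 1;

	/* Which policy and nodes is this thread currently bound to? */
	if (get_mempolicy(&mode, &nodemask, sizeof(nodemask) * 8, NULL, 0))
		return 1;

	printf("cpu %u (node %u), policy %d, nodemask 0x%lx\n",
	       cpu, node, mode, nodemask);
	return 0;
}
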

> coarse boundaries and some semblance of "best" access initiator for a
> given target. That should cover the "80%" case of what applications

The current proposal just assumes that the best node is the nearest one.
This may be true for bandwidth and latency but may not be true for some
other properties. This assumption should not be baked in while defining a
new ABI.

> want to discover, for the other "20%" we likely need some userspace
> library that can go parse these platform specific information sources
> and supplement the kernel view. I also think a simpler kernel starting
> point gives us room to go pull in more commonly used attributes if it
> turns out they are useful, and avoid going down the path of exporting
> attributes that have questionable value in practice.
> 

Applications can already query platform information directly and use it
for mbind() without requiring this new interface (see the sketch below).
We are not even changing any core MM yet. So if it is just about
identifying a node's memory properties, they can be scanned from the
platform itself. I agree we would like the kernel to start adding
interfaces for multi attribute memory, but all I am saying is that the
interface has to be comprehensive. Some of the attributes are more useful
now and some less, but the new ABI has to accommodate exporting all of
them.
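
For example, once an application has identified the right target node from
platform information, it can already bind an allocation today with nothing
more than the existing interface (node 1 below is arbitrary; link with
-lnuma):

/*
 * Sketch: bind an anonymous mapping to one memory node with the
 * existing mbind() interface.
 */
#define _GNU_SOURCE
#include <numaif.h>	/* mbind(), MPOL_BIND */
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 21;
	unsigned long nodemask = 1UL << 1;	/* node 1, arbitrary */
	void *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/* Restrict this range to the chosen node */
	if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0))
		return 1;

	return 0;
}
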
