On Fri, Dec 07, 2018 at 03:06:36PM +, Jonathan Cameron wrote:
> On Thu, 6 Dec 2018 19:20:45 -0500
> Jerome Glisse wrote:
>
> > On Thu, Dec 06, 2018 at 04:48:57PM -0700, Logan Gunthorpe wrote:
> > >
> > >
> > > On 2018-12-06 4:38 p.m., Dave Hansen wrote:
> > > > On 12/6/18 3:28 PM, Logan
On Thu, 6 Dec 2018 19:20:45 -0500
Jerome Glisse wrote:
> On Thu, Dec 06, 2018 at 04:48:57PM -0700, Logan Gunthorpe wrote:
> >
> >
> > On 2018-12-06 4:38 p.m., Dave Hansen wrote:
> > > On 12/6/18 3:28 PM, Logan Gunthorpe wrote:
> > >> I didn't think this was meant to describe actual real
On Thu, Dec 06, 2018 at 04:48:57PM -0700, Logan Gunthorpe wrote:
>
>
> On 2018-12-06 4:38 p.m., Dave Hansen wrote:
> > On 12/6/18 3:28 PM, Logan Gunthorpe wrote:
> >> I didn't think this was meant to describe actual real world performance
> >> between all of the links. If that's the case all of
On Thu, Dec 06, 2018 at 03:09:21PM -0800, Dave Hansen wrote:
> On 12/6/18 2:39 PM, Jerome Glisse wrote:
> > No, if the 4 sockets are connected in a ring fashion, i.e.:
> > Socket0 - Socket1
> >    |         |
> > Socket3 - Socket2
> >
> > Then you have 4 links:
> > link0:
On 2018-12-06 4:38 p.m., Dave Hansen wrote:
> On 12/6/18 3:28 PM, Logan Gunthorpe wrote:
>> I didn't think this was meant to describe actual real world performance
>> between all of the links. If that's the case all of this seems like a
>> pipe dream to me.
>
> The HMAT discussions (that I was
On 12/6/18 3:28 PM, Logan Gunthorpe wrote:
> I didn't think this was meant to describe actual real world performance
> between all of the links. If that's the case all of this seems like a
> pipe dream to me.
The HMAT discussions (that I was a part of at least) settled on just
trying to describe
On 12/6/18 3:28 PM, Logan Gunthorpe wrote:
> These patches are really tied to world view #1. But, the HMAT is really
> tied to world view #1.
Whoops, should have been "the HMAT is really tied to world view #2"
On 2018-12-06 4:09 p.m., Dave Hansen wrote:
> This looks great. But, we don't _have_ this kind of information for any
> system that I know about or any system available in the near future.
>
> We basically have two different world views:
> 1. The system is described point-to-point. A
On 12/6/18 2:39 PM, Jerome Glisse wrote:
> No, if the 4 sockets are connected in a ring fashion, i.e.:
> Socket0 - Socket1
>    |         |
> Socket3 - Socket2
>
> Then you have 4 links:
> link0: socket0 socket1
> link1: socket1 socket2
> link3: socket2 socket3
> link4: socket3
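The ring Jerome describes can be modeled directly as a list of links rather than a full node-to-node matrix. The following is an illustrative sketch only (not code from the thread, and the names are invented for the example):

```python
# Illustrative sketch: the 4-socket ring from the quoted mail, modeled as
# a list of links so per-link properties (bandwidth, latency) could hang
# off each entry instead of filling an NxN matrix.
sockets = ["socket0", "socket1", "socket2", "socket3"]

# Ring wiring: each socket links to its neighbor, the last wraps around.
links = [(sockets[i], sockets[(i + 1) % len(sockets)])
         for i in range(len(sockets))]

def hops(a, b):
    """Minimum number of ring links to traverse between two sockets."""
    d = abs(sockets.index(a) - sockets.index(b))
    return min(d, len(sockets) - d)

print(links)                       # four links, including socket3 - socket0
print(hops("socket0", "socket2"))  # opposite corners of the ring -> 2
```

With per-link attributes attached to each tuple, a consumer can answer "is socket0 to socket2 one hop or two" from 4 link records instead of a 16-cell matrix, which is the point of describing links rather than pairs.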
On Thu, Dec 06, 2018 at 02:04:46PM -0800, Dave Hansen wrote:
> On 12/6/18 12:11 PM, Logan Gunthorpe wrote:
> >> My concern with having folks do per-program parsing, *and* having a huge
> >> amount of data to parse makes it unusable. The largest systems will
> >> literally have hundreds of
On 12/6/18 12:11 PM, Logan Gunthorpe wrote:
>> My concern with having folks do per-program parsing, *and* having a huge
>> amount of data to parse makes it unusable. The largest systems will
>> literally have hundreds of thousands of objects in /sysfs, even in a
>> single directory. That makes
On Thu, Dec 06, 2018 at 03:27:06PM -0500, Jerome Glisse wrote:
> On Thu, Dec 06, 2018 at 11:31:21AM -0800, Dave Hansen wrote:
> > On 12/6/18 11:20 AM, Jerome Glisse wrote:
> > >>> For case 1 you can pre-parse stuff but this can be done by helper
> > >>> library
> > >> How would that work? Would
On Thu, Dec 06, 2018 at 11:31:21AM -0800, Dave Hansen wrote:
> On 12/6/18 11:20 AM, Jerome Glisse wrote:
> >>> For case 1 you can pre-parse stuff but this can be done by helper library
> >> How would that work? Would each user/container/whatever do this once?
> >> Where would they keep the
On 2018-12-06 12:31 p.m., Dave Hansen wrote:
> On 12/6/18 11:20 AM, Jerome Glisse wrote:
For case 1 you can pre-parse stuff but this can be done by helper library
>>> How would that work? Would each user/container/whatever do this once?
>>> Where would they keep the pre-parsed stuff? How
On 12/6/18 11:20 AM, Jerome Glisse wrote:
>>> For case 1 you can pre-parse stuff but this can be done by helper library
>> How would that work? Would each user/container/whatever do this once?
>> Where would they keep the pre-parsed stuff? How do they manage their
>> cache if the topology
On Thu, Dec 06, 2018 at 10:25:08AM -0800, Dave Hansen wrote:
> On 12/5/18 9:53 AM, Jerome Glisse wrote:
> > No. So there are 2 kinds of applications:
> > 1) average one: i am using device {1, 3, 9} give me best memory for
> >    those devices
> ...
> >
> > For case 1 you can pre-parse stuff
On 12/5/18 9:53 AM, Jerome Glisse wrote:
> No. So there are 2 kinds of applications:
> 1) average one: i am using device {1, 3, 9} give me best memory for
>    those devices
...
>
> For case 1 you can pre-parse stuff but this can be done by helper library
How would that work? Would each
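One way the "helper library" idea from the quoted exchange could look: parse the topology once, cache the digested form, and let the average application ask only the high-level question ("best memory for devices {1, 3, 9}"). Everything below — the cache path, the scoring policy, the dictionary shape — is an assumption for illustration, not an interface anyone in the thread proposed:

```python
import json
import os
import tempfile

# Hypothetical cache location; a real helper library would need to key
# this per boot and invalidate it on topology hotplug events.
CACHE = os.path.join(tempfile.gettempdir(), "hms-topology-cache.json")

def load_topology(parse_fn):
    """Return the parsed topology, reusing the cached copy when present."""
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            return json.load(f)
    topo = parse_fn()              # the expensive sysfs walk happens once
    with open(CACHE, "w") as f:
        json.dump(topo, f)
    return topo

def best_memory(topo, devices):
    """Pick the target whose worst-case bandwidth to the devices is highest."""
    def score(target):
        return min(target["bandwidth"].get(str(d), 0) for d in devices)
    return max(topo["targets"], key=score)["name"]
```

This separates the two application kinds in Jerome's framing: the "average" program calls `best_memory()` and never touches sysfs, while an expert program can still walk the raw tree itself.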
On Wed, Dec 05, 2018 at 09:27:09AM -0800, Dave Hansen wrote:
> On 12/4/18 6:13 PM, Jerome Glisse wrote:
> > On Tue, Dec 04, 2018 at 05:06:49PM -0800, Dave Hansen wrote:
> >> OK, but there are 1024*1024 matrix cells on a systems with 1024
> >> proximity domains (ACPI term for NUMA node). So it
On 12/4/18 6:13 PM, Jerome Glisse wrote:
> On Tue, Dec 04, 2018 at 05:06:49PM -0800, Dave Hansen wrote:
>> OK, but there are 1024*1024 matrix cells on a systems with 1024
>> proximity domains (ACPI term for NUMA node). So it sounds like you are
>> proposing a million-directory approach.
>
> No,
On Wed, Dec 05, 2018 at 04:57:17PM +0530, Aneesh Kumar K.V wrote:
> On 12/5/18 12:19 AM, Jerome Glisse wrote:
>
> > Above example is for migrate. Here is an example for how the
> > topology is use today:
> >
> > Application knows that the platform is running on have 16
> > GPU split
On 12/5/18 12:19 AM, Jerome Glisse wrote:
Above example is for migrate. Here is an example of how the
topology is used today:
Application knows that the platform it is running on has 16
GPUs split into 2 groups of 8 GPUs each. GPUs in each group can
access each other's memory with
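The placement decision in that 16-GPU example reduces to a group test: a workload whose GPUs all sit in one group can rely on intra-group (fast) links. A minimal sketch, with the group size taken from the quoted example and the function names invented here:

```python
GROUP_SIZE = 8   # two groups of 8 GPUs, per the quoted example

def group_of(gpu):
    """Which of the two groups a GPU id (0-15) belongs to."""
    return gpu // GROUP_SIZE

def same_group(gpus):
    """True if every GPU in the set can use intra-group (fast) links."""
    return len({group_of(g) for g in gpus}) == 1

print(same_group([0, 3, 7]))   # all in group 0
print(same_group([0, 8]))      # spans both groups
```

This is the kind of question an application asks the topology today: not "give me the full bandwidth matrix" but "do my GPUs share a fast domain".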
On Tue, Dec 04, 2018 at 05:06:49PM -0800, Dave Hansen wrote:
> On 12/4/18 4:15 PM, Jerome Glisse wrote:
> > On Tue, Dec 04, 2018 at 03:54:22PM -0800, Dave Hansen wrote:
> >> Basically, is sysfs the right place to even expose this much data?
> >
> > I definitely want to avoid the memoryX mistake.
On 2018-12-04 4:57 p.m., Jerome Glisse wrote:
> On Tue, Dec 04, 2018 at 01:37:56PM -0800, Dave Hansen wrote:
>> Yeah, our NUMA mechanisms are for managing memory that the kernel itself
>> manages in the "normal" allocator and supports a full feature set on.
>> That has a bunch of implications,
On 12/4/18 4:15 PM, Jerome Glisse wrote:
> On Tue, Dec 04, 2018 at 03:54:22PM -0800, Dave Hansen wrote:
>> Basically, is sysfs the right place to even expose this much data?
>
> I definitely want to avoid the memoryX mistake. So i do not want to
> see one link directory per device. Taking my
On Tue, Dec 04, 2018 at 03:58:23PM -0800, Dave Hansen wrote:
> On 12/4/18 1:57 PM, Jerome Glisse wrote:
> > Fully correct mind if i steal that perfect summary description next time
> > i post ? I am so bad at explaining thing :)
>
> Go for it!
>
> > Intention is to allow program to do everything
On Tue, Dec 04, 2018 at 03:54:22PM -0800, Dave Hansen wrote:
> On 12/3/18 3:34 PM, jgli...@redhat.com wrote:
> > This patchset uses the above scheme to expose system topology through
> > sysfs under /sys/bus/hms/ with:
> > - /sys/bus/hms/devices/v%version-%id-target/ : a target memory,
> >
On 12/4/18 1:57 PM, Jerome Glisse wrote:
> Fully correct mind if i steal that perfect summary description next time
> i post ? I am so bad at explaining thing :)
Go for it!
> Intention is to allow program to do everything they do with mbind() today
> and tomorrow with the HMAT patchset and on
On 12/3/18 3:34 PM, jgli...@redhat.com wrote:
> This patchset uses the above scheme to expose system topology through
> sysfs under /sys/bus/hms/ with:
> - /sys/bus/hms/devices/v%version-%id-target/ : a target memory,
> each has a UID and the usual values in that folder (node id,
>
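The `/sys/bus/hms/devices/v%version-%id-target/` naming pattern is taken from the cover letter quoted above; how user space would consume it is a sketch under assumptions — the helper name and the attributes read here are invented for illustration:

```python
import os
import re

HMS_DEVICES = "/sys/bus/hms/devices"

# Directory-name pattern from the cover letter: v%version-%id-target
TARGET_RE = re.compile(r"^v(?P<version>\d+)-(?P<id>\d+)-target$")

def list_targets(root=HMS_DEVICES):
    """Return (version, uid) pairs for every target-memory directory."""
    targets = []
    if not os.path.isdir(root):
        return targets  # kernel without the proposed HMS support
    for name in sorted(os.listdir(root)):
        m = TARGET_RE.match(name)
        if m:
            targets.append((int(m.group("version")), int(m.group("id"))))
    return targets
```

Encoding the version in the directory name is what lets a library like this skip entries whose scheme it does not understand instead of misparsing them.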
On Tue, Dec 04, 2018 at 01:37:56PM -0800, Dave Hansen wrote:
> On 12/4/18 10:49 AM, Jerome Glisse wrote:
> >> Also, could you add a simple, example program for how someone might use
> >> this? I got lost in all the new sysfs and ioctl gunk. Can you
> >> characterize how this would work with the
On 12/4/18 10:49 AM, Jerome Glisse wrote:
>> Also, could you add a simple, example program for how someone might use
>> this? I got lost in all the new sysfs and ioctl gunk. Can you
> >> characterize how this would work with the *existing* NUMA interfaces that
>> we have?
> That is the issue i can
On Tue, Dec 04, 2018 at 10:54:10AM -0800, Dave Hansen wrote:
> On 12/4/18 10:49 AM, Jerome Glisse wrote:
> > Policy is same kind of story, this email is long enough now :) But
> > i can write one down if you want.
>
> Yes, please. I'd love to see the code.
>
> We'll do the same on the "HMAT"
On 12/4/18 10:49 AM, Jerome Glisse wrote:
> Policy is same kind of story, this email is long enough now :) But
> i can write one down if you want.
Yes, please. I'd love to see the code.
We'll do the same on the "HMAT" side and we can compare notes.
On Tue, Dec 04, 2018 at 10:02:55AM -0800, Dave Hansen wrote:
> On 12/3/18 3:34 PM, jgli...@redhat.com wrote:
> > This means that it is no longer sufficient to consider a flat view
> > for each node in a system but for maximum performance we need to
> > account for all of this new memory but also
On 12/3/18 3:34 PM, jgli...@redhat.com wrote:
> This means that it is no longer sufficient to consider a flat view
> for each node in a system but for maximum performance we need to
> account for all of this new memory but also for system topology.
> This is why this proposal is unlike the HMAT
On Tue, Dec 04, 2018 at 01:14:14PM +0530, Aneesh Kumar K.V wrote:
> On 12/4/18 5:04 AM, jgli...@redhat.com wrote:
> > From: Jérôme Glisse
[...]
> > This patchset uses the above scheme to expose system topology through
> > sysfs under /sys/bus/hms/ with:
> > -
On 12/4/18 5:04 AM, jgli...@redhat.com wrote:
From: Jérôme Glisse
Heterogeneous memory systems are becoming more and more the norm; in
those systems there is not only the main system memory for each node,
but also device memory and/or a memory hierarchy to consider. Device
memory can come from a