On Fri, 24 Apr 2015, Jerome Glisse wrote:
What exactly is the more advanced version's benefit? What are the features
that the other platforms do not provide?
Transparent access to device memory from the CPU: you can map any of the GPU
memory inside the CPU and have the whole cache
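A minimal sketch of what such transparent access could look like from
userspace, assuming a hypothetical /dev/gpu0 character device that exposes
coherent device memory; the node name and size are made up, and the real
interface under discussion (CAPI/HMM) is not shown:

    /* Map device (GPU) memory into the CPU's address space so it can be
     * dereferenced like ordinary memory. With cache-coherent hardware
     * such as CAPI the mapping could be fully cachable, behaving like
     * normal memory from the processor's point of view. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/dev/gpu0", O_RDWR);     /* hypothetical node */
            if (fd < 0)
                    return 1;

            size_t len = 1UL << 20;
            int *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return 1;

            p[0] = 42;                      /* plain store, no copy API */
            printf("%d\n", p[0]);

            munmap(p, len);
            close(fd);
            return 0;
    }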
On 04/24/2015 10:30 AM, Christoph Lameter wrote:
On Thu, 23 Apr 2015, Paul E. McKenney wrote:
If by entire industry you mean everyone who might want to use hardware
acceleration, for example, including mechanical computer-aided design,
I am skeptical.
The industry designs GPUs with super
On Fri, Apr 24, 2015 at 11:03:52AM -0500, Christoph Lameter wrote:
On Fri, 24 Apr 2015, Jerome Glisse wrote:
On Fri, Apr 24, 2015 at 09:29:12AM -0500, Christoph Lameter wrote:
On Thu, 23 Apr 2015, Jerome Glisse wrote:
No, this has not been solved properly. Today's solution is doing an
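What Jerome is contrasting against is, presumably, today's explicit
allocate-and-copy model. A sketch of that model, with hypothetical gpu_*
calls standing in for any real driver API (CUDA, OpenCL, ...); they are
declared but deliberately not defined:

    /* Status quo: the application must allocate a separate device buffer
     * and copy explicitly. gpu_alloc(), gpu_memcpy_to_device() and
     * gpu_free() are hypothetical stand-ins. */
    #include <stddef.h>

    typedef unsigned long gpu_ptr_t;  /* device address, not a CPU pointer */

    gpu_ptr_t gpu_alloc(size_t len);
    void gpu_memcpy_to_device(gpu_ptr_t dst, const void *src, size_t len);
    void gpu_free(gpu_ptr_t p);

    void offload(const double *host_data, size_t n)
    {
            /* Two allocations for one logical object... */
            gpu_ptr_t dev = gpu_alloc(n * sizeof(double));

            /* ...and an explicit copy. Any pointers embedded in the data
             * would be meaningless on the device, which is exactly the
             * problem with pointer-rich structures. */
            gpu_memcpy_to_device(dev, host_data, n * sizeof(double));

            /* launch kernel against 'dev', copy results back, free */
            gpu_free(dev);
    }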
On Fri, 24 Apr 2015, Jerome Glisse wrote:
Right this is how things work and you could improve on that. Stay with the
scheme. Why would that not work if you map things the same way in both
environments if both accelerator and host processor can access each
others memory?
Again and
On Fri, 24 Apr 2015, Jerome Glisse wrote:
Still no answer as to why is that not possible with the current scheme?
You keep on talking about pointers and I keep on responding that this is a
matter of making the address space compatible on both sides.
So if I do that in a naive way, how can
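Christoph's scheme, as I read it, amounts to reserving the same virtual
range on both sides ahead of time. A sketch of the host half, assuming an
agreed-upon base address and a hypothetical /dev/accel0 node; Jerome's
objection is that nothing guarantees such a range is free in an arbitrary,
unmodified application:

    /* "Compatible address space" scheme: host and accelerator agree on a
     * fixed virtual base, so a pointer stored in shared data means the
     * same thing on each side. MAP_FIXED silently replaces anything
     * already mapped in the range, which is part of why this is fragile. */
    #include <fcntl.h>
    #include <sys/mman.h>

    #define SHARED_BASE ((void *)0x700000000000UL)  /* agreed-upon VA */
    #define SHARED_LEN  (64UL << 20)

    void *map_shared_region(void)
    {
            int fd = open("/dev/accel0", O_RDWR);   /* hypothetical node */
            if (fd < 0)
                    return MAP_FAILED;

            /* The accelerator runtime would install the same translation
             * in the device MMU for this range. */
            return mmap(SHARED_BASE, SHARED_LEN, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_FIXED, fd, 0);
    }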
On Thu, 2015-04-23 at 09:10 -0500, Christoph Lameter wrote:
> On Thu, 23 Apr 2015, Benjamin Herrenschmidt wrote:
>
> > > Anyone
> > > wanting performance (and that is the prime reason to use a GPU) would
> > > switch this off because the latencies are otherwise not controllable and
> > > those may impact performance severely. There are typically multiple
> > > parallel strands
On Thu, 2015-04-23 at 11:25 -0400, Austin S Hemmelgarn wrote:
> Looking at this whole conversation, all I see is two different views on
> how to present the asymmetric multiprocessing arrangements that have
> become commonplace in today's systems to userspace. Your model favors
> performance,
And another update, again diffs followed by the full document. The
diffs are against the version at https://lkml.org/lkml/2015/4/22/235.
Thanx, Paul
diff --git
On Thu, Apr 23, 2015 at 09:12:38AM -0500, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Paul E. McKenney wrote:
>
> > Agreed, the use case that Jerome is thinking of differs from yours.
> > You would not (and should not) tolerate things like page faults because
> > it would destroy your worst-case response times. I believe that Jerome
> > is more interested in throughput
On Thu, Apr 23, 2015 at 09:20:55AM -0500, Christoph Lameter wrote:
> On Thu, 23 Apr 2015, Benjamin Herrenschmidt wrote:
>
> > > There are hooks in glibc where you can replace the memory
> > > management of the apps if you want that.
> >
> > We don't control the app. Let's say we are doing a plugin for libfoo
> > which accelerates "foo" using GPUs.
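For concreteness, the glibc-level replacement Christoph is pointing at can
also be done without modifying the application at all, via symbol
interposition. A minimal LD_PRELOAD sketch (build with
gcc -shared -fPIC hook.c -o hook.so -ldl, run with LD_PRELOAD=./hook.so app);
where a device-aware allocator would plug in is left hypothetical:

    /* Interpose malloc for an unmodified application. Real interposers
     * must guard against dlsym() itself allocating; omitted for brevity. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>

    static void *(*real_malloc)(size_t);

    void *malloc(size_t size)
    {
            if (!real_malloc)
                    real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

            /* A device-memory-aware allocator could be consulted here.
             * Ben's objection stands regardless: this covers malloc, but
             * not mmap'd files, shmem, stack or static memory. */
            return real_malloc(size);
    }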
On Thu, Apr 23, 2015 at 09:38:15AM -0500, Christoph Lameter wrote:
> On Thu, 23 Apr 2015, Benjamin Herrenschmidt wrote:
[ . . . ]
> > It might not be *your* model based on *your* application but that doesn't
> > mean
> > it's not there, and isn't relevant.
>
> Sadly this is the way that an entire
On 04/22/2015 01:14 PM, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Jerome Glisse wrote:
>
>> Glibc hooks will not work, this is about having the same address space on
>> CPU and GPU/accelerator while allowing backing memory to be regular
>> system memory or device memory, all this in a transparent manner to
>> userspace program and library.
On Thu, Apr 23, 2015 at 09:38:15AM -0500, Christoph Lameter wrote:
> On Thu, 23 Apr 2015, Benjamin Herrenschmidt wrote:
[...]
> > You have something in memory, whether you got it via malloc, mmap'ing a
> > file, shmem with some other application, ... and you want to work on it
> > with the
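Ben's point, spelled out: the data can arrive through any allocation path,
so no single allocator hook sees all of it. A small illustration (error
handling omitted; process() is a stand-in for the accelerated library
entry point):

    /* The same library call receives plain pointers regardless of where
     * the memory came from. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    void process(const double *data, size_t n);  /* accelerated library call */

    void examples(void)
    {
            /* 1: plain heap memory */
            double *a = malloc(4096);

            /* 2: a file mapped into memory */
            int fd = open("input.dat", O_RDONLY);
            double *b = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);

            /* 3: shared memory from some other application */
            int sfd = shm_open("/simdata", O_RDWR, 0600);
            double *c = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_SHARED, sfd, 0);

            process(a, 512);
            process(b, 512);
            process(c, 512);
    }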
On 04/21/2015 08:50 PM, Christoph Lameter wrote:
> On Tue, 21 Apr 2015, Jerome Glisse wrote:
>> So big use case here: let's say you have an application that relies on a
>> scientific library that does matrix computation. Your application simply
>> uses malloc and gives pointers to this scientific library.
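The shape of this use case in code: the application knows nothing about the
GPU, and the library receives pointers it did not allocate.
matrix_multiply() is a hypothetical stand-in for a BLAS-like routine:

    /* Only the library knows a GPU exists. With a shared, coherent
     * address space it could offload using these very pointers; without
     * one it must allocate device buffers and copy. */
    #include <stdlib.h>

    void matrix_multiply(const double *a, const double *b, double *c, int n);

    int main(void)
    {
            int n = 1024;
            double *a = malloc((size_t)n * n * sizeof(double));
            double *b = malloc((size_t)n * n * sizeof(double));
            double *c = malloc((size_t)n * n * sizeof(double));

            /* ... fill a and b ... */

            matrix_multiply(a, b, c, n);    /* unchanged either way */

            free(a); free(b); free(c);
            return 0;
    }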
On 2015-04-23 10:25, Christoph Lameter wrote:
On Thu, 23 Apr 2015, Benjamin Herrenschmidt wrote:
They are via MMIO space. The big differences here are that via CAPI the
memory can be fully cachable and thus have the same characteristics as
normal memory from the processor point of view, and the device shares
the MMU with the host.
On Thu, 23 Apr 2015, Benjamin Herrenschmidt wrote:
> In fact I'm quite surprised, what we want to achieve is the most natural
> way from an application perspective.
Well the most natural thing would be if the beast would just do what I
tell it in plain English. But then I would not have my job.
On Thu, 23 Apr 2015, Benjamin Herrenschmidt wrote:
> > There are hooks in glibc where you can replace the memory
> > management of the apps if you want that.
>
> We don't control the app. Let's say we are doing a plugin for libfoo
> which accelerates "foo" using GPUs.
There are numerous examples
On Wed, 2015-04-22 at 13:17 -0500, Christoph Lameter wrote:
>
> > But again let me stress that applications that want to be in control will
> > stay in control. If you want to make the decision yourself about where
> > things should end up, then nothing in all we are proposing will preclude
> > you from
On Wed, 2015-04-22 at 12:14 -0500, Christoph Lameter wrote:
>
> > Bottom line is we want today's anonymous, shared, or file-mapped memory
> > to stay the only kinds of memory that exist, and we want to choose the
> > backing store of each of those kinds for better placement depending
> > on how memory is used
On Wed, 2015-04-22 at 11:16 -0500, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Paul E. McKenney wrote:
>
> > I completely agree that some critically important use cases, such as
> > yours, will absolutely require that the application explicitly choose
> > memory placement and have the memory stay there.
On Wed, 2015-04-22 at 10:25 -0500, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Benjamin Herrenschmidt wrote:
>
> > Right, it doesn't look at all like what we want.
>
> It's definitely a way to map memory that is outside of the kernel-managed
> pool into a user space process. For that matter any device driver could be
> doing this as well. The point is that we
On Wed, Apr 22, 2015 at 01:17:58PM -0500, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Jerome Glisse wrote:
>
> > Now if you have the exact same address space, then structures you have on
> > the CPU are viewed in exactly the same way on the GPU, and you can start
> > porting libraries to leverage the GPU without having to change a single
> > line of code inside many many many
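What "exact same address space" buys, concretely: pointer-linked structures
work on the device without any translation or marshalling pass. A sketch,
with gpu_launch() as a hypothetical way to run a device kernel:

    /* An ordinary linked list built on the host can be walked on the GPU
     * as-is: every embedded 'next' pointer is a virtual address that is
     * valid on both sides. */
    #include <stdlib.h>

    struct node {
            double value;
            struct node *next;      /* valid on CPU and GPU alike */
    };

    void gpu_launch(const char *kernel, void *arg);  /* hypothetical */

    void sum_on_gpu(struct node *head)
    {
            gpu_launch("sum_list", head);  /* device chases head->next directly */
    }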
On Wed, 22 Apr 2015, Paul E. McKenney wrote:
> I completely agree that some critically important use cases, such as
> yours, will absolutely require that the application explicitly choose
> memory placement and have the memory stay there.
Most of what you are trying to do here is already there
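Christoph's "already there" presumably refers to the existing NUMA
placement interfaces. A sketch using mbind(2), on the theory that coherent
device memory could appear as just another memory node (addr must be
page-aligned; requires libnuma's <numaif.h>):

    /* Explicit placement with today's API: bind a range to node 1 and it
     * stays there. If device memory showed up as a NUMA node, the same
     * call could pin data into it. */
    #include <numaif.h>

    int place_on_node1(void *addr, unsigned long len)
    {
            unsigned long nodemask = 1UL << 1;      /* node 1 only */

            return mbind(addr, len, MPOL_BIND, &nodemask,
                         sizeof(nodemask) * 8, 0);
    }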
On Tue, 21 Apr 2015, Paul E. McKenney wrote:
> Ben will correct me if I am wrong, but I do not believe that we are
> looking for persistent memory in this case.
DAX is a way of mapping special memory into user space. Persistence is one
possible use case. It's like the XIP that you IBMers know from
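To unpack that for readers following along: with DAX, a file on suitable
storage is mapped so that loads and stores reach the underlying memory
directly, with no page cache in between. A sketch (the path is a
placeholder for a file on a DAX-mounted filesystem):

    /* mmap a file on a DAX-capable filesystem (e.g. one mounted with
     * -o dax on a pmem device); stores through the mapping go straight
     * to the underlying memory. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>

    void *map_dax_file(void)
    {
            int fd = open("/mnt/pmem/data", O_RDWR);  /* placeholder path */
            if (fd < 0)
                    return NULL;

            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            return p == MAP_FAILED ? NULL : p;
    }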
On Wed, Apr 22, 2015 at 11:01:26AM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2015-04-21 at 19:50 -0500, Christoph Lameter wrote:
>
> > With a filesystem the migration can be controlled by the application. It
> > can copy stuff whenever it wants to. Having the OS do that behind my back
> > is not something that feels safe and secure.
On Tue, Apr 21, 2015 at 07:50:02PM -0500, Christoph Lameter wrote:
> On Tue, 21 Apr 2015, Jerome Glisse wrote:
[ . . . ]
> > Paul is working on a platform that is more advanced than the one HMM tries
> > to address, and I believe the x86 platform will not have functionality
> > such as CAPI, at least
On Tue, Apr 21, 2015 at 07:46:07PM -0400, Jerome Glisse wrote:
> On Tue, Apr 21, 2015 at 02:44:45PM -0700, Paul E. McKenney wrote:
> > Hello!
> >
> > We have some interest in hardware on devices that is cache-coherent
> > with main memory, and in migrating memory between host memory and
> > device memory.
On Tue, 2015-04-21 at 17:57 -0700, Paul E. McKenney wrote:
> On Wed, Apr 22, 2015 at 10:42:52AM +1000, Benjamin Herrenschmidt wrote:
> > On Tue, 2015-04-21 at 18:49 -0500, Christoph Lameter wrote:
> > > On Tue, 21 Apr 2015, Paul E. McKenney wrote:
> > >
> > > > Thoughts?
> > >
> > > Use DAX for memory instead of the other approaches? That way it is
> > > explicitly clear what information is put on the CAPI device.
On Tue, 2015-04-21 at 19:50 -0500, Christoph Lameter wrote:
> With a filesystem the migration can be controlled by the application. It
> can copy stuff whenever it wants to. Having the OS do that behind my back
> is not something that feels safe and secure.
But this is not something the user wants.
On Wed, Apr 22, 2015 at 10:42:52AM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2015-04-21 at 18:49 -0500, Christoph Lameter wrote:
> > On Tue, 21 Apr 2015, Paul E. McKenney wrote:
> >
> > > Thoughts?
> >
> > Use DAX for memory instead of the other approaches? That way it is
> > explicitly clear what information is put on the CAPI device.
On Tue, 21 Apr 2015, Jerome Glisse wrote:
> Memory on this device should not be considered as something special
> (even if it is). More below.
Uhh?
> So big use case here: let's say you have an application that relies on a
> scientific library that does matrix computation. Your application simply
> uses malloc and gives pointers to this scientific library.
On Tue, 2015-04-21 at 18:49 -0500, Christoph Lameter wrote:
> On Tue, 21 Apr 2015, Paul E. McKenney wrote:
>
> > Thoughts?
>
> Use DAX for memory instead of the other approaches? That way it is
> explicitly clear what information is put on the CAPI device.
Care to elaborate on what DAX is?
On Tue, 2015-04-21 at 19:46 -0400, Jerome Glisse wrote:
> On Tue, Apr 21, 2015 at 02:44:45PM -0700, Paul E. McKenney wrote:
> > Hello!
> >
> > We have some interest in hardware on devices that is cache-coherent
> > with main memory, and in migrating memory between host memory and
> > device memory. We
On Tue, Apr 21, 2015 at 06:49:29PM -0500, Christoph Lameter wrote:
> On Tue, 21 Apr 2015, Paul E. McKenney wrote:
>
> > Thoughts?
>
> Use DAX for memory instead of the other approaches? That way it is
> explicitly clear what information is put on the CAPI device.
Memory on this device should not be considered as something special
(even if it is). More below.
On Tue, 21 Apr 2015, Paul E. McKenney wrote:
> Thoughts?
Use DAX for memory instead of the other approaches? That way it is
explicitly clear what information is put on the CAPI device.
> Although such a device will provide CPUs with cache-coherent
Maybe call this coprocessor like IBM
On Tue, Apr 21, 2015 at 02:44:45PM -0700, Paul E. McKenney wrote:
> Hello!
>
> We have some interest in hardware on devices that is cache-coherent
> with main memory, and in migrating memory between host memory and
> device memory. We believe that we might not be the only ones looking
> ahead to
Hello!
We have some interest in hardware on devices that is cache-coherent
with main memory, and in migrating memory between host memory and
device memory. We believe that we might not be the only ones looking
ahead to hardware like this, so please see below for a draft of some
approaches that