RE: Implement svm without BO concept in xe driver

2023-08-22 Thread Zeng, Oak

> -Original Message-
> From: Ruhl, Michael J 
> Sent: August 22, 2023 7:44 AM
> To: Felix Kuehling ; Zeng, Oak ;
> Dave Airlie 
> Cc: Brost, Matthew ; Thomas Hellström
> ; Philip Yang ;
> Welty, Brian ; dri-devel@lists.freedesktop.org; 
> Christian
> König ; Vishwanathapura, Niranjana
> ; intel...@lists.freedesktop.org
> Subject: RE: Implement svm without BO concept in xe driver
> 
> >-Original Message-
> >From: Felix Kuehling 
> >Sent: Monday, August 21, 2023 4:57 PM
> >To: Zeng, Oak ; Dave Airlie 
> >Cc: Brost, Matthew ; Thomas Hellström
> >; Philip Yang ;
> >Welty, Brian ; dri-devel@lists.freedesktop.org;
> >Christian König ; Vishwanathapura, Niranjana
> >; intel...@lists.freedesktop.org;
> >Ruhl, Michael J 
> >Subject: Re: Implement svm without BO concept in xe driver
> >
> >
> >On 2023-08-21 15:41, Zeng, Oak wrote:
> >>> I have thought about emulating BO allocation APIs on top of system SVM.
> >>> This was in the context of KFD where memory management is not tied into
> >>> command submissions APIs, which would add a whole other layer of
> >>> complexity. The main unsolved (unsolvable?) problem I ran into was, that
> >>> there is no way to share SVM memory as DMABufs. So there is no good
> >way
> >>> to support applications that expect to share memory in that way.
> >> Great point. I also discussed the dmabuf thing with Mike (cc'ed). dmabuf 
> >> is a
> >particular technology created specially for the BO driver (and other driver) 
> >to
> >share buffer b/t devices. Hmm/system SVM doesn't need this technology:
> >malloc'ed memory by the nature is already shared b/t different devices (in
> >one process) and CPU. We just can simply submit GPU kernel to all devices
> >with malloc'ed memory and let kmd decide the memory placement (such as
> >map in place or migrate). No need of buffer export/import in hmm/system
> >SVM world.
> >
> >I disagree. DMABuf can be used for sharing memory between processes. And
> >it can be used for sharing memory with 3rd-party devices via PCIe P2P
> >(e.g. a Mellanox NIC). You cannot easily do that with malloc'ed memory.
> >POSIX IPC requires that you know that you'll be sharing the memory at
> >allocation time. It adds overhead. And because it's file-backed, it's
> >currently incompatible with migration. And HMM currently doesn't have a
> >solution for P2P. Any access by a different device causes a migration to
> >system memory.
> 
> Hey Oak,
> 
> I think we were discussing this solution in the context of using the P2P_DMA
> feature.  This has an allocation path and a device 2 device capabilities.


I was thinking of sharing malloc'ed memory between the CPU and multiple devices 
inside one process. I thought this should work. Prompted by Felix's words above, I 
looked into more of the details, and now I agree with Felix that this doesn't work 
with hmm.

And as Felix pointed out, POSIX IPC also doesn't work with hmm. Theoretically the 
driver could do a similar migration between device memory and file-backed memory, 
just as we did with anonymous memory, but I am not sure whether people want to do 
that.

Anyway, buffer sharing with hmm/system SVM remains a big open question. I will not 
try to solve this problem for now.
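To make the file-backed vs. anonymous distinction above concrete, here is a 
minimal user-space sketch (illustration only; the shm name is made up and error 
handling is omitted). POSIX shared memory has to be set up as shared, file-backed 
memory at allocation time, while plain malloc'ed memory is anonymous, which is 
the case hmm-based migration handles today:

/* Build with: cc shm_vs_malloc.c -o shm_vs_malloc (add -lrt on older glibc). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        size_t sz = 2 * 1024 * 1024;

        /* POSIX IPC style: sharing is decided at allocation time and the
         * memory is file-backed (tmpfs) -- the case noted above as currently
         * incompatible with hmm-based migration. */
        int fd = shm_open("/svm_demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sz);
        void *shared = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* System SVM style: plain anonymous memory; nothing special happens
         * at allocation time, and an hmm-based driver can migrate it. */
        void *anon = malloc(sz);

        printf("file-backed: %p, anonymous: %p\n", shared, anon);

        free(anon);
        munmap(shared, sz);
        shm_unlink("/svm_demo");
        close(fd);
        return 0;
}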

Cheers,
Oak

> 
> Mike
> 
> 
> >Regards,
> >   Felix
> >
> >
> >>
> >> So yes from buffer sharing perspective, the design philosophy is also very
> >different.
> >>
> >> Thanks,
> >> Oak
> >>


RE: Implement svm without BO concept in xe driver

2023-08-22 Thread Ruhl, Michael J
>-Original Message-
>From: Felix Kuehling 
>Sent: Monday, August 21, 2023 4:57 PM
>To: Zeng, Oak ; Dave Airlie 
>Cc: Brost, Matthew ; Thomas Hellström
>; Philip Yang ;
>Welty, Brian ; dri-devel@lists.freedesktop.org;
>Christian König ; Vishwanathapura, Niranjana
>; intel...@lists.freedesktop.org;
>Ruhl, Michael J 
>Subject: Re: Implement svm without BO concept in xe driver
>
>
>On 2023-08-21 15:41, Zeng, Oak wrote:
>>> I have thought about emulating BO allocation APIs on top of system SVM.
>>> This was in the context of KFD where memory management is not tied into
>>> command submissions APIs, which would add a whole other layer of
>>> complexity. The main unsolved (unsolvable?) problem I ran into was, that
>>> there is no way to share SVM memory as DMABufs. So there is no good
>way
>>> to support applications that expect to share memory in that way.
>> Great point. I also discussed the dmabuf thing with Mike (cc'ed). dmabuf is a
>particular technology created specially for the BO driver (and other driver) to
>share buffer b/t devices. Hmm/system SVM doesn't need this technology:
>malloc'ed memory by the nature is already shared b/t different devices (in
>one process) and CPU. We just can simply submit GPU kernel to all devices
>with malloc'ed memory and let kmd decide the memory placement (such as
>map in place or migrate). No need of buffer export/import in hmm/system
>SVM world.
>
>I disagree. DMABuf can be used for sharing memory between processes. And
>it can be used for sharing memory with 3rd-party devices via PCIe P2P
>(e.g. a Mellanox NIC). You cannot easily do that with malloc'ed memory.
>POSIX IPC requires that you know that you'll be sharing the memory at
>allocation time. It adds overhead. And because it's file-backed, it's
>currently incompatible with migration. And HMM currently doesn't have a
>solution for P2P. Any access by a different device causes a migration to
>system memory.

Hey Oak,

I think we were discussing this solution in the context of using the P2P_DMA
feature. This has an allocation path and device-to-device capabilities.

Mike


>Regards,
>   Felix
>
>
>>
>> So yes from buffer sharing perspective, the design philosophy is also very
>different.
>>
>> Thanks,
>> Oak
>>


Re: Implement svm without BO concept in xe driver

2023-08-21 Thread Felix Kuehling



On 2023-08-21 15:41, Zeng, Oak wrote:

I have thought about emulating BO allocation APIs on top of system SVM.
This was in the context of KFD where memory management is not tied into
command submissions APIs, which would add a whole other layer of
complexity. The main unsolved (unsolvable?) problem I ran into was, that
there is no way to share SVM memory as DMABufs. So there is no good way
to support applications that expect to share memory in that way.

Great point. I also discussed the dmabuf thing with Mike (cc'ed). dmabuf is a 
particular technology created specifically for BO drivers (and other drivers) to 
share buffers between devices. Hmm/system SVM doesn't need this technology: 
malloc'ed memory is by nature already shared between different devices (in one 
process) and the CPU. We can simply submit GPU kernels to all devices with 
malloc'ed memory and let the kmd decide the memory placement (such as map in 
place or migrate). There is no need for buffer export/import in the hmm/system 
SVM world.


I disagree. DMABuf can be used for sharing memory between processes. And 
it can be used for sharing memory with 3rd-party devices via PCIe P2P 
(e.g. a Mellanox NIC). You cannot easily do that with malloc'ed memory. 
POSIX IPC requires that you know that you'll be sharing the memory at 
allocation time. It adds overhead. And because it's file-backed, it's 
currently incompatible with migration. And HMM currently doesn't have a 
solution for P2P. Any access by a different device causes a migration to 
system memory.
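For illustration, a minimal user-space sketch of the export path that exists for 
BOs but not for malloc'ed memory. It only uses the generic DRM PRIME ioctl; the 
driver-specific GEM create step that produces bo_handle is assumed and omitted, 
and the include path for drm.h may differ per distro (e.g. <libdrm/drm.h>):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

static int export_bo_as_dmabuf(int drm_fd, unsigned int bo_handle)
{
        struct drm_prime_handle args = {
                .handle = bo_handle,
                .flags  = DRM_CLOEXEC | DRM_RDWR,
                .fd     = -1,
        };

        if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args))
                return -1;

        /* args.fd can now be passed over a unix socket to another process, or
         * handed to another driver (e.g. an RDMA NIC) that imports dma-bufs.
         * There is no equivalent fd to hand out for a malloc'ed range. */
        return args.fd;
}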


Regards,
  Felix




So yes from buffer sharing perspective, the design philosophy is also very 
different.

Thanks,
Oak



RE: Implement svm without BO concept in xe driver

2023-08-21 Thread Zeng, Oak

> -Original Message-
> From: dri-devel  On Behalf Of Felix
> Kuehling
> Sent: August 21, 2023 3:18 PM
> To: Zeng, Oak ; Dave Airlie 
> Cc: Brost, Matthew ; Thomas Hellström
> ; Philip Yang ;
> Welty, Brian ; dri-devel@lists.freedesktop.org; 
> Christian
> König ; Vishwanathapura, Niranjana
> ; intel...@lists.freedesktop.org
> Subject: Re: Implement svm without BO concept in xe driver
> 
> 
> On 2023-08-21 11:10, Zeng, Oak wrote:
> > Accidently deleted Brian. Add back.
> >
> > Thanks,
> > Oak
> >
> >> -Original Message-
> >> From: Zeng, Oak
> >> Sent: August 21, 2023 11:07 AM
> >> To: Dave Airlie 
> >> Cc: Brost, Matthew ; Thomas Hellström
> >> ; Philip Yang ;
> Felix
> >> Kuehling ; dri-devel@lists.freedesktop.org; intel-
> >> x...@lists.freedesktop.org; Vishwanathapura, Niranjana
> >> ; Christian König
> >> 
> >> Subject: RE: Implement svm without BO concept in xe driver
> >>
> >>> -Original Message-
> >>> From: dri-devel  On Behalf Of
> Dave
> >>> Airlie
> >>> Sent: August 20, 2023 6:21 PM
> >>> To: Zeng, Oak 
> >>> Cc: Brost, Matthew ; Thomas Hellström
> >>> ; Philip Yang ;
> >> Felix
> >>> Kuehling ; Welty, Brian ;
> >> dri-
> >>> de...@lists.freedesktop.org; intel...@lists.freedesktop.org;
> Vishwanathapura,
> >>> Niranjana ; Christian König
> >>> 
> >>> Subject: Re: Implement svm without BO concept in xe driver
> >>>
> >>> On Thu, 17 Aug 2023 at 12:13, Zeng, Oak  wrote:
> >>>>> -Original Message-
> >>>>> From: Dave Airlie 
> >>>>> Sent: August 16, 2023 6:52 PM
> >>>>> To: Felix Kuehling 
> >>>>> Cc: Zeng, Oak ; Christian König
> >>>>> ; Thomas Hellström
> >>>>> ; Brost, Matthew
> >>>>> ; maarten.lankho...@linux.intel.com;
> >>>>> Vishwanathapura, Niranjana ;
> >> Welty,
> >>>>> Brian ; Philip Yang ;
> intel-
> >>>>> x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> >>>>> Subject: Re: Implement svm without BO concept in xe driver
> >>>>>
> >>>>> On Thu, 17 Aug 2023 at 08:15, Felix Kuehling 
> >>> wrote:
> >>>>>> On 2023-08-16 13:30, Zeng, Oak wrote:
> >>>>>>> I spoke with Thomas. We discussed two approaches:
> >>>>>>>
> >>>>>>> 1) make ttm_resource a central place for vram management functions
> >>> such as
> >>>>> eviction, cgroup memory accounting. Both the BO-based driver and BO-
> less
> >>> SVM
> >>>>> codes call into ttm_resource_alloc/free functions for vram 
> >>>>> allocation/free.
> >>>>>>>   *This way BO driver and SVM driver shares the eviction/cgroup 
> >>>>>>> logic,
> >> no
> >>>>> need to reimplment LRU eviction list in SVM driver. Cgroup logic should 
> >>>>> be
> >> in
> >>>>> ttm_resource layer. +Maarten.
> >>>>>>>   *ttm_resource is not a perfect match for SVM to allocate vram. 
> >>>>>>> It is
> >> still
> >>> a
> >>>>> big overhead. The *bo* member of ttm_resource is not needed for SVM -
> >>> this
> >>>>> might end up with invasive changes to ttm...need to look into more 
> >>>>> details
> >>>>>> Overhead is a problem. We'd want to be able to allocate, free and evict
> >>>>>> memory at a similar granularity as our preferred migration and page
> >>>>>> fault granularity, which defaults to 2MB in our SVM implementation.
> >>>>>>
> >>>>>>
> >>>>>>> 2) svm code allocate memory directly from drm-buddy allocator, and
> >>> expose
> >>>>> memory eviction functions from both ttm and svm so they can evict
> >> memory
> >>>>> from each other. For example, expose the ttm_mem_evict_first function
> >>> from
> >>>>> ttm side so hmm/svm code can call it; expose a similar function from svm
> >> side
> >>> so
> >>>>> ttm can evict hmm memory.
> >>>>>> I like this option. One thing that needs some thou

Re: Implement svm without BO concept in xe driver

2023-08-21 Thread Felix Kuehling



On 2023-08-21 11:10, Zeng, Oak wrote:

Accidentally deleted Brian. Adding him back.

Thanks,
Oak


-Original Message-
From: Zeng, Oak
Sent: August 21, 2023 11:07 AM
To: Dave Airlie 
Cc: Brost, Matthew ; Thomas Hellström
; Philip Yang ; Felix
Kuehling ; dri-devel@lists.freedesktop.org; intel-
x...@lists.freedesktop.org; Vishwanathapura, Niranjana
; Christian König

Subject: RE: Implement svm without BO concept in xe driver


-Original Message-
From: dri-devel  On Behalf Of Dave
Airlie
Sent: August 20, 2023 6:21 PM
To: Zeng, Oak 
Cc: Brost, Matthew ; Thomas Hellström
; Philip Yang ;

Felix

Kuehling ; Welty, Brian ;

dri-

de...@lists.freedesktop.org; intel...@lists.freedesktop.org; Vishwanathapura,
Niranjana ; Christian König

Subject: Re: Implement svm without BO concept in xe driver

On Thu, 17 Aug 2023 at 12:13, Zeng, Oak  wrote:

-Original Message-
From: Dave Airlie 
Sent: August 16, 2023 6:52 PM
To: Felix Kuehling 
Cc: Zeng, Oak ; Christian König
; Thomas Hellström
; Brost, Matthew
; maarten.lankho...@linux.intel.com;
Vishwanathapura, Niranjana ;

Welty,

Brian ; Philip Yang ; intel-
x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Subject: Re: Implement svm without BO concept in xe driver

On Thu, 17 Aug 2023 at 08:15, Felix Kuehling 

wrote:

On 2023-08-16 13:30, Zeng, Oak wrote:

I spoke with Thomas. We discussed two approaches:

1) make ttm_resource a central place for vram management functions such as 
eviction, cgroup memory accounting. Both the BO-based driver and BO-less SVM 
codes call into ttm_resource_alloc/free functions for vram allocation/free.

  *This way BO driver and SVM driver shares the eviction/cgroup logic, no 
need to reimplement LRU eviction list in SVM driver. Cgroup logic should be in 
ttm_resource layer. +Maarten.

  *ttm_resource is not a perfect match for SVM to allocate vram. It is still a 
big overhead. The *bo* member of ttm_resource is not needed for SVM - this 
might end up with invasive changes to ttm...need to look into more details

Overhead is a problem. We'd want to be able to allocate, free and evict
memory at a similar granularity as our preferred migration and page
fault granularity, which defaults to 2MB in our SVM implementation.



2) svm code allocate memory directly from drm-buddy allocator, and expose 
memory eviction functions from both ttm and svm so they can evict memory 
from each other. For example, expose the ttm_mem_evict_first function from 
ttm side so hmm/svm code can call it; expose a similar function from svm 
side so ttm can evict hmm memory.

I like this option. One thing that needs some thought with this is how
to get some semblance of fairness between the two types of clients.
Basically how to choose what to evict. And what share of the available
memory does each side get to use on average. E.g. an idle client may get
all its memory evicted while a busy client may get a bigger share of the
available memory.

I'd also like to suggest we try to write any management/generic code
in driver agnostic way as much as possible here. I don't really see
much hw difference should be influencing it.

I do worry about having effectively 2 LRUs here, you can't really have
two "leasts".

Like if we hit the shrinker paths who goes first? do we shrink one
object from each side in turn?

One way to solve this fairness problem is to create a driver agnostic 
drm_vram_mgr. Maintain a single LRU in drm_vram_mgr. Move the memory 
eviction/cgroups memory accounting logic from ttm_resource manager to 
drm_vram_mgr. Both BO-based driver and SVM driver calls to drm_vram_mgr to 
allocate/free memory.

I am not sure whether this meets the 2M allocate/free/evict granularity 
requirement Felix mentioned above. SVM can allocate 2M size blocks. But BO 
driver should be able to allocate any arbitrary sized blocks - So the eviction 
is also arbitrary size.

Also will we have systems where we can expose system SVM but userspace
may choose to not use the fine grained SVM and use one of the older
modes, will that path get emulated on top of SVM or use the BO paths?

If by "older modes" you meant the gem_bo_create (such as xe_gem_create

or

amdgpu_gem_create), then today both amd and intel implement those
interfaces using BO path. We don't have a plan to emulate that old mode on

tope

of SVM, afaict.

I'm not sure how the older modes manifest in the kernel I assume as bo
creates (but they may use userptr), SVM isn't a specific thing, it's a
group of 3 things.

1) coarse-grained SVM which I think is BO
2) fine-grained SVM which is page level
3) fine-grained system SVM which is HMM

I suppose I'm asking about the previous versions and how they would
operate in a system SVM capable system.

I got your question now.

As I understand it, the system SVM provides similar functionality to BO-based 
SVM (i.e., sharing the virtual address space between the CPU and the GPU 
program, with no explicit memory placement for the GPU program), but they

RE: Implement svm without BO concept in xe driver

2023-08-21 Thread Zeng, Oak
Accidentally deleted Brian. Adding him back.

Thanks,
Oak

> -Original Message-
> From: Zeng, Oak
> Sent: August 21, 2023 11:07 AM
> To: Dave Airlie 
> Cc: Brost, Matthew ; Thomas Hellström
> ; Philip Yang ; Felix
> Kuehling ; dri-devel@lists.freedesktop.org; intel-
> x...@lists.freedesktop.org; Vishwanathapura, Niranjana
> ; Christian König
> 
> Subject: RE: Implement svm without BO concept in xe driver
> 
> > -Original Message-
> > From: dri-devel  On Behalf Of Dave
> > Airlie
> > Sent: August 20, 2023 6:21 PM
> > To: Zeng, Oak 
> > Cc: Brost, Matthew ; Thomas Hellström
> > ; Philip Yang ;
> Felix
> > Kuehling ; Welty, Brian ;
> dri-
> > de...@lists.freedesktop.org; intel...@lists.freedesktop.org; 
> > Vishwanathapura,
> > Niranjana ; Christian König
> > 
> > Subject: Re: Implement svm without BO concept in xe driver
> >
> > On Thu, 17 Aug 2023 at 12:13, Zeng, Oak  wrote:
> > >
> > > > -Original Message-
> > > > From: Dave Airlie 
> > > > Sent: August 16, 2023 6:52 PM
> > > > To: Felix Kuehling 
> > > > Cc: Zeng, Oak ; Christian König
> > > > ; Thomas Hellström
> > > > ; Brost, Matthew
> > > > ; maarten.lankho...@linux.intel.com;
> > > > Vishwanathapura, Niranjana ;
> Welty,
> > > > Brian ; Philip Yang ; intel-
> > > > x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> > > > Subject: Re: Implement svm without BO concept in xe driver
> > > >
> > > > On Thu, 17 Aug 2023 at 08:15, Felix Kuehling 
> > wrote:
> > > > >
> > > > > On 2023-08-16 13:30, Zeng, Oak wrote:
> > > > > > I spoke with Thomas. We discussed two approaches:
> > > > > >
> > > > > > 1) make ttm_resource a central place for vram management functions
> > such as
> > > > eviction, cgroup memory accounting. Both the BO-based driver and BO-less
> > SVM
> > > > codes call into ttm_resource_alloc/free functions for vram 
> > > > allocation/free.
> > > > > >  *This way BO driver and SVM driver shares the eviction/cgroup 
> > > > > > logic,
> no
> > > > need to reimplment LRU eviction list in SVM driver. Cgroup logic should 
> > > > be
> in
> > > > ttm_resource layer. +Maarten.
> > > > > >  *ttm_resource is not a perfect match for SVM to allocate vram. 
> > > > > > It is
> still
> > a
> > > > big overhead. The *bo* member of ttm_resource is not needed for SVM -
> > this
> > > > might end up with invasive changes to ttm...need to look into more 
> > > > details
> > > > >
> > > > > Overhead is a problem. We'd want to be able to allocate, free and 
> > > > > evict
> > > > > memory at a similar granularity as our preferred migration and page
> > > > > fault granularity, which defaults to 2MB in our SVM implementation.
> > > > >
> > > > >
> > > > > >
> > > > > > 2) svm code allocate memory directly from drm-buddy allocator, and
> > expose
> > > > memory eviction functions from both ttm and svm so they can evict
> memory
> > > > from each other. For example, expose the ttm_mem_evict_first function
> > from
> > > > ttm side so hmm/svm code can call it; expose a similar function from svm
> side
> > so
> > > > ttm can evict hmm memory.
> > > > >
> > > > > I like this option. One thing that needs some thought with this is how
> > > > > to get some semblance of fairness between the two types of clients.
> > > > > Basically how to choose what to evict. And what share of the available
> > > > > memory does each side get to use on average. E.g. an idle client may 
> > > > > get
> > > > > all its memory evicted while a busy client may get a bigger share of 
> > > > > the
> > > > > available memory.
> > > >
> > > > I'd also like to suggest we try to write any management/generic code
> > > > in driver agnostic way as much as possible here. I don't really see
> > > > much hw difference should be influencing it.
> > > >
> > > > I do worry about having effectively 2 LRUs here, you can't really have
> > > > two "leasts".
> > > >
> > > > Like if we hit the shrinker paths wh

RE: Implement svm without BO concept in xe driver

2023-08-21 Thread Zeng, Oak
> -Original Message-
> From: dri-devel  On Behalf Of Dave
> Airlie
> Sent: August 20, 2023 6:21 PM
> To: Zeng, Oak 
> Cc: Brost, Matthew ; Thomas Hellström
> ; Philip Yang ; Felix
> Kuehling ; Welty, Brian ; dri-
> de...@lists.freedesktop.org; intel...@lists.freedesktop.org; Vishwanathapura,
> Niranjana ; Christian König
> 
> Subject: Re: Implement svm without BO concept in xe driver
> 
> On Thu, 17 Aug 2023 at 12:13, Zeng, Oak  wrote:
> >
> > > -Original Message-
> > > From: Dave Airlie 
> > > Sent: August 16, 2023 6:52 PM
> > > To: Felix Kuehling 
> > > Cc: Zeng, Oak ; Christian König
> > > ; Thomas Hellström
> > > ; Brost, Matthew
> > > ; maarten.lankho...@linux.intel.com;
> > > Vishwanathapura, Niranjana ; Welty,
> > > Brian ; Philip Yang ; intel-
> > > x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> > > Subject: Re: Implement svm without BO concept in xe driver
> > >
> > > On Thu, 17 Aug 2023 at 08:15, Felix Kuehling 
> wrote:
> > > >
> > > > On 2023-08-16 13:30, Zeng, Oak wrote:
> > > > > I spoke with Thomas. We discussed two approaches:
> > > > >
> > > > > 1) make ttm_resource a central place for vram management functions
> such as
> > > eviction, cgroup memory accounting. Both the BO-based driver and BO-less
> SVM
> > > codes call into ttm_resource_alloc/free functions for vram 
> > > allocation/free.
> > > > >  *This way BO driver and SVM driver shares the eviction/cgroup 
> > > > > logic, no
> > > need to reimplment LRU eviction list in SVM driver. Cgroup logic should 
> > > be in
> > > ttm_resource layer. +Maarten.
> > > > >  *ttm_resource is not a perfect match for SVM to allocate vram. 
> > > > > It is still
> a
> > > big overhead. The *bo* member of ttm_resource is not needed for SVM -
> this
> > > might end up with invasive changes to ttm...need to look into more details
> > > >
> > > > Overhead is a problem. We'd want to be able to allocate, free and evict
> > > > memory at a similar granularity as our preferred migration and page
> > > > fault granularity, which defaults to 2MB in our SVM implementation.
> > > >
> > > >
> > > > >
> > > > > 2) svm code allocate memory directly from drm-buddy allocator, and
> expose
> > > memory eviction functions from both ttm and svm so they can evict memory
> > > from each other. For example, expose the ttm_mem_evict_first function
> from
> > > ttm side so hmm/svm code can call it; expose a similar function from svm 
> > > side
> so
> > > ttm can evict hmm memory.
> > > >
> > > > I like this option. One thing that needs some thought with this is how
> > > > to get some semblance of fairness between the two types of clients.
> > > > Basically how to choose what to evict. And what share of the available
> > > > memory does each side get to use on average. E.g. an idle client may get
> > > > all its memory evicted while a busy client may get a bigger share of the
> > > > available memory.
> > >
> > > I'd also like to suggest we try to write any management/generic code
> > > in driver agnostic way as much as possible here. I don't really see
> > > much hw difference should be influencing it.
> > >
> > > I do worry about having effectively 2 LRUs here, you can't really have
> > > two "leasts".
> > >
> > > Like if we hit the shrinker paths who goes first? do we shrink one
> > > object from each side in turn?
> >
> > One way to solve this fairness problem is to create a driver agnostic
> drm_vram_mgr. Maintain a single LRU in drm_vram_mgr. Move the memory
> eviction/cgroups memory accounting logic from ttm_resource manager to
> drm_vram_mgr. Both BO-based driver and SVM driver calls to drm_vram_mgr to
> allocate/free memory.
> >
> > I am not sure whether this meets the 2M allocate/free/evict granularity
> requirement Felix mentioned above. SVM can allocate 2M size blocks. But BO
> driver should be able to allocate any arbitrary sized blocks - So the 
> eviction is also
> arbitrary size.
> >
> > >
> > > Also will we have systems where we can expose system SVM but userspace
> > > may choose to not use the fine grained SVM and use one of the older
> > > modes, will that path get emulated on top of SVM or use the BO paths?

Re: Implement svm without BO concept in xe driver

2023-08-20 Thread Dave Airlie
On Thu, 17 Aug 2023 at 12:13, Zeng, Oak  wrote:
>
> > -Original Message-
> > From: Dave Airlie 
> > Sent: August 16, 2023 6:52 PM
> > To: Felix Kuehling 
> > Cc: Zeng, Oak ; Christian König
> > ; Thomas Hellström
> > ; Brost, Matthew
> > ; maarten.lankho...@linux.intel.com;
> > Vishwanathapura, Niranjana ; Welty,
> > Brian ; Philip Yang ; intel-
> > x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> > Subject: Re: Implement svm without BO concept in xe driver
> >
> > On Thu, 17 Aug 2023 at 08:15, Felix Kuehling  wrote:
> > >
> > > On 2023-08-16 13:30, Zeng, Oak wrote:
> > > > I spoke with Thomas. We discussed two approaches:
> > > >
> > > > 1) make ttm_resource a central place for vram management functions such 
> > > > as
> > eviction, cgroup memory accounting. Both the BO-based driver and BO-less SVM
> > codes call into ttm_resource_alloc/free functions for vram allocation/free.
> > > >  *This way BO driver and SVM driver shares the eviction/cgroup 
> > > > logic, no
> > need to reimplment LRU eviction list in SVM driver. Cgroup logic should be 
> > in
> > ttm_resource layer. +Maarten.
> > > >  *ttm_resource is not a perfect match for SVM to allocate vram. It 
> > > > is still a
> > big overhead. The *bo* member of ttm_resource is not needed for SVM - this
> > might end up with invasive changes to ttm...need to look into more details
> > >
> > > Overhead is a problem. We'd want to be able to allocate, free and evict
> > > memory at a similar granularity as our preferred migration and page
> > > fault granularity, which defaults to 2MB in our SVM implementation.
> > >
> > >
> > > >
> > > > 2) svm code allocate memory directly from drm-buddy allocator, and 
> > > > expose
> > memory eviction functions from both ttm and svm so they can evict memory
> > from each other. For example, expose the ttm_mem_evict_first function from
> > ttm side so hmm/svm code can call it; expose a similar function from svm 
> > side so
> > ttm can evict hmm memory.
> > >
> > > I like this option. One thing that needs some thought with this is how
> > > to get some semblance of fairness between the two types of clients.
> > > Basically how to choose what to evict. And what share of the available
> > > memory does each side get to use on average. E.g. an idle client may get
> > > all its memory evicted while a busy client may get a bigger share of the
> > > available memory.
> >
> > I'd also like to suggest we try to write any management/generic code
> > in driver agnostic way as much as possible here. I don't really see
> > much hw difference should be influencing it.
> >
> > I do worry about having effectively 2 LRUs here, you can't really have
> > two "leasts".
> >
> > Like if we hit the shrinker paths who goes first? do we shrink one
> > object from each side in turn?
>
> One way to solve this fairness problem is to create a driver agnostic 
> drm_vram_mgr. Maintain a single LRU in drm_vram_mgr. Move the memory 
> eviction/cgroups memory accounting logic from ttm_resource manager to 
> drm_vram_mgr. Both BO-based driver and SVM driver calls to drm_vram_mgr to 
> allocate/free memory.
>
> I am not sure whether this meets the 2M allocate/free/evict granularity 
> requirement Felix mentioned above. SVM can allocate 2M size blocks. But BO 
> driver should be able to allocate any arbitrary sized blocks - So the 
> eviction is also arbitrary size.
>
> >
> > Also will we have systems where we can expose system SVM but userspace
> > may choose to not use the fine grained SVM and use one of the older
> > modes, will that path get emulated on top of SVM or use the BO paths?
>
>
> If by "older modes" you meant the gem_bo_create (such as xe_gem_create or 
> amdgpu_gem_create), then today both amd and intel implement those interfaces 
> using BO path. We don't have a plan to emulate that old mode on top of SVM, 
> afaict.

I'm not sure how the older modes manifest in the kernel; I assume as bo
creates (but they may use userptr). SVM isn't a specific thing, it's a
group of 3 things:

1) coarse-grained SVM which I think is BO
2) fine-grained SVM which is page level
3) fine-grained system SVM which is HMM

I suppose I'm asking about the previous versions and how they would
operate in a system SVM capable system.

Dave.
>
> Thanks,
> Oak
>
> >
> > Dave.


Re: Implement svm without BO concept in xe driver

2023-08-18 Thread Felix Kuehling



On 2023-08-18 12:10, Zeng, Oak wrote:

Thanks Thomas. I will then look into more details of option 3:

* create a lean drm layer vram manager, a central control place for vram 
eviction and cgroup accounting. Single LRU for eviction fairness.
* pretty much move the current ttm_resource eviction/cgroups logic to drm 
layer
* the eviction/allocation granularity should be flexible so svm can do 2M 
while ttm can do arbitrary size


SVM will need smaller sizes too, for VMAs that are smaller or not 
aligned to 2MB size.
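As a small illustration of that constraint, a toy helper (not taken from any 
driver) that prefers a 2MB-aligned 2MB migration chunk but clamps it to the VMA 
boundaries so smaller or unaligned VMAs still work:

#include <stdint.h>
#include <stdio.h>

#define CHUNK (2u * 1024 * 1024)

static void pick_chunk(uint64_t fault, uint64_t vma_start, uint64_t vma_end,
                       uint64_t *start, uint64_t *end)
{
        *start = fault & ~((uint64_t)CHUNK - 1);        /* round down to 2MB */
        *end   = *start + CHUNK;

        if (*start < vma_start)                         /* clamp to the VMA */
                *start = vma_start;
        if (*end > vma_end)
                *end = vma_end;
}

int main(void)
{
        uint64_t s, e;

        /* VMA smaller than 2MB and not 2MB aligned. */
        pick_chunk(0x201800, 0x201000, 0x203000, &s, &e);
        printf("migrate [0x%llx, 0x%llx)\n",
               (unsigned long long)s, (unsigned long long)e);
        return 0;
}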


Regards,
  Felix



* both ttm_resource and svm code should call the new drm_vram_manager for 
eviction/accounting

I will come back with some RFC proof of concept codes later.

Cheers,
Oak


-Original Message-
From: Thomas Hellström 
Sent: August 18, 2023 3:36 AM
To: Zeng, Oak ; Dave Airlie ; Felix
Kuehling 
Cc: Christian König ; Brost, Matthew
; maarten.lankho...@linux.intel.com;
Vishwanathapura, Niranjana ; Welty,
Brian ; Philip Yang ; intel-
x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Subject: Re: Implement svm without BO concept in xe driver


On 8/17/23 04:12, Zeng, Oak wrote:

-Original Message-
From: Dave Airlie 
Sent: August 16, 2023 6:52 PM
To: Felix Kuehling 
Cc: Zeng, Oak ; Christian König
; Thomas Hellström
; Brost, Matthew
; maarten.lankho...@linux.intel.com;
Vishwanathapura, Niranjana ; Welty,
Brian ; Philip Yang ; intel-
x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Subject: Re: Implement svm without BO concept in xe driver

On Thu, 17 Aug 2023 at 08:15, Felix Kuehling  wrote:

On 2023-08-16 13:30, Zeng, Oak wrote:

I spoke with Thomas. We discussed two approaches:

1) make ttm_resource a central place for vram management functions such as 
eviction, cgroup memory accounting. Both the BO-based driver and BO-less SVM 
codes call into ttm_resource_alloc/free functions for vram allocation/free.

   *This way BO driver and SVM driver shares the eviction/cgroup logic, no 
need to reimplement LRU eviction list in SVM driver. Cgroup logic should be in 
ttm_resource layer. +Maarten.

   *ttm_resource is not a perfect match for SVM to allocate vram. It is still a 
big overhead. The *bo* member of ttm_resource is not needed for SVM - this 
might end up with invasive changes to ttm...need to look into more details

Overhead is a problem. We'd want to be able to allocate, free and evict
memory at a similar granularity as our preferred migration and page
fault granularity, which defaults to 2MB in our SVM implementation.



2) svm code allocate memory directly from drm-buddy allocator, and expose 
memory eviction functions from both ttm and svm so they can evict memory 
from each other. For example, expose the ttm_mem_evict_first function from 
ttm side so hmm/svm code can call it; expose a similar function from svm 
side so ttm can evict hmm memory.

I like this option. One thing that needs some thought with this is how
to get some semblance of fairness between the two types of clients.
Basically how to choose what to evict. And what share of the available
memory does each side get to use on average. E.g. an idle client may get
all its memory evicted while a busy client may get a bigger share of the
available memory.

I'd also like to suggest we try to write any management/generic code
in driver agnostic way as much as possible here. I don't really see
much hw difference should be influencing it.

I do worry about having effectively 2 LRUs here, you can't really have
two "leasts".

Like if we hit the shrinker paths who goes first? do we shrink one
object from each side in turn?

One way to solve this fairness problem is to create a driver agnostic 
drm_vram_mgr. Maintain a single LRU in drm_vram_mgr. Move the memory 
eviction/cgroups memory accounting logic from ttm_resource manager to 
drm_vram_mgr. Both BO-based driver and SVM driver calls to drm_vram_mgr to 
allocate/free memory.

I am not sure whether this meets the 2M allocate/free/evict granularity 
requirement Felix mentioned above. SVM can allocate 2M size blocks. But BO 
driver should be able to allocate any arbitrary sized blocks - So the eviction 
is also arbitrary size.

This is not far from what a TTM resource manager does with TTM
resources, only made generic at the drm level, and making the "resource"
as lean as possible. With 2M granularity this seems plausible.


Also will we have systems where we can expose system SVM but userspace
may choose to not use the fine grained SVM and use one of the older
modes, will that path get emulated on top of SVM or use the BO paths?

If by "older modes" you meant the gem_bo_create (such as xe_gem_create or

amdgpu_gem_create), then today both amd and intel implement those
interfaces using BO path. We don't have a plan to emulate that old mode on tope
of SVM, afaict.

I think we might end up emulating "older modes" on top of SVM at some
point, not to far out, although what imm

RE: Implement svm without BO concept in xe driver

2023-08-18 Thread Zeng, Oak
Thanks Thomas. I will then look into more details of option 3:

   * create a lean drm layer vram manager, a central control place for vram 
eviction and cgroup accounting. Single LRU for eviction fairness.
   * pretty much move the current ttm_resource eviction/cgroups logic to drm 
layer
   * the eviction/allocation granularity should be flexible so svm can do 2M 
while ttm can do arbitrary size
   * both ttm_resource and svm code should call the new drm_vram_manager for 
eviction/accounting

I will come back with some RFC proof of concept codes later.
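In the meantime, a very rough sketch of what the interface of such a lean 
drm-level vram manager could look like. All names below are hypothetical 
placeholders, not existing drm API:

#include <stdint.h>

struct drm_vram_mgr;            /* owns the buddy allocator, one LRU, cgroup charging */
struct drm_vram_block;          /* lean allocation record, no BO back-pointer */

struct drm_vram_client_ops {
        /* called by the manager when this allocation is chosen for eviction */
        int (*evict)(struct drm_vram_block *block);
};

/* SVM callers would pass 2M-sized requests; TTM/BO callers any size. */
struct drm_vram_block *drm_vram_mgr_alloc(struct drm_vram_mgr *mgr,
                                          uint64_t size,
                                          const struct drm_vram_client_ops *ops,
                                          void *owner);
void drm_vram_mgr_free(struct drm_vram_block *block);

/* Move a block to the tail of the single LRU on GPU access. */
void drm_vram_mgr_lru_touch(struct drm_vram_block *block);

/* Evict least-recently-used blocks (BO or SVM alike) until "needed" bytes are
 * available, calling the owning client's evict() hook for each victim. */
int drm_vram_mgr_evict(struct drm_vram_mgr *mgr, uint64_t needed);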

Cheers,
Oak

> -Original Message-
> From: Thomas Hellström 
> Sent: August 18, 2023 3:36 AM
> To: Zeng, Oak ; Dave Airlie ; Felix
> Kuehling 
> Cc: Christian König ; Brost, Matthew
> ; maarten.lankho...@linux.intel.com;
> Vishwanathapura, Niranjana ; Welty,
> Brian ; Philip Yang ; intel-
> x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> Subject: Re: Implement svm without BO concept in xe driver
> 
> 
> On 8/17/23 04:12, Zeng, Oak wrote:
> >> -Original Message-
> >> From: Dave Airlie 
> >> Sent: August 16, 2023 6:52 PM
> >> To: Felix Kuehling 
> >> Cc: Zeng, Oak ; Christian König
> >> ; Thomas Hellström
> >> ; Brost, Matthew
> >> ; maarten.lankho...@linux.intel.com;
> >> Vishwanathapura, Niranjana ; Welty,
> >> Brian ; Philip Yang ; intel-
> >> x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> >> Subject: Re: Implement svm without BO concept in xe driver
> >>
> >> On Thu, 17 Aug 2023 at 08:15, Felix Kuehling  
> >> wrote:
> >>> On 2023-08-16 13:30, Zeng, Oak wrote:
> >>>> I spoke with Thomas. We discussed two approaches:
> >>>>
> >>>> 1) make ttm_resource a central place for vram management functions such
> as
> >> eviction, cgroup memory accounting. Both the BO-based driver and BO-less
> SVM
> >> codes call into ttm_resource_alloc/free functions for vram allocation/free.
> >>>>   *This way BO driver and SVM driver shares the eviction/cgroup 
> >>>> logic, no
> >> need to reimplment LRU eviction list in SVM driver. Cgroup logic should be 
> >> in
> >> ttm_resource layer. +Maarten.
> >>>>   *ttm_resource is not a perfect match for SVM to allocate vram. It 
> >>>> is still a
> >> big overhead. The *bo* member of ttm_resource is not needed for SVM - this
> >> might end up with invasive changes to ttm...need to look into more details
> >>> Overhead is a problem. We'd want to be able to allocate, free and evict
> >>> memory at a similar granularity as our preferred migration and page
> >>> fault granularity, which defaults to 2MB in our SVM implementation.
> >>>
> >>>
> >>>> 2) svm code allocate memory directly from drm-buddy allocator, and
> expose
> >> memory eviction functions from both ttm and svm so they can evict memory
> >> from each other. For example, expose the ttm_mem_evict_first function
> from
> >> ttm side so hmm/svm code can call it; expose a similar function from svm 
> >> side
> so
> >> ttm can evict hmm memory.
> >>> I like this option. One thing that needs some thought with this is how
> >>> to get some semblance of fairness between the two types of clients.
> >>> Basically how to choose what to evict. And what share of the available
> >>> memory does each side get to use on average. E.g. an idle client may get
> >>> all its memory evicted while a busy client may get a bigger share of the
> >>> available memory.
> >> I'd also like to suggest we try to write any management/generic code
> >> in driver agnostic way as much as possible here. I don't really see
> >> much hw difference should be influencing it.
> >>
> >> I do worry about having effectively 2 LRUs here, you can't really have
> >> two "leasts".
> >>
> >> Like if we hit the shrinker paths who goes first? do we shrink one
> >> object from each side in turn?
> > One way to solve this fairness problem is to create a driver agnostic
> drm_vram_mgr. Maintain a single LRU in drm_vram_mgr. Move the memory
> eviction/cgroups memory accounting logic from ttm_resource manager to
> drm_vram_mgr. Both BO-based driver and SVM driver calls to drm_vram_mgr to
> allocate/free memory.
> >
> > I am not sure whether this meets the 2M allocate/free/evict granularity
> requirement Felix mentioned above. SVM can allocate 2M size blocks. But BO
> driver should be able to allocat

Re: Implement svm without BO concept in xe driver

2023-08-18 Thread Thomas Hellström



On 8/17/23 04:12, Zeng, Oak wrote:

-Original Message-
From: Dave Airlie 
Sent: August 16, 2023 6:52 PM
To: Felix Kuehling 
Cc: Zeng, Oak ; Christian König
; Thomas Hellström
; Brost, Matthew
; maarten.lankho...@linux.intel.com;
Vishwanathapura, Niranjana ; Welty,
Brian ; Philip Yang ; intel-
x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Subject: Re: Implement svm without BO concept in xe driver

On Thu, 17 Aug 2023 at 08:15, Felix Kuehling  wrote:

On 2023-08-16 13:30, Zeng, Oak wrote:

I spoke with Thomas. We discussed two approaches:

1) make ttm_resource a central place for vram management functions such as 
eviction, cgroup memory accounting. Both the BO-based driver and BO-less SVM 
codes call into ttm_resource_alloc/free functions for vram allocation/free.

  *This way BO driver and SVM driver shares the eviction/cgroup logic, no 
need to reimplement LRU eviction list in SVM driver. Cgroup logic should be in 
ttm_resource layer. +Maarten.

  *ttm_resource is not a perfect match for SVM to allocate vram. It is still a 
big overhead. The *bo* member of ttm_resource is not needed for SVM - this 
might end up with invasive changes to ttm...need to look into more details

Overhead is a problem. We'd want to be able to allocate, free and evict
memory at a similar granularity as our preferred migration and page
fault granularity, which defaults to 2MB in our SVM implementation.



2) svm code allocate memory directly from drm-buddy allocator, and expose 
memory eviction functions from both ttm and svm so they can evict memory 
from each other. For example, expose the ttm_mem_evict_first function from 
ttm side so hmm/svm code can call it; expose a similar function from svm side so 
ttm can evict hmm memory.

I like this option. One thing that needs some thought with this is how
to get some semblance of fairness between the two types of clients.
Basically how to choose what to evict. And what share of the available
memory does each side get to use on average. E.g. an idle client may get
all its memory evicted while a busy client may get a bigger share of the
available memory.

I'd also like to suggest we try to write any management/generic code
in driver agnostic way as much as possible here. I don't really see
much hw difference should be influencing it.

I do worry about having effectively 2 LRUs here, you can't really have
two "leasts".

Like if we hit the shrinker paths who goes first? do we shrink one
object from each side in turn?

One way to solve this fairness problem is to create a driver agnostic 
drm_vram_mgr. Maintain a single LRU in drm_vram_mgr. Move the memory 
eviction/cgroups memory accounting logic from ttm_resource manager to 
drm_vram_mgr. Both BO-based driver and SVM driver calls to drm_vram_mgr to 
allocate/free memory.

I am not sure whether this meets the 2M allocate/free/evict granularity 
requirement Felix mentioned above. SVM can allocate 2M size blocks. But BO 
driver should be able to allocate any arbitrary sized blocks - So the eviction 
is also arbitrary size.


This is not far from what a TTM resource manager does with TTM 
resources, only made generic at the drm level, and making the "resource" 
as lean as possible. With 2M granularity this seems plausible.
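As a rough illustration of "as lean as possible", a hypothetical record holding 
only what shared eviction and accounting would need (nothing below is existing 
TTM or drm code):

#include <stdint.h>

struct drm_vram_client_ops;     /* evict callback table, owned by each client */

struct lean_vram_resource {
        uint64_t start;                         /* offset into vram */
        uint64_t size;                          /* 2M for SVM, arbitrary for BOs */
        struct lean_vram_resource *lru_prev;    /* single shared LRU linkage */
        struct lean_vram_resource *lru_next;
        const struct drm_vram_client_ops *ops;  /* how to evict this allocation */
        void *owner;                            /* BO or SVM range, opaque here */
};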





Also will we have systems where we can expose system SVM but userspace
may choose to not use the fine grained SVM and use one of the older
modes, will that path get emulated on top of SVM or use the BO paths?


If by "older modes" you meant the gem_bo_create (such as xe_gem_create or 
amdgpu_gem_create), then today both amd and intel implement those interfaces using BO 
path. We don't have a plan to emulate that old mode on top of SVM, afaict.


I think we might end up emulating "older modes" on top of SVM at some 
point, not too far out, although what immediately comes to mind would be 
eviction based on something looking like NUMA- and CGROUP-aware 
shrinkers for integrated bo drivers, if that turns out to be sufficient 
from a memory usage starvation POV. This is IMHO indeed something to 
start thinking about, but for the current situation trying to solve a 
mutual SVM-TTM fair eviction problem would be a reasonable scope.


Thanks,

Thomas




Thanks,
Oak


Dave.


RE: Implement svm without BO concept in xe driver

2023-08-16 Thread Zeng, Oak
> -Original Message-
> From: Dave Airlie 
> Sent: August 16, 2023 6:52 PM
> To: Felix Kuehling 
> Cc: Zeng, Oak ; Christian König
> ; Thomas Hellström
> ; Brost, Matthew
> ; maarten.lankho...@linux.intel.com;
> Vishwanathapura, Niranjana ; Welty,
> Brian ; Philip Yang ; intel-
> x...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> Subject: Re: Implement svm without BO concept in xe driver
> 
> On Thu, 17 Aug 2023 at 08:15, Felix Kuehling  wrote:
> >
> > On 2023-08-16 13:30, Zeng, Oak wrote:
> > > I spoke with Thomas. We discussed two approaches:
> > >
> > > 1) make ttm_resource a central place for vram management functions such as
> eviction, cgroup memory accounting. Both the BO-based driver and BO-less SVM
> codes call into ttm_resource_alloc/free functions for vram allocation/free.
> > >  *This way BO driver and SVM driver shares the eviction/cgroup logic, 
> > > no
> need to reimplment LRU eviction list in SVM driver. Cgroup logic should be in
> ttm_resource layer. +Maarten.
> > >  *ttm_resource is not a perfect match for SVM to allocate vram. It is 
> > > still a
> big overhead. The *bo* member of ttm_resource is not needed for SVM - this
> might end up with invasive changes to ttm...need to look into more details
> >
> > Overhead is a problem. We'd want to be able to allocate, free and evict
> > memory at a similar granularity as our preferred migration and page
> > fault granularity, which defaults to 2MB in our SVM implementation.
> >
> >
> > >
> > > 2) svm code allocate memory directly from drm-buddy allocator, and expose
> memory eviction functions from both ttm and svm so they can evict memory
> from each other. For example, expose the ttm_mem_evict_first function from
> ttm side so hmm/svm code can call it; expose a similar function from svm side 
> so
> ttm can evict hmm memory.
> >
> > I like this option. One thing that needs some thought with this is how
> > to get some semblance of fairness between the two types of clients.
> > Basically how to choose what to evict. And what share of the available
> > memory does each side get to use on average. E.g. an idle client may get
> > all its memory evicted while a busy client may get a bigger share of the
> > available memory.
> 
> I'd also like to suggest we try to write any management/generic code
> in driver agnostic way as much as possible here. I don't really see
> much hw difference should be influencing it.
> 
> I do worry about having effectively 2 LRUs here, you can't really have
> two "leasts".
> 
> Like if we hit the shrinker paths who goes first? do we shrink one
> object from each side in turn?

One way to solve this fairness problem is to create a driver agnostic 
drm_vram_mgr: maintain a single LRU in drm_vram_mgr, and move the memory 
eviction/cgroups memory accounting logic from the ttm_resource manager to 
drm_vram_mgr. Both the BO-based driver and the SVM driver would call into 
drm_vram_mgr to allocate/free memory.

I am not sure whether this meets the 2M allocate/free/evict granularity 
requirement Felix mentioned above. SVM can allocate 2M size blocks, but the BO 
driver should be able to allocate arbitrarily sized blocks - so the eviction is 
also of arbitrary size.
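To illustrate the single-LRU idea, a small self-contained toy model in plain C 
(not kernel code, no real drm/ttm API): BO and SVM allocations sit on one list 
and eviction simply takes the least recently used entry, whichever side owns it.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum owner { OWNER_BO, OWNER_SVM };

struct block {
        char name[16];
        enum owner owner;
        uint64_t size;
        struct block *prev, *next;      /* LRU linkage, head = least recent */
};

struct vram_mgr {
        struct block *head, *tail;
        uint64_t used, capacity;
};

static void lru_push_tail(struct vram_mgr *m, struct block *b)
{
        b->prev = m->tail;
        b->next = NULL;
        if (m->tail)
                m->tail->next = b;
        else
                m->head = b;
        m->tail = b;
}

static struct block *alloc_block(struct vram_mgr *m, const char *name,
                                 enum owner owner, uint64_t size)
{
        struct block *b = calloc(1, sizeof(*b));

        strncpy(b->name, name, sizeof(b->name) - 1);
        b->owner = owner;
        b->size = size;

        /* Evict from the shared LRU head until the request fits; it does not
         * matter whether the victim belongs to the BO side or the SVM side. */
        while (m->used + size > m->capacity && m->head) {
                struct block *victim = m->head;

                m->head = victim->next;
                if (m->head)
                        m->head->prev = NULL;
                else
                        m->tail = NULL;
                m->used -= victim->size;
                printf("evict %s (%s, %llu bytes)\n", victim->name,
                       victim->owner == OWNER_BO ? "BO" : "SVM",
                       (unsigned long long)victim->size);
                free(victim);
        }

        m->used += size;
        lru_push_tail(m, b);
        return b;
}

int main(void)
{
        struct vram_mgr m = { .capacity = 8 << 20 };

        alloc_block(&m, "bo-a", OWNER_BO, 3 << 20);     /* arbitrary size */
        alloc_block(&m, "svm-a", OWNER_SVM, 2 << 20);   /* 2M granule */
        alloc_block(&m, "svm-b", OWNER_SVM, 2 << 20);
        alloc_block(&m, "bo-b", OWNER_BO, 5 << 20);     /* forces eviction */
        return 0;
}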

> 
> Also will we have systems where we can expose system SVM but userspace
> may choose to not use the fine grained SVM and use one of the older
> modes, will that path get emulated on top of SVM or use the BO paths?


If by "older modes" you meant the gem_bo_create (such as xe_gem_create or 
amdgpu_gem_create), then today both amd and intel implement those interfaces 
using BO path. We don't have a plan to emulate that old mode on top of SVM, 
afaict.

Thanks,
Oak

> 
> Dave.


Re: Implement svm without BO concept in xe driver

2023-08-16 Thread Dave Airlie
On Thu, 17 Aug 2023 at 08:15, Felix Kuehling  wrote:
>
> On 2023-08-16 13:30, Zeng, Oak wrote:
> > I spoke with Thomas. We discussed two approaches:
> >
> > 1) make ttm_resource a central place for vram management functions such as 
> > eviction, cgroup memory accounting. Both the BO-based driver and BO-less 
> > SVM codes call into ttm_resource_alloc/free functions for vram 
> > allocation/free.
> >  *This way BO driver and SVM driver shares the eviction/cgroup logic, 
> > no need to reimplment LRU eviction list in SVM driver. Cgroup logic should 
> > be in ttm_resource layer. +Maarten.
> >  *ttm_resource is not a perfect match for SVM to allocate vram. It is 
> > still a big overhead. The *bo* member of ttm_resource is not needed for SVM 
> > - this might end up with invasive changes to ttm...need to look into more 
> > details
>
> Overhead is a problem. We'd want to be able to allocate, free and evict
> memory at a similar granularity as our preferred migration and page
> fault granularity, which defaults to 2MB in our SVM implementation.
>
>
> >
> > 2) svm code allocate memory directly from drm-buddy allocator, and expose 
> > memory eviction functions from both ttm and svm so they can evict memory 
> > from each other. For example, expose the ttm_mem_evict_first function from 
> > ttm side so hmm/svm code can call it; expose a similar function from svm 
> > side so ttm can evict hmm memory.
>
> I like this option. One thing that needs some thought with this is how
> to get some semblance of fairness between the two types of clients.
> Basically how to choose what to evict. And what share of the available
> memory does each side get to use on average. E.g. an idle client may get
> all its memory evicted while a busy client may get a bigger share of the
> available memory.

I'd also like to suggest we try to write any management/generic code
in driver agnostic way as much as possible here. I don't really see
much hw difference should be influencing it.

I do worry about having effectively 2 LRUs here, you can't really have
two "leasts".

Like if we hit the shrinker paths who goes first? do we shrink one
object from each side in turn?

Also will we have systems where we can expose system SVM but userspace
may choose to not use the fine grained SVM and use one of the older
modes, will that path get emulated on top of SVM or use the BO paths?

Dave.


Re: Implement svm without BO concept in xe driver

2023-08-16 Thread Felix Kuehling

On 2023-08-16 13:30, Zeng, Oak wrote:

I spoke with Thomas. We discussed two approaches:

1) make ttm_resource a central place for vram management functions such as 
eviction, cgroup memory accounting. Both the BO-based driver and BO-less SVM 
codes call into ttm_resource_alloc/free functions for vram allocation/free.
 *This way BO driver and SVM driver shares the eviction/cgroup logic, no 
need to reimplement LRU eviction list in SVM driver. Cgroup logic should be in 
ttm_resource layer. +Maarten.
 *ttm_resource is not a perfect match for SVM to allocate vram. It is still 
a big overhead. The *bo* member of ttm_resource is not needed for SVM - this 
might end up with invasive changes to ttm...need to look into more details


Overhead is a problem. We'd want to be able to allocate, free and evict 
memory at a similar granularity as our preferred migration and page 
fault granularity, which defaults to 2MB in our SVM implementation.





2) svm code allocate memory directly from drm-buddy allocator, and expose 
memory eviction functions from both ttm and svm so they can evict memory from 
each other. For example, expose the ttm_mem_evict_first function from ttm side 
so hmm/svm code can call it; expose a similar function from svm side so ttm can 
evict hmm memory.


I like this option. One thing that needs some thought with this is how 
to get some semblance of fairness between the two types of clients. 
Basically how to choose what to evict. And what share of the available 
memory does each side get to use on average. E.g. an idle client may get 
all its memory evicted while a busy client may get a bigger share of the 
available memory.
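Just to illustrate one naive way to reason about that (a sketch only, not 
something anyone has proposed in this thread): give each side a nominal share of 
vram and evict from whichever side is furthest over its share.

#include <stdint.h>
#include <stdio.h>

enum side { SIDE_TTM, SIDE_SVM };

static enum side pick_victim_side(uint64_t ttm_used, uint64_t svm_used,
                                  uint64_t capacity, double ttm_share)
{
        /* Overshoot relative to each side's nominal share of vram. */
        double ttm_over = (double)ttm_used - ttm_share * (double)capacity;
        double svm_over = (double)svm_used - (1.0 - ttm_share) * (double)capacity;

        return ttm_over >= svm_over ? SIDE_TTM : SIDE_SVM;
}

int main(void)
{
        /* 60/40 nominal split of an 8GB device; SVM is over its share. */
        enum side s = pick_victim_side(3ull << 30, 5ull << 30, 8ull << 30, 0.6);

        printf("evict from: %s\n", s == SIDE_TTM ? "TTM" : "SVM");
        return 0;
}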


Regards,
  Felix





Today we don't know which approach is better. I will work on some 
proof-of-concept code, starting with approach #1 first.

Btw, I talked with application engineers and they said most applications 
actually use a mixture of gem_bo create and malloc, so we definitely need to 
solve this problem.

Cheers,
Oak


-Original Message-
From: Christian König 
Sent: August 16, 2023 2:06 AM
To: Zeng, Oak ; Felix Kuehling ;
Thomas Hellström ; Brost, Matthew
; Vishwanathapura, Niranjana
; Welty, Brian ;
Philip Yang ; intel...@lists.freedesktop.org; dri-
de...@lists.freedesktop.org
Subject: Re: Implement svm without BO concept in xe driver

Hi Oak,

yeah, I completely agree with you and Felix. The main problem here is
getting the memory pressure visible on both sides.

At the moment I have absolutely no idea how to handle that, maybe
something like the ttm_resource object shared between TTM and HMM?

Regards,
Christian.

Am 16.08.23 um 05:47 schrieb Zeng, Oak:

Hi Felix,

It is great to hear from you!

When I implement the HMM-based SVM for intel devices, I found this

interesting problem: HMM uses struct page based memory management scheme
which is completely different against the BO/TTM style memory management
philosophy. Writing SVM code upon the BO/TTM concept seems overkill and
awkward. So I thought we better make the SVM code BO-less and TTM-less. But
on the other hand, currently vram eviction and cgroup memory accounting are all
hooked to the TTM layer, which means a TTM-less SVM driver won't be able to
evict vram allocated through TTM/gpu_vram_mgr.

Ideally HMM migration should use drm-buddy for vram allocation, but we need

to solve this TTM/HMM mutual eviction problem as you pointed out (I am
working with application engineers to figure out whether mutual eviction can
truly benefit applications). Maybe we can implement a TTM-less vram
management block which can be shared b/t the HMM-based driver and the BO-
based driver:

 * allocate/free memory from drm-buddy, buddy-block based
 * memory eviction logics, allow driver to specify which allocation is 
evictable
 * memory accounting, cgroup logic

Maybe such a block can be placed at drm layer (say, call it drm_vram_mgr for

now), so it can be shared b/t amd and intel. So I involved amd folks. Today both
amd and intel-xe driver implemented a TTM-based vram manager which doesn't
serve above design goal. Once the drm_vram_mgr is implemented, both amd
and intel's BO-based/TTM-based vram manager, and the HMM-based vram
manager can call into this drm-vram-mgr.

Thanks again,
Oak


-Original Message-
From: Felix Kuehling 
Sent: August 15, 2023 6:17 PM
To: Zeng, Oak ; Thomas Hellström
; Brost, Matthew
; Vishwanathapura, Niranjana
; Welty, Brian

;

Christian König ; Philip Yang
; intel...@lists.freedesktop.org; dri-
de...@lists.freedesktop.org
Subject: Re: Implement svm without BO concept in xe driver

Hi Oak,

I'm not sure what you're looking for from AMD? Are we just CC'ed FYI? Or
are you looking for comments about

* Our plans for VRAM management with HMM
* Our experience with BO-based VRAM management
* Something else?

IMO, having separate memory pools for HMM and TTM is a non-starter for
AMD. We need access to the full VRAM in either of the APIs

RE: Implement svm without BO concept in xe driver

2023-08-16 Thread Zeng, Oak
I spoke with Thomas. We discussed two approaches:

1) make ttm_resource a central place for vram management functions such as 
eviction, cgroup memory accounting. Both the BO-based driver and BO-less SVM 
codes call into ttm_resource_alloc/free functions for vram allocation/free.
*This way BO driver and SVM driver shares the eviction/cgroup logic, no 
need to reimplement LRU eviction list in SVM driver. Cgroup logic should be in 
ttm_resource layer. +Maarten.
*ttm_resource is not a perfect match for SVM to allocate vram. It is still 
a big overhead. The *bo* member of ttm_resource is not needed for SVM - this 
might end up with invasive changes to ttm...need to look into more details

2) svm code allocate memory directly from drm-buddy allocator, and expose 
memory eviction functions from both ttm and svm so they can evict memory from 
each other. For example, expose the ttm_mem_evict_first function from ttm side 
so hmm/svm code can call it; expose a similar function from svm side so ttm can 
evict hmm memory.
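A self-contained sketch of what approach #2 could look like; the function names 
are made up for illustration (only ttm_mem_evict_first, mentioned in a comment, 
is a real TTM symbol). Each side exposes an evict-one hook and a tiny arbiter 
alternates between them so neither side is always shrunk first:

#include <stdbool.h>
#include <stdio.h>

struct evict_arbiter {
        bool (*ttm_evict_first)(void);  /* e.g. would wrap ttm_mem_evict_first() */
        bool (*svm_evict_first)(void);  /* the equivalent hook on the SVM side */
        int next;                       /* round-robin cursor */
};

/* Evict one unit of memory, alternating between the two clients. Returns false
 * once neither side has anything left to give up. */
static bool evict_one(struct evict_arbiter *a)
{
        for (int i = 0; i < 2; i++) {
                bool ok = (a->next == 0) ? a->ttm_evict_first()
                                         : a->svm_evict_first();

                a->next ^= 1;
                if (ok)
                        return true;
        }
        return false;
}

/* Dummy hooks so the sketch compiles and runs on its own. */
static int ttm_left = 2, svm_left = 1;
static bool fake_ttm(void) { return ttm_left-- > 0; }
static bool fake_svm(void) { return svm_left-- > 0; }

int main(void)
{
        struct evict_arbiter a = { fake_ttm, fake_svm, 0 };
        int evicted = 0;

        while (evict_one(&a))
                evicted++;
        printf("evicted %d blocks\n", evicted);
        return 0;
}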


Today we don't know which approach is better. I will work on some 
proof-of-concept code, starting with approach #1 first.

Btw, I talked with application engineers and they said most applications 
actually use a mixture of gem_bo create and malloc, so we definitely need to 
solve this problem. 

Cheers,
Oak

> -Original Message-
> From: Christian König 
> Sent: August 16, 2023 2:06 AM
> To: Zeng, Oak ; Felix Kuehling ;
> Thomas Hellström ; Brost, Matthew
> ; Vishwanathapura, Niranjana
> ; Welty, Brian ;
> Philip Yang ; intel...@lists.freedesktop.org; dri-
> de...@lists.freedesktop.org
> Subject: Re: Implement svm without BO concept in xe driver
> 
> Hi Oak,
> 
> yeah, I completely agree with you and Felix. The main problem here is
> getting the memory pressure visible on both sides.
> 
> At the moment I have absolutely no idea how to handle that, maybe
> something like the ttm_resource object shared between TTM and HMM?
> 
> Regards,
> Christian.
> 
> Am 16.08.23 um 05:47 schrieb Zeng, Oak:
> > Hi Felix,
> >
> > It is great to hear from you!
> >
> > When I implement the HMM-based SVM for intel devices, I found this
> interesting problem: HMM uses struct page based memory management scheme
> which is completely different against the BO/TTM style memory management
> philosophy. Writing SVM code upon the BO/TTM concept seems overkill and
> awkward. So I thought we better make the SVM code BO-less and TTM-less. But
> on the other hand, currently vram eviction and cgroup memory accounting are 
> all
> hooked to the TTM layer, which means a TTM-less SVM driver won't be able to
> evict vram allocated through TTM/gpu_vram_mgr.
> >
> > Ideally HMM migration should use drm-buddy for vram allocation, but we need
> to solve this TTM/HMM mutual eviction problem as you pointed out (I am
> working with application engineers to figure out whether mutual eviction can
> truly benefit applications). Maybe we can implement a TTM-less vram
> management block which can be shared b/t the HMM-based driver and the BO-
> based driver:
> > * allocate/free memory from drm-buddy, buddy-block based
> > * memory eviction logics, allow driver to specify which allocation is 
> > evictable
> > * memory accounting, cgroup logic
> >
> > Maybe such a block can be placed at drm layer (say, call it drm_vram_mgr for
> now), so it can be shared b/t amd and intel. So I involved amd folks. Today 
> both
> amd and intel-xe driver implemented a TTM-based vram manager which doesn't
> serve above design goal. Once the drm_vram_mgr is implemented, both amd
> and intel's BO-based/TTM-based vram manager, and the HMM-based vram
> manager can call into this drm-vram-mgr.
> >
> > Thanks again,
> > Oak
> >
> >> -Original Message-
> >> From: Felix Kuehling 
> >> Sent: August 15, 2023 6:17 PM
> >> To: Zeng, Oak ; Thomas Hellström
> >> ; Brost, Matthew
> >> ; Vishwanathapura, Niranjana
> >> ; Welty, Brian
> ;
> >> Christian König ; Philip Yang
> >> ; intel...@lists.freedesktop.org; dri-
> >> de...@lists.freedesktop.org
> >> Subject: Re: Implement svm without BO concept in xe driver
> >>
> >> Hi Oak,
> >>
> >> I'm not sure what you're looking for from AMD? Are we just CC'ed FYI? Or
> >> are you looking for comments about
> >>
> >>* Our plans for VRAM management with HMM
> >>* Our experience with BO-based VRAM management
> >>* Something else?
> >>
> >> IMO, having separate memory pools for HMM and TTM is a non-starter for
> >> AMD. We need access to the full VRAM in either of the APIs for it to be useful.

Re: Implement svm without BO concept in xe driver

2023-08-16 Thread Christian König

Hi Oak,

yeah, I completely agree with you and Felix. The main problem here is 
getting the memory pressure visible on both sides.


At the moment I have absolutely no idea how to handle that, maybe 
something like the ttm_resource object shared between TTM and HMM?


Regards,
Christian.

On 16.08.23 at 05:47, Zeng, Oak wrote:

Hi Felix,

It is great to hear from you!

When I implement the HMM-based SVM for intel devices, I found this interesting 
problem: HMM uses struct page based memory management scheme which is 
completely different against the BO/TTM style memory management philosophy. 
Writing SVM code upon the BO/TTM concept seems overkill and awkward. So I 
thought we better make the SVM code BO-less and TTM-less. But on the other 
hand, currently vram eviction and cgroup memory accounting are all hooked to 
the TTM layer, which means a TTM-less SVM driver won't be able to evict vram 
allocated through TTM/gpu_vram_mgr.

Ideally HMM migration should use drm-buddy for vram allocation, but we need to 
solve this TTM/HMM mutual eviction problem as you pointed out (I am working 
with application engineers to figure out whether mutual eviction can truly 
benefit applications). Maybe we can implement a TTM-less vram management block 
which can be shared b/t the HMM-based driver and the BO-based driver:
* allocate/free memory from drm-buddy, buddy-block based
* memory eviction logics, allow driver to specify which allocation is 
evictable
* memory accounting, cgroup logic

Maybe such a block can be placed at drm layer (say, call it drm_vram_mgr for 
now), so it can be shared b/t amd and intel. So I involved amd folks. Today 
both amd and intel-xe driver implemented a TTM-based vram manager which doesn't 
serve above design goal. Once the drm_vram_mgr is implemented, both amd and 
intel's BO-based/TTM-based vram manager, and the HMM-based vram manager can 
call into this drm-vram-mgr.

Thanks again,
Oak


-Original Message-
From: Felix Kuehling 
Sent: August 15, 2023 6:17 PM
To: Zeng, Oak ; Thomas Hellström
; Brost, Matthew
; Vishwanathapura, Niranjana
; Welty, Brian ;
Christian König ; Philip Yang
; intel...@lists.freedesktop.org; dri-
de...@lists.freedesktop.org
Subject: Re: Implement svm without BO concept in xe driver

Hi Oak,

I'm not sure what you're looking for from AMD? Are we just CC'ed FYI? Or
are you looking for comments about

   * Our plans for VRAM management with HMM
   * Our experience with BO-based VRAM management
   * Something else?

IMO, having separate memory pools for HMM and TTM is a non-starter for
AMD. We need access to the full VRAM in either of the APIs for it to be
useful. That also means, we need to handle memory pressure in both
directions. That's one of the main reasons we went with the BO-based
approach initially. I think in the long run, using the buddy allocator,
or the amdgpu_vram_mgr directly for HMM migrations would be better,
assuming we can handle memory pressure in both directions between HMM
and TTM sharing the same pool of physical memory.

Regards,
    Felix


On 2023-08-15 16:34, Zeng, Oak wrote:

Also + Christian

Thanks,

Oak

*From:*Intel-xe  *On Behalf Of
*Zeng, Oak
*Sent:* August 14, 2023 11:38 PM
*To:* Thomas Hellström ; Brost,
Matthew ; Vishwanathapura, Niranjana
; Welty, Brian
; Felix Kuehling ;
Philip Yang ; intel...@lists.freedesktop.org;
dri-devel@lists.freedesktop.org
*Subject:* [Intel-xe] Implement svm without BO concept in xe driver

Hi Thomas, Matt and all,

This came up when I port i915 svm codes to xe driver. In i915
implementation, we have i915_buddy manage gpu vram and svm codes
directly call i915_buddy layer to allocate/free vram. There is no
gem_bo/ttm bo concept involved in the svm implementation.

In xe driver,  we have drm_buddy, xe_ttm_vram_mgr and ttm layer to
manage vram. Drm_buddy is initialized during xe_ttm_vram_mgr
initialization. Vram allocation/free is done through xe_ttm_vram_mgr
functions which call into drm_buddy layer to allocate vram blocks.

I plan to implement xe svm driver the same way as we did in i915,
which means there will not be bo concept in the svm implementation.
Drm_buddy will be passed to svm layer during vram initialization and
svm will allocate/free memory directly from drm_buddy, bypassing
ttm/xee vram manager. Here are a few considerations/things we are
aware of:

  1. This approach seems match hmm design better than bo concept. Our
 svm implementation will be based on hmm. In hmm design, each vram
 page is backed by a struct page. It is very easy to perform page
 granularity migrations (b/t vram and system memory). If BO concept
 is involved, we will have to split/remerge BOs during page
 granularity migrations.

  2. We have a prove of concept of this approach in i915, originally
 implemented by Niranjana. It seems work but it only has basic
 functionalities for now. We don’t have advanced features such as
 memory eviction etc.

  3. With this approach

RE: Implement svm without BO concept in xe driver

2023-08-15 Thread Zeng, Oak
Hi Felix,

It is great to hear from you!

While implementing the HMM-based SVM for intel devices, I found this interesting 
problem: HMM uses a struct-page-based memory management scheme, which is 
completely different from the BO/TTM style memory management philosophy. 
Writing SVM code on top of the BO/TTM concept seems overkill and awkward, so I 
thought we had better make the SVM code BO-less and TTM-less. On the other 
hand, vram eviction and cgroup memory accounting are currently all hooked into 
the TTM layer, which means a TTM-less SVM driver won't be able to evict vram 
allocated through TTM/gpu_vram_mgr.

Ideally HMM migration should use drm-buddy for vram allocation, but we need to 
solve the TTM/HMM mutual eviction problem you pointed out (I am working with 
application engineers to figure out whether mutual eviction can truly benefit 
applications). Maybe we can implement a TTM-less vram management block which can 
be shared b/t the HMM-based driver and the BO-based driver:
   * allocate/free memory from drm-buddy, buddy-block based
   * memory eviction logic, allowing the driver to specify which allocations 
are evictable
   * memory accounting, cgroup logic

Maybe such a block can be placed at the drm layer (say, call it drm_vram_mgr 
for now), so it can be shared b/t amd and intel - that is why I involved the 
amd folks. Today both the amd and intel-xe drivers implement a TTM-based vram 
manager which doesn't serve the above design goal. Once drm_vram_mgr is 
implemented, both amd's and intel's BO-based/TTM-based vram managers and the 
HMM-based vram manager can call into it.
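To make this a bit more concrete, below is a strawman of what the shared 
drm_vram_mgr interface could look like. Every name here is made up for 
illustration; the only point is that a BO-backed allocation and an SVM range 
would hold the same kind of block, sit on the same LRU, and provide an evict 
callback so memory pressure can flow in both directions:

#include <linux/list.h>
#include <linux/types.h>

/* All of this is hypothetical - just the shape of a shared drm_vram_mgr. */

struct drm_vram_mgr;		/* wraps a drm_buddy + LRU + cgroup charging */
struct drm_vram_block;

struct drm_vram_owner_funcs {
	/* called under memory pressure; the owner releases the block's vram */
	int (*evict)(struct drm_vram_block *block);
};

struct drm_vram_block {
	struct list_head buddy_blocks;	/* list of struct drm_buddy_block */
	struct list_head lru;		/* shared eviction LRU of the mgr */
	void *owner;			/* ttm_buffer_object or svm range */
	const struct drm_vram_owner_funcs *funcs;
	bool evictable;
};

int drm_vram_mgr_alloc(struct drm_vram_mgr *mgr, u64 size, void *owner,
		       const struct drm_vram_owner_funcs *funcs,
		       bool evictable, struct drm_vram_block **block);
void drm_vram_mgr_free(struct drm_vram_mgr *mgr, struct drm_vram_block *block);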

Thanks again,
Oak

> -Original Message-
> From: Felix Kuehling 
> Sent: August 15, 2023 6:17 PM
> To: Zeng, Oak ; Thomas Hellström
> ; Brost, Matthew
> ; Vishwanathapura, Niranjana
> ; Welty, Brian ;
> Christian König ; Philip Yang
> ; intel...@lists.freedesktop.org; dri-
> de...@lists.freedesktop.org
> Subject: Re: Implement svm without BO concept in xe driver
> 
> Hi Oak,
> 
> I'm not sure what you're looking for from AMD? Are we just CC'ed FYI? Or
> are you looking for comments about
> 
>   * Our plans for VRAM management with HMM
>   * Our experience with BO-based VRAM management
>   * Something else?
> 
> IMO, having separate memory pools for HMM and TTM is a non-starter for
> AMD. We need access to the full VRAM in either of the APIs for it to be
> useful. That also means, we need to handle memory pressure in both
> directions. That's one of the main reasons we went with the BO-based
> approach initially. I think in the long run, using the buddy allocator,
> or the amdgpu_vram_mgr directly for HMM migrations would be better,
> assuming we can handle memory pressure in both directions between HMM
> and TTM sharing the same pool of physical memory.
> 
> Regards,
>    Felix
> 
> 
> On 2023-08-15 16:34, Zeng, Oak wrote:
> >
> > Also + Christian
> >
> > Thanks,
> >
> > Oak
> >
> > *From:*Intel-xe  *On Behalf Of
> > *Zeng, Oak
> > *Sent:* August 14, 2023 11:38 PM
> > *To:* Thomas Hellström ; Brost,
> > Matthew ; Vishwanathapura, Niranjana
> > ; Welty, Brian
> > ; Felix Kuehling ;
> > Philip Yang ; intel...@lists.freedesktop.org;
> > dri-devel@lists.freedesktop.org
> > *Subject:* [Intel-xe] Implement svm without BO concept in xe driver
> >
> > Hi Thomas, Matt and all,
> >
> > This came up when I port i915 svm codes to xe driver. In i915
> > implementation, we have i915_buddy manage gpu vram and svm codes
> > directly call i915_buddy layer to allocate/free vram. There is no
> > gem_bo/ttm bo concept involved in the svm implementation.
> >
> > In xe driver,  we have drm_buddy, xe_ttm_vram_mgr and ttm layer to
> > manage vram. Drm_buddy is initialized during xe_ttm_vram_mgr
> > initialization. Vram allocation/free is done through xe_ttm_vram_mgr
> > functions which call into drm_buddy layer to allocate vram blocks.
> >
> > I plan to implement xe svm driver the same way as we did in i915,
> > which means there will not be bo concept in the svm implementation.
> > Drm_buddy will be passed to svm layer during vram initialization and
> > svm will allocate/free memory directly from drm_buddy, bypassing
> > ttm/xee vram manager. Here are a few considerations/things we are
> > aware of:
> >
> >  1. This approach seems match hmm design better than bo concept. Our
> > svm implementation will be based on hmm. In hmm design, each vram
> > page is backed by a struct page. It is very easy to perform page
> > granularity migrations (b/t vram and system memory). If BO concept
> > is involved, we will have to split/remerge BOs during page
> > granularity migrations.

Re: Implement svm without BO concept in xe driver

2023-08-15 Thread Felix Kuehling

Hi Oak,

I'm not sure what you're looking for from AMD? Are we just CC'ed FYI? Or 
are you looking for comments about


 * Our plans for VRAM management with HMM
 * Our experience with BO-based VRAM management
 * Something else?

IMO, having separate memory pools for HMM and TTM is a non-starter for 
AMD. We need access to the full VRAM in either of the APIs for it to be 
useful. That also means, we need to handle memory pressure in both 
directions. That's one of the main reasons we went with the BO-based 
approach initially. I think in the long run, using the buddy allocator, 
or the amdgpu_vram_mgr directly for HMM migrations would be better, 
assuming we can handle memory pressure in both directions between HMM 
and TTM sharing the same pool of physical memory.
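
To illustrate what I mean by handling memory pressure in both directions, here 
is a purely hypothetical sketch (none of these names exist today): both the TTM 
path and the HMM path register a shrink callback with whatever ends up owning 
the physical vram pool, so an allocation failure on either side can reclaim 
from the other.

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/types.h>

/* Purely illustrative - no such interface exists today. */

struct vram_pool_client {
	struct list_head link;
	/* try to release at least @size bytes back to the shared pool */
	int (*shrink)(struct vram_pool_client *client, u64 size);
};

struct vram_pool {
	struct list_head clients;	/* TTM and HMM would both register here */
};

/* Walk the other clients until enough vram has been released. */
static int vram_pool_reclaim(struct vram_pool *pool,
			     struct vram_pool_client *self, u64 size)
{
	struct vram_pool_client *client;

	list_for_each_entry(client, &pool->clients, link) {
		if (client == self)
			continue;
		if (!client->shrink(client, size))
			return 0;
	}
	return -ENOSPC;
}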


Regards,
  Felix


On 2023-08-15 16:34, Zeng, Oak wrote:


Also + Christian

Thanks,

Oak

*From:*Intel-xe  *On Behalf Of 
*Zeng, Oak

*Sent:* August 14, 2023 11:38 PM
*To:* Thomas Hellström ; Brost, 
Matthew ; Vishwanathapura, Niranjana 
; Welty, Brian 
; Felix Kuehling ; 
Philip Yang ; intel...@lists.freedesktop.org; 
dri-devel@lists.freedesktop.org

*Subject:* [Intel-xe] Implement svm without BO concept in xe driver

Hi Thomas, Matt and all,

This came up when I port i915 svm codes to xe driver. In i915 
implementation, we have i915_buddy manage gpu vram and svm codes 
directly call i915_buddy layer to allocate/free vram. There is no 
gem_bo/ttm bo concept involved in the svm implementation.


In xe driver,  we have drm_buddy, xe_ttm_vram_mgr and ttm layer to 
manage vram. Drm_buddy is initialized during xe_ttm_vram_mgr 
initialization. Vram allocation/free is done through xe_ttm_vram_mgr 
functions which call into drm_buddy layer to allocate vram blocks.


I plan to implement xe svm driver the same way as we did in i915, 
which means there will not be bo concept in the svm implementation. 
Drm_buddy will be passed to svm layer during vram initialization and 
svm will allocate/free memory directly from drm_buddy, bypassing 
ttm/xee vram manager. Here are a few considerations/things we are 
aware of:


 1. This approach seems match hmm design better than bo concept. Our
svm implementation will be based on hmm. In hmm design, each vram
page is backed by a struct page. It is very easy to perform page
granularity migrations (b/t vram and system memory). If BO concept
is involved, we will have to split/remerge BOs during page
granularity migrations.

 2. We have a prove of concept of this approach in i915, originally
implemented by Niranjana. It seems work but it only has basic
functionalities for now. We don’t have advanced features such as
memory eviction etc.

 3. With this approach, vram will divided into two separate pools: one
for xe_gem_created BOs and one for vram used by svm. Those two
pools are not connected: memory pressure from one pool won’t be
able to evict vram from another pool. At this point, we don’t
whether this aspect is good or not.

 4. Amdkfd svm went different approach which is BO based. The benefit
of this approach is a lot of existing driver facilities (such as
memory eviction/cgroup/accounting) can be reused

Do you have any comment to this approach? Should I come back with a 
RFC of some POC codes?


Thanks,

Oak



RE: Implement svm without BO concept in xe driver

2023-08-15 Thread Zeng, Oak
Also + Christian

Thanks,
Oak

From: Intel-xe  On Behalf Of Zeng, Oak
Sent: August 14, 2023 11:38 PM
To: Thomas Hellström ; Brost, Matthew 
; Vishwanathapura, Niranjana 
; Welty, Brian ; 
Felix Kuehling ; Philip Yang ; 
intel...@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Subject: [Intel-xe] Implement svm without BO concept in xe driver

Hi Thomas, Matt and all,

This came up when I ported the i915 svm code to the xe driver. In the i915 
implementation, we have i915_buddy managing gpu vram, and the svm code directly 
calls the i915_buddy layer to allocate/free vram. There is no gem_bo/ttm bo 
concept involved in the svm implementation.

In the xe driver, we have drm_buddy, xe_ttm_vram_mgr and the ttm layer to manage 
vram. Drm_buddy is initialized during xe_ttm_vram_mgr initialization. Vram 
allocation/free is done through xe_ttm_vram_mgr functions, which call into the 
drm_buddy layer to allocate vram blocks.

I plan to implement the xe svm driver the same way as we did in i915, which means 
there will not be a bo concept in the svm implementation. Drm_buddy will be 
passed to the svm layer during vram initialization, and svm will allocate/free 
memory directly from drm_buddy, bypassing the ttm/xe vram manager. Here are a few 
considerations/things we are aware of:


  1.  This approach seems to match the hmm design better than the bo concept. Our 
svm implementation will be based on hmm. In hmm design, each vram page is backed 
by a struct page. It is very easy to perform page granularity migrations (b/t 
vram and system memory). If the BO concept is involved, we will have to 
split/remerge BOs during page granularity migrations.

  2.  We have a proof of concept of this approach in i915, originally implemented 
by Niranjana. It seems to work, but it only has basic functionality for now. We 
don't have advanced features such as memory eviction etc.

  3.  With this approach, vram will be divided into two separate pools: one for 
xe_gem_created BOs and one for vram used by svm. Those two pools are not 
connected: memory pressure from one pool won't be able to evict vram from 
another pool. At this point, we don't know whether this aspect is good or not.

  4.  Amdkfd svm went with a different approach, which is BO-based. The benefit 
of this approach is that a lot of existing driver facilities (such as memory 
eviction/cgroup/accounting) can be reused.



Do you have any comments on this approach? Should I come back with an RFC of some 
POC code?

Thanks,
Oak



Implement svm without BO concept in xe driver

2023-08-14 Thread Zeng, Oak
Hi Thomas, Matt and all,

This came up when I ported the i915 svm code to the xe driver. In the i915 
implementation, we have i915_buddy managing gpu vram, and the svm code directly 
calls the i915_buddy layer to allocate/free vram. There is no gem_bo/ttm bo 
concept involved in the svm implementation.

In the xe driver, we have drm_buddy, xe_ttm_vram_mgr and the ttm layer to manage 
vram. Drm_buddy is initialized during xe_ttm_vram_mgr initialization. Vram 
allocation/free is done through xe_ttm_vram_mgr functions, which call into the 
drm_buddy layer to allocate vram blocks.

I plan to implement the xe svm driver the same way as we did in i915, which means 
there will not be a bo concept in the svm implementation. Drm_buddy will be 
passed to the svm layer during vram initialization, and svm will allocate/free 
memory directly from drm_buddy, bypassing the ttm/xe vram manager. A rough sketch 
of that allocation path follows, and after it a few considerations/things we are 
aware of:
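The sketch below only shows how buddy blocks allocated by svm would be turned 
into the destination pfn array for migrate_vma. The drm_buddy helpers and 
migrate_pfn() are existing API; the device-page base pfn and the function itself 
are placeholders for the real xe svm code.

#include <drm/drm_buddy.h>
#include <linux/migrate.h>

/*
 * Sketch: convert vram blocks allocated from drm_buddy into device-private
 * page pfns for migrate_vma. 'devm_base_pfn' is assumed to be the first pfn
 * of the region registered with devm_memremap_pages() for this tile.
 */
static int xe_svm_fill_dst_pfns(struct drm_buddy *mm,
				unsigned long devm_base_pfn,
				struct list_head *blocks,
				unsigned long *dst_pfns, unsigned long npages)
{
	struct drm_buddy_block *block;
	unsigned long i = 0;

	list_for_each_entry(block, blocks, link) {
		unsigned long pfn = devm_base_pfn +
			(drm_buddy_block_offset(block) >> PAGE_SHIFT);
		unsigned long nr = drm_buddy_block_size(mm, block) >> PAGE_SHIFT;

		while (nr-- && i < npages)
			dst_pfns[i++] = migrate_pfn(pfn++);
	}

	return i == npages ? 0 : -EINVAL;	/* not enough vram allocated */
}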


  1.  This approach seems to match the hmm design better than the bo concept. Our 
svm implementation will be based on hmm. In hmm design, each vram page is backed 
by a struct page (see the sketch after this list for how that backing is 
typically set up). It is very easy to perform page granularity migrations (b/t 
vram and system memory). If the BO concept is involved, we will have to 
split/remerge BOs during page granularity migrations.

  2.  We have a proof of concept of this approach in i915, originally implemented 
by Niranjana. It seems to work, but it only has basic functionality for now.

  3.  With this approach, vram will be divided into two separate pools: one for 
xe_gem_created BOs and one for vram used by svm. Those two pools are not 
connected: memory pressure from one pool won't be able to evict vram from 
another pool. At this point, we don't know whether this aspect is good or not.

  4.  Amdkfd svm went with a different approach, which is BO-based. The benefit 
of this approach is that a lot of existing driver facilities can be reused.
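
For reference, the struct-page backing mentioned in item 1 is typically set up 
the way existing hmm users (nouveau, amdkfd) do it: a region is carved out of 
the CPU physical address space and remapped as MEMORY_DEVICE_PRIVATE. The 
xe-side names below (xe->svm_pagemap, xe_svm_pagemap_ops) are placeholders; the 
pagemap calls themselves are existing kernel API.

#include <linux/ioport.h>
#include <linux/memremap.h>

/* Placeholder ops - a real implementation needs .page_free and .migrate_to_ram. */
static const struct dev_pagemap_ops xe_svm_pagemap_ops;

static int xe_svm_devm_init(struct xe_device *xe, struct device *dev,
			    u64 vram_size)
{
	struct dev_pagemap *pgmap = &xe->svm_pagemap;	/* assumed field */
	struct resource *res;
	void *addr;

	/* Reserve a chunk of CPU physical address space for the device pages. */
	res = devm_request_free_mem_region(dev, &iomem_resource, vram_size);
	if (IS_ERR(res))
		return PTR_ERR(res);

	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->range.start = res->start;
	pgmap->range.end = res->end;
	pgmap->nr_range = 1;
	pgmap->ops = &xe_svm_pagemap_ops;
	pgmap->owner = xe;

	/* Creates one struct page per PAGE_SIZE of vram. */
	addr = devm_memremap_pages(dev, pgmap);
	return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}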



Do you have any comments on this approach? Should I come back with an RFC of some 
POC code?

Thanks,
Oak