Quoting Roland Dreier <[EMAIL PROTECTED]>:
> Subject: Re: [openib-general] [PATCH] fmr support in mthca
> 
> Thanks for implementing this.  You saved me a lot of work and Libor
> will be very happy when he gets back next week.

Good, glad to help. I will try to address your comments next week
(it's already the weekend here).

> Some comments from my first read through:
> 
>  > This patch implements FMR support. I also rolled into it two fixes
>  > for regular MRs that I posted previously; let me know if it's a problem.
> 
> No problem although I'll apply them separately.
> 
>  > This seems to be working fine for me, although I only did relatively basic
>  > tests. Both Tavor and Arbel Native modes are supported. I made some
>  > tradeoffs for simplicity; let me know what you think:
>  > - for Tavor, I keep two pages mapped for each FMR: one for MPT and one for
>  >   MTT access. This spends more kernel virtual memory than necessary,
>  >   since many MPTs could share one page. Alternatives are:
>  >   map/unmap I/O memory on each FMR map/unmap request, or
>  >   keep an intermediate table to map each page only once.
> 
> I don't think this is acceptable.  Each ioremap has to map at least
> one page plus a guard page.  With two ioremaps per FMR, every FMR is
> using 16K (or more) of vmalloc space.  On 64 bit archs, this doesn't
> matter, but on a large memory i386 machine, there's less than 128 MB
> of vmalloc space available (possibly a lot less if someone is using a
> video card with a big frame buffer or something).  That means we're
> limited to a few thousand FMRs, which isn't enough.
> 
> What if we just reserve something like 64K MPTs and MTTs for FMRs and
> ioremap everything at driver startup?  That would only use a few MB of
> vmalloc space and probably simplify the code too.

I don't like these pre-allocations: if someone is only using SDP and IP
over IB, it seems he'll hardly need any regular regions. And 64K MTTs
with a 4K page size only cover up to 256 MB of memory.
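
To put rough numbers on both approaches (back-of-the-envelope only, assuming
4 KB pages, 8-byte MTT entries, and the ~128 MB i386 vmalloc arena you mention):

    Per-FMR ioremaps (current patch, Tavor):
        2 ioremaps x (1 page + 1 guard page) x 4 KB  = 16 KB of vmalloc per FMR
        128 MB vmalloc / 16 KB per FMR              ~= 8K FMRs maximum

    Pre-allocated pool of 64K MPTs/MTTs mapped at startup:
        64K MTT entries x 8 bytes                    = 512 KB of ICM to map
        64K MTT entries x one 4 KB page each         = 256 MB of coverable memory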

My other problem with this approach is implementational: the existing
allocator and table code can be passed a reserved parameter, but they don't
have the ability to allocate out of that reserved pool. So we'd have to
allocate from a separate allocator and take care that keys do not conflict.
This gets a bit complicated.
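
For illustration, the kind of key-space split I mean would look something
like the sketch below. None of these names exist in the driver; it is
hypothetical code just to show how two disjoint index ranges keep FMR keys
and regular MR keys from ever colliding (locking omitted for brevity):

    /* Hypothetical sketch only -- not existing mthca code. */
    #include <linux/bitmap.h>
    #include <linux/bitops.h>

    #define NUM_MPTS        (1 << 17)   /* made-up total MPT count     */
    #define NUM_FMR_MPTS    (1 << 16)   /* made-up FMR-reserved range  */

    static DECLARE_BITMAP(fmr_map, NUM_FMR_MPTS);
    static DECLARE_BITMAP(mr_map,  NUM_MPTS - NUM_FMR_MPTS);

    /* FMR key indices come from [0, NUM_FMR_MPTS) ... */
    static int alloc_fmr_index(void)
    {
            int i = find_first_zero_bit(fmr_map, NUM_FMR_MPTS);

            if (i >= NUM_FMR_MPTS)
                    return -1;
            set_bit(i, fmr_map);
            return i;
    }

    /* ... and regular MR key indices from [NUM_FMR_MPTS, NUM_MPTS),
     * so the two can never conflict. */
    static int alloc_mr_index(void)
    {
            int i = find_first_zero_bit(mr_map, NUM_MPTS - NUM_FMR_MPTS);

            if (i >= NUM_MPTS - NUM_FMR_MPTS)
                    return -1;
            set_bit(i, mr_map);
            return NUM_FMR_MPTS + i;
    }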

Maybe do something separate for 32-bit kernels (e.g., disable FMR
support)?

>  > - the ICM that holds the MPTs/MTTs is linearly scanned, and this is
>  >   repeated for each MTT on each FMR map. This may be improved somewhat
>  >   with some kind of iterator, but to really speed things up the ICM
>  >   data structure (a list of arrays) would have to be replaced by some
>  >   kind of tree.
> 
> I don't understand this.  I'm probably missing something but the
> addresses don't change after we allocate the FMR, right?  It seems we
> could just store the MPT/MTT address in the FMR structure the same way
> we do for Tavor mode.

Yes, but for MTTs the addresses may not be physically contiguous, unless
we want to limit FMRs to PAGE_SIZE/8 MTTs, which means 512 MTTs, i.e.
2 MB with a 4K FMR page size. And even with this limitation it seems
possible that the MTTs for a specific FMR start at a non-page-aligned
boundary.

So we'd need an array of pages per FMR, unlike Tavor.
Do you think it's a good idea?
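
Roughly, I'd expect the per-FMR bookkeeping to end up looking something like
this (illustrative field names only, not what's in the patch; kernel types
assumed):

    /* Sketch only -- not the actual structures from the patch. */
    #define FMR_MAX_MTT_PAGES 8     /* made-up limit, for illustration */

    struct tavor_fmr_priv {
            void __iomem *mpt;      /* ioremapped MPT entry            */
            u64 __iomem  *mtts;     /* ioremapped MTT page             */
    };

    struct arbel_fmr_priv {
            void *mpt;                            /* MPT entry in host-memory ICM   */
            u64  *mtt_pages[FMR_MAX_MTT_PAGES];   /* one pointer per (possibly
                                                     non-contiguous) ICM page       */
            int   mtt_offset;                     /* MTTs need not be page aligned  */
    };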

> Some more nitpicky comments below...
> 
>  > +/* Nonblocking. Callers must make sure the object exists by serializing against
>  > + * callers of get/put. */
>  > +void *mthca_table_find(struct mthca_dev *dev, struct mthca_icm_table *table,
>  > +                 int obj);
> 
> Can we just make this use the table mutex and only call it when
> allocating an FMR?

See above. But the restriction doesn't matter much for FMRs, because
the ICM ref count is incremented when the FMR is created, so they
satisfy this constraint.
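
For reference, the linear scan I mentioned above has roughly this shape
(simplified, with made-up chunk fields -- this is not the actual
mthca_table_find):

    #include <linux/list.h>

    /* Simplified sketch; the real ICM chunks carry more state than this. */
    struct icm_chunk {
            struct list_head list;
            u8  *buf;               /* virtual address of this chunk's memory */
            int  size;              /* bytes in this chunk                    */
    };

    static void *icm_find(struct list_head *chunks, int obj, int obj_size)
    {
            struct icm_chunk *chunk;
            int offset = obj * obj_size;

            /* "list of arrays": walk chunks until the offset falls inside one */
            list_for_each_entry(chunk, chunks, list) {
                    if (offset < chunk->size)
                            return chunk->buf + offset;
                    offset -= chunk->size;
            }
            return NULL;
    }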

The rest of the comments need to be addressed too; I'll start working on
them when I'm back on Sunday.

-- 
MST - Michael S. Tsirkin