Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Tamminen, Eero T
Hi,

On Tue, 2020-05-12 at 14:36 +, Jose Fonseca wrote:
> From: mesa-dev  on behalf of
> Tamminen, Eero T 
> I've done a lot of resource usage analysis at a former job[1], but I've
> never needed anything like that.  malloc etc. all reside in a
> separate shared library from Mesa, so calls to them always cross a
> dynamic library boundary, and therefore all of them can be caught with
> the dynamic linker features (LD_PRELOAD, LD_AUDIT...).
> 
> True, one can easily intercept all mallocs using that sort of dynamic
> linking trick, when doing full application interception.  (I've even
> done something of the sort on https://github.com/jrfonseca/memtrail ,
> mostly to hunt down memory leaks in LLVM.)  But the goal here is to
> intercept the OpenGL/Vulkan driver's malloc calls alone, not the
> application's mallocs -- which are difficult to segregate when doing
> whole-application interception.
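
For reference, a minimal glibc-style LD_PRELOAD malloc interposer of
the kind being discussed -- a sketch only, assuming glibc and
dlsym(RTLD_NEXT, ...); the per-thread guard matters because the
recording code itself may call malloc:

    /* build: cc -shared -fPIC preload.c -o preload.so -ldl
       run:   LD_PRELOAD=./preload.so some_app */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>

    static void *(*real_malloc)(size_t);
    static __thread int in_hook;        /* per-thread recursion guard */

    void *malloc(size_t size)
    {
       if (!real_malloc)
          real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
       void *ptr = real_malloc(size);
       if (!in_hook) {
          in_hook = 1;
          /* record ptr/size/backtrace here; anything in here that
             allocates re-enters malloc, which the guard short-circuits */
          in_hook = 0;
       }
       return ptr;
    }

This hooks every malloc in the process, which is exactly why the
attribution problem discussed below is the hard part.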

The only reason to do this that I can think of would be to report some
Mesa-specific metrics to an application at run-time.  But that would
need careful thought on how *applications* are supposed to *use* that
data, otherwise it's just "garbage in, garbage out".

What are the exact use-cases for Mesa-specific allocation data?


> For simplicity imagine you have only these shared objects:
>application (not controlled by us)
>libVulkan.so
>libstdc++.so
>libc.so
> 
> Now imagine you're intercepting malloc doing some sort of LD_PRELOAD
> interception, and malloc is called.  How do you know if it's a call
> done by the Vulkan driver, which should invoke the callback, or one
> done by the application, which should not invoke the Vulkan allocation
> callback?
> 
> One can look at the caller's IP address, but what if the caller is in
> libstdc++, which is used both by Vulkan and the app?  It is not
> immediately clear to which to bill the memory.  One would need to walk
> back the stack completely, which is complicated and not very
> reliable.

Over a decade ago it was unreliable, but not anymore.  Even stripped
binaries contain the frame information section (I think it's needed
e.g. for handling C++ exceptions).  So nowadays it's simple to use, and
glibc has a function for it.
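
That function is backtrace() from <execinfo.h>.  A minimal sketch of
collecting a stack trace from inside an allocation hook:

    #include <execinfo.h>

    static void record_backtrace(void)
    {
       void *frames[64];
       /* walks the frame/unwind information mentioned above */
       int n = backtrace(frames, 64);
       /* resolves addresses to "binary(function+offset)" lines;
          link with -rdynamic to get useful function names */
       backtrace_symbols_fd(frames, n, 2 /* stderr */);
    }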

For analysis, you don't do filtering while tracing, as that's too
limiting; you do it in the post-processing phase.


If you really need to do run-time per-library filtering, you could do
it based on the backtrace addresses, and whether they fall within the
address range where the library you're interested in is mapped.

For that, you need to hook library loading, so you can catch when Mesa
gets loaded and find out its (address-space-randomized) load address
range.
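
A sketch of that check using dladdr(), which maps a code address back
to the shared object containing it (the "vulkan" substring is just an
assumption for illustration):

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdbool.h>
    #include <string.h>

    /* true if any backtrace frame falls inside the given library */
    static bool backtrace_hits_library(void *frames[], int n,
                                       const char *libname)
    {
       for (int i = 0; i < n; i++) {
          Dl_info info;
          if (dladdr(frames[i], &info) && info.dli_fname &&
              strstr(info.dli_fname, libname))
             return true;
       }
       return false;
    }

    /* e.g. backtrace_hits_library(frames, n, "vulkan") */

dladdr() on every frame is the slow-but-simple variant; caching each
library's load range via dl_iterate_phdr() when it gets loaded is the
faster approach described above.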


> Imagine one guesses wrong -- the malloc interceptor believes the
> malloc call is done by the Vulkan driver, and calls the application
> callback, which then calls malloc again, and the interceptor guesses
> wrong again, resulting in an infinite recursion loop.

If you find your own code addresses from the backtrace, STOP. :-)


> Could you be confusing this with trying to catch some Mesa-specific
> function, where the dynamic linker can catch only calls from the
> application to Mesa, but not calls within the Mesa library itself
> (as they don't cross a dynamic library boundary)?
> 
> My goal from the beginning is intercepting all mallocs/frees done by
> the Mesa OpenGL/Vulkan driver, and only those.

If this is for analysis, I would do it in the post-processing phase
(with sp-rtrace): just ask the post-processor to filter in allocation
backtraces going through the library I'm interested in.


...
> Yes, indeed Linux has much better tooling for this than
> Windows.  Note that memory debugging on Windows is just one of our
> needs.  The other is being able to run the Mesa driver on an embedded
> system with a fixed amount of memory (a separate budget for Mesa
> mallocs.)

If that embedded target is still running something Linux-like, but
doesn't have enough memory for collecting the data, I would just stream
the data out of the device, or store it to a file, and do the analysis
in the post-processing phase.

If that's not an option, I would do memory analysis on a system which
has reasonable tools, and "emulate" the relevant embedded device
functional differences on it.  If the issue isn't reproducible that
way, it may be a thread/data race, which needs different tools
(valgrind, mutrace etc.).


- Eero

PS. In the maemo-tools project, there's also a tool called "functracer"
which can, on ARM & x86, attach to an already running process and start
collecting resource information.  The sp-rtrace tooling can post-process
its output.

(It basically does in user space, with the ptrace syscall, what ftrace
does in kernel space.  That requires some arch-specific assembly, so
unfortunately it's not very portable.)



Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Jose Fonseca
On 05/11/2020 10:13 AM, Jose Fonseca wrote:
> Hi,
>
> To give everybody a bit of background context, this email comes from
> https://gitlab.freedesktop.org/mesa/mesa/-/issues/2911 .
>
> The short story is that Gallium components (but not Mesa) used to have
> their malloc/free calls intercepted, to satisfy certain needs: 1) memory
> debugging on Windows, 2) memory accounting on embedded systems.  But
> with the unification of Gallium into Mesa, the gallium vs non-gallium
> division got blurred, leading to some mallocs being intercepted but not
> the respective frees, and vice-versa.
>
>
> I admit that trying to intercept mallocs/frees for some components and
> not others is error prone.  We could get this going again; it's
> doable, but it's possible it would keep causing trouble, for us or
> others, over and over again.
>
>
> The two needs mentioned above were mostly VMware's needs.  So I've
> reevaluated, and I /think/ that with some trickery we can satisfy those
> two needs differently.  (Without widespread source code changes.)
>
>
> On the other hand, VMware is probably not the only one to have such
> needs.  In fact the Vulkan spec added memory callbacks precisely with
> the same use cases as ours, as seen at
> https://www.khronos.org/registry/vulkan/specs/1.2/html/chap10.html#memory-host
> which states:
>
> /Vulkan provides applications the opportunity to perform host memory
> allocations on behalf of the Vulkan implementation. If this feature
> is not used, the implementation will perform its own memory
> allocations. Since most memory allocations are off the critical
> path, this is not meant as a performance feature. *Rather, this can
> be useful for certain embedded systems, for debugging purposes (e.g.
> putting a guard page after all host allocations), or for memory
> allocation logging.*/
>
>
> And I know there were people interested in having Mesa drivers on
> embedded devices in the past (the old Tungsten Graphics having even
> been hired multiple times to do so), and I'm pretty sure they exist again.
>
>
>
> Therefore, rather than shying away from memory allocation abstractions
> now, I wonder if now isn't the time to actually double down on them
> and ensure we do so comprehensively throughout the whole of Mesa, all
> drivers?
> After all Mesa traditionally always had MALLOC*/CALLOC*/FREE wrappers
> around malloc/free.  As so many other projects do.
>
>
>
> More concretely, I'd like to propose that we:
>
>   * ensure all components use MALLOC*/CALLOC*/FREE and never
> malloc/calloc/free directly (unless interfacing with a 3rd party
> which expects memory to be allocated/freed with malloc/free directly)
>   * perhaps consider renaming MALLOC -> _mesa_malloc etc. while we're at it
>   * introduce a mechanism to quickly catch any mistaken use of
> malloc/calloc/free, regardless of the compiler/OS used:
>   o #define malloc/free/calloc as malloc_do_not_use/free_do_not_use
> to trigger compilation errors, except in files which explicitly
> opt out of this (source files which need to interface with a 3rd
> party, or source files which implement the callbacks)
>   o add a cookie to MALLOC/CALLOC/FREE memory to ensure it's not
> inadvertently mixed with malloc/calloc/free (see the sketch below)
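
A minimal sketch of the cookie idea (hypothetical names, not existing
Mesa code):

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define MESA_COOKIE 0x6d6573614d414c4cull   /* "mesaMALL" */

    void *_mesa_malloc(size_t size)
    {
       /* two words of header keep the payload 16-byte aligned on
          common 64-bit ABIs */
       uint64_t *hdr = malloc(size + 2 * sizeof(uint64_t));
       if (!hdr)
          return NULL;
       hdr[0] = MESA_COOKIE;
       return hdr + 2;
    }

    void _mesa_free(void *ptr)
    {
       if (!ptr)
          return;
       uint64_t *hdr = (uint64_t *)ptr - 2;
       /* catches plain malloc'ed pointers reaching _mesa_free, and
          double frees */
       assert(hdr[0] == MESA_COOKIE);
       hdr[0] = 0;
       free(hdr);
    }

    /* and, in a header included everywhere except opted-out files:
       #define malloc  malloc_do_not_use
       #define free    free_do_not_use */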
>
> The end goal is that we should be able to have a memory allocation
> abstraction which can be used for all the needs above: memory debugging,
> memory accounting, and satisfying Vulkan host memory callbacks.
>
>
> Some might retort: why not just play some tricks with the linker, and
> intercept all malloc/free calls, without actually having to modify any
> source code?
>
> Yes, that's indeed technically feasible.  And it is precisely the sort of
> trick I was planning to resort to in order to satisfy VMware's needs
> without having to further modify the source code.  But for these tricks
> to work, it is absolutely /imperative/ that one statically links the C++
> library and STL.  The problem being that if one doesn't, then there's an
> imbalance: the malloc/free/new/delete calls done in inline code in C++
> headers will be intercepted, whereas malloc/free/new/delete calls done
> in code from the shared object which is not inlined will not, causing
> havoc.  This is OK for us at VMware (we do it already for many other
> reasons, including avoiding DLL hell.)  But I doubt it will be palatable
> for everybody else, particularly Linux distros, to have everything
> statically linked.
>
> So effectively, if one really wants to implement Vulkan host memory
> callbacks, the best way is to explicitly use malloc/free abstractions,
> instead of malloc/free directly.
>
>
> So before we put more time on pursuing either the "all" or "nothing"
> approaches, I'd like to get a feel for where people's preferences are.
>
> Jose
>


I was tinkering with this on Friday.  My initial idea is to use an
opt-in approach for memory tracking/debugging.  That is, where we care
about 

Re: [Mesa-dev] New Mesa3D.org website proposal

2020-05-12 Thread Erik Faye-Lund
On Tue, 2020-05-12 at 15:11 +0200, Erik Faye-Lund wrote:
> On Tue, 2020-05-12 at 06:42 -0600, Brian Paul wrote:
> > On 05/12/2020 04:17 AM, Erik Faye-Lund wrote:
> > > On Thu, 2020-05-07 at 11:03 -0600, Brian Paul wrote:
> > > > On 05/07/2020 04:33 AM, Erik Faye-Lund wrote:
> > > > > Hey Brian
> > > > > 
> > > > > TLDR; are you OK with me moving forward with the rework of
> > > > > mesa3d.org?
> > > > 
> > > > Yes...
> > > > 
> > > 
> > > Cool! We've now set up a repo here:
> > > 
> > > https://gitlab.freedesktop.org/mesa/mesa3d.org
> > > 
> > > I pushed the bare minimum (ish) there so far, and sent a MR for
> > > the
> > > importing of the news and article-redirects. This is now being
> > > served
> > > here:
> > > 
> > > https://mesa.pages.freedesktop.org/mesa3d.org/
> > 
> > The formatting of the text under "Featured APIs" is
> > inconsistent.  For 
> > example, the size of "OpenGL", "Vulkan", "EGL", etc. is much
> > smaller 
> > than for "OpenCL".
> 
> That's because we're seeing the "alt"-text instead of the logos. This
> will be fixed when we're no longer hosted from a subdir, or 
> https://gitlab.freedesktop.org/mesa/mesa3d.org/-/merge_requests/2
> lands.
> 
> > Maybe under "Supported Drivers" the drivers which are "no longer 
> > actively developed" should go to the bottom of the list under a 
> > sub-heading.  It seems a bit pessimistic to start the list with 
> > unsupported hardware. :)
> 
> Sure, sounds like a good idea. But which drivers exactly do you want
> me
> to move out to a legacy-list?

Never mind, I just realized we spell that out in the description. So
yeah, here you go:

https://gitlab.freedesktop.org/mesa/mesa3d.org/-/merge_requests/3



Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Jose Fonseca



From: mesa-dev on behalf of Tamminen, Eero T
Sent: Monday, May 11, 2020 21:19
To: mesa-dev@lists.freedesktop.org 
Subject: Re: [Mesa-dev] RFC: Memory allocation on Mesa

Hi,

On Mon, 2020-05-11 at 16:13 +, Jose Fonseca wrote:
> Some might retort: why not just play some tricks with the linker, and
> intercept all malloc/free calls, without actually having to modify
> any source code?
>
> Yes, that's indeed technically feasible.  And is precisely the sort
> of trick I was planing to resort to satisfy VMware needs without
> having to further modify the source code.  But for these tricks to
> work, it is absolutely imperative that one statically links C++
> library and STL.  The problem being that if one doesn't then there's
> an imbalance: the malloc/free/new/delete calls done in inline code on
> C++ headers will be intercepted, where as malloc/free/new/delete
> calls done in code from the shared object which is not inlined will
> not, causing havoc.  This is OK for us VMware (we do it already for
> many other reasons, including avoiding DLL hell.)  But I doubt it
> will be palatable for everybody else, particularly Linux distros, to
> have everything statically linked.

Huh?

I've done a lot of resource usage analysis at a former job[1], but I've
never needed anything like that.  malloc etc. all reside in a
separate shared library from Mesa, so calls to them always cross a
dynamic library boundary and therefore all of them can be caught with
the dynamic linker features (LD_PRELOAD, LD_AUDIT...).

True, one can easily intercept all mallocs using that sort of dynamic
linking trick, when doing full application interception.  (I've even done
something of the sort on https://github.com/jrfonseca/memtrail , mostly
to hunt down memory leaks in LLVM.)  But the goal here is to intercept
the OpenGL/Vulkan driver's malloc calls alone, not the application's
mallocs -- which are difficult to segregate when doing whole-application
interception.

For simplicity imagine you have only these shared objects:

   application (not controlled by us)
   libVulkan.so
   libstdc++.so
   libc.so

Now imagine you're intercepting malloc doing some sort of LD_PRELOAD
interception, and malloc is called.  How do you know if it's a call done
by the Vulkan driver, which should invoke the callback, or one done by
the application, which should not invoke the Vulkan allocation callback?

One can look at the caller's IP address, but what if the caller is in
libstdc++, which is used both by Vulkan and the app?  It is not
immediately clear to which to bill the memory.  One would need to walk
back the stack completely, which is complicated and not very reliable.

Imagine one guesses wrong -- the malloc interceptor believes the malloc
call is done by the Vulkan driver, and calls the application callback,
which then calls malloc again, and the interceptor guesses wrong again,
resulting in an infinite recursion loop.

Could you be confusing this with trying to catch some Mesa-specific
function, where the dynamic linker can catch only calls from the
application to Mesa, but not calls within the Mesa library itself
(as they don't cross a dynamic library boundary)?

My goal from the beginning is intercepting all mallocs/frees done by the
Mesa OpenGL/Vulkan driver, and only those.

Note: at least earlier, new & delete typically called malloc & free (in
addition to calling the ctor & dtor), in which case you don't even need
to track them separately.  You see their usage directly from the
allocation call graph.


- Eero

PS. Your XDot tool was a really nice tool for viewing those call-
graphs. :-)

Thanks! :)

[1] Linux has several ready-made tools for tracking resource
allocations (several Valgrind tools, ElectricFence, Duma etc.), and we
added a few more at Nokia, the main one being:
https://github.com/maemo-tools-old/sp-rtrace

(Most memorable was an early Qt/C++ application version doing ~10(!)
allocation frees while it was initializing itself, due to redundantly
creating, localizing and removing one user view.)

Yes, indeed Linux has much better tooling for this than Windows.  Note
that memory debugging on Windows is just one of our needs.  The other is
being able to run the Mesa driver on an embedded system with a fixed
amount of memory (a separate budget for Mesa mallocs.)

Jose




Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Jason Ekstrand
I have given some thought as to how we could do it in our compiler
stack.  It basically comes down to adding a new type of ralloc context
which takes a Vulkan allocator struct.  If the parent context has such
a struct, that allocator gets used.  It wouldn't be that hard; I've
just not gone to the effort of wiring it up.
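
A rough sketch of what that might look like (hypothetical, not an
existing ralloc API):

    #include <vulkan/vulkan.h>

    /* hypothetical: a ralloc root that carries the app's allocator;
       everything parented under it would be routed through
       callbacks->pfnAllocation instead of malloc */
    void *vk_ralloc_context(const VkAllocationCallbacks *callbacks,
                            VkSystemAllocationScope scope);

    /* children created with the usual ralloc_context(parent) would
       inherit the allocator from the nearest such root */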

--Jason

On Tue, May 12, 2020 at 9:14 AM Jose Fonseca  wrote:
>
> You raise a good point about LLVM.  It can easily be the biggest memory
> consumer (at least transiently) for any driver that uses it, so the value of
> implementing Vulkan allocation callbacks without it is indeed dubious.
>
> Jose
>
> 
> From: Jason Ekstrand 
> Sent: Monday, May 11, 2020 17:29
> To: Jose Fonseca 
> Cc: ML mesa-dev ; 
> erik.faye-l...@collabora.com 
> Subject: Re: [Mesa-dev] RFC: Memory allocation on Mesa
>
> Sorry for the top-post.
>
> Very quick comment:  If part of your objective is to fulfill Vulkan's
> requirements, we need a LOT more plumbing than just
> MALLOC/CALLOC/FREE.  The Vulkan callbacks aren't set at a global level
> when the driver is loaded but are provided to every call that
> allocates anything and we're expected to track these sorts of
> "domains" that things are allocated from.  The reality, however, is
> that the moment you get into the compiler, all bets are off.  This is
> also true on other drivers; I don't think anyone has plumbed the
> Vulkan allocation callbacks into LLVM. :-)
>
> --Jason
>
> On Mon, May 11, 2020 at 11:13 AM Jose Fonseca  wrote:
> >
> > Hi,
> >
> > To give everybody a bit of background context, this email comes from 
> > https://gitlab.freedesktop.org/mesa/mesa/-/issues/2911 .
> >
> > The short story is that Gallium components (but not Mesa) used to have 
> > their malloc/free calls intercepted, to satisfy certain needs: 1) memory 
> > debugging on Windows, 2) memory accounting on embedded systems.  But with 
> > the unification of Gallium into Mesa, the gallium vs non-gallium division 
> > got blurred, leading to some mallocs being intercepted but not the 
> > respective frees, and vice-versa.
> >
> >
> > I admit that trying to intercept mallocs/frees for some components and not
> > others is error prone.  We could get this going again; it's doable, but
> > it's possible it would keep causing trouble, for us or others, over
> > and over again.
> >
> >
> > The two needs mentioned above were mostly VMware's needs.  So I've
> > reevaluated, and I think that with some trickery we can satisfy those two
> > needs differently.  (Without widespread source code changes.)
> >
> >
> > On the other hand, VMware is probably not the only one to have such needs.
> > In fact the Vulkan spec added memory callbacks precisely with the same use
> > cases as ours, as seen at
> > https://www.khronos.org/registry/vulkan/specs/1.2/html/chap10.html#memory-host
> > which states:
> >
> > Vulkan provides applications the opportunity to perform host memory 
> > allocations on behalf of the Vulkan implementation. If this feature is not 
> > used, the implementation will perform its own memory allocations. Since 
> > most memory allocations are off the critical path, this is not meant as a 
> > performance feature. Rather, this can be useful for certain embedded 
> > systems, for debugging purposes (e.g. putting a guard page after all host 
> > allocations), or for memory allocation logging.
> >
> >
> > And I know there were people interested in having Mesa drivers on embedded
> > devices in the past (the old Tungsten Graphics having even been hired
> > multiple times to do so), and I'm pretty sure they exist again.
> >
> >
> >
> > Therefore, rather than shying away from memory allocation abstractions now,
> > I wonder if now isn't the time to actually double down on them and
> > ensure we do so comprehensively throughout the whole of Mesa, all drivers?
> >
> > After all Mesa traditionally always had MALLOC*/CALLOC*/FREE wrappers 
> > around malloc/free.  As so many other projects do.
> >
> >
> >
> > More concretely, I'd like to propose that we:
> >
> > ensure all components use MALLOC*/CALLOC*/FREE and never malloc/calloc/free
> > directly (unless interfacing with a 3rd party which expects memory to be
> > allocated/freed with malloc/free directly)
> > Perhaps consider renaming MALLOC -> _mesa_malloc etc. while we're at it
> > introduce a mechanism to quickly catch any mistaken use of
> > malloc/calloc/free,

Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Jose Fonseca



From: Timur Kristóf 
Sent: Monday, May 11, 2020 18:06
To: Jose Fonseca ; ML mesa-dev 

Cc: erik.faye-l...@collabora.com 
Subject: Re: [Mesa-dev] RFC: Memory allocation on Mesa

On Mon, 2020-05-11 at 16:13 +, Jose Fonseca wrote:
> Some might retort: why not just play some tricks with the linker, and
> intercept all malloc/free calls, without actually having to modify
> any source code?
>
> Yes, that's indeed technically feasible.  And is precisely the sort
> of trick I was planning to resort to in order to satisfy VMware's
> needs without having to further modify the source code.  But for these
> tricks to work, it is absolutely imperative that one statically links
> the C++ library and STL.  The problem being that if one doesn't, then
> there's an imbalance: the malloc/free/new/delete calls done in inline
> code in C++ headers will be intercepted, whereas malloc/free/new/delete
> calls done in code from the shared object which is not inlined will
> not, causing havoc.

Wouldn't you face the same issue if you chose to wrap all calls to
malloc and free in mesa, instead of relying on the linker? Any
dynamically linked or 3rd party library, including the C++ standard
library, will have no way of knowing about our wrapped malloc and free.

Timur

Indeed, 3rd party libraries aren't tracked either way.  But I wasn't
talking about 3rd party libraries, but rather about Mesa itself.

Mesa is mostly written in C, but has some C++ code (ever more, in fact.)
My point is that even if we ignore 3rd party libraries, if one takes the
linker approach without statically linking, Mesa's new/delete calls would
go unbalanced and the consequence would be crashes.

With explicit malloc/free wrappers, the consequence at most is untracked
mallocs/frees, but there should never be any unbalanced mallocs/frees,
hence no crashes.

To be clear, there are two kinds of problems here:
1. allocating memory with one allocator and freeing it with another -- this
is catastrophic and will lead to a segfault
2. not intercepting every single malloc/free pair -- this is not ideal, but
is inescapable to some extent.  One always needs to leave some allocations
to plain old malloc/free, but the fewer the better.

Jose


Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Jose Fonseca
You raise a good point about LLVM.  It can easily be the biggest memory
consumer (at least transiently) for any driver that uses it, so the value of
implementing Vulkan allocation callbacks without it is indeed dubious.

Jose


From: Jason Ekstrand 
Sent: Monday, May 11, 2020 17:29
To: Jose Fonseca 
Cc: ML mesa-dev ; erik.faye-l...@collabora.com 

Subject: Re: [Mesa-dev] RFC: Memory allocation on Mesa

Sorry for the top-post.

Very quick comment:  If part of your objective is to fulfill Vulkan's
requirements, we need a LOT more plumbing than just
MALLOC/CALLOC/FREE.  The Vulkan callbacks aren't set at a global level
when the driver is loaded but are provided to every call that
allocates anything and we're expected to track these sorts of
"domains" that things are allocated from.  The reality, however, is
that the moment you get into the compiler, all bets are off.  This is
also true on other drivers; I don't think anyone has plumbed the
Vulkan allocation callbacks into LLVM. :-)
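
For reference, the API side of those per-call callbacks -- a sketch of
a minimal logging allocator (the hard part is plumbing &log_cb through
every internal allocation, as described above):

    #include <vulkan/vulkan.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *VKAPI_PTR log_alloc(void *user, size_t size, size_t align,
                                     VkSystemAllocationScope scope)
    {
       /* C11 aligned_alloc wants size rounded up to a multiple of align */
       void *ptr = aligned_alloc(align, (size + align - 1) / align * align);
       fprintf(stderr, "vk alloc scope=%d size=%zu -> %p\n",
               (int)scope, size, ptr);
       return ptr;
    }

    static void VKAPI_PTR log_free(void *user, void *ptr)
    {
       fprintf(stderr, "vk free %p\n", ptr);
       free(ptr);
    }

    static void *VKAPI_PTR log_realloc(void *user, void *orig, size_t size,
                                       size_t align,
                                       VkSystemAllocationScope scope)
    {
       /* good enough for a sketch; a real one must preserve `align` */
       return realloc(orig, size);
    }

    static const VkAllocationCallbacks log_cb = {
       .pfnAllocation   = log_alloc,
       .pfnReallocation = log_realloc,
       .pfnFree         = log_free,
    };

    /* passed per call, e.g.: vkCreateInstance(&info, &log_cb, &instance); */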

--Jason

On Mon, May 11, 2020 at 11:13 AM Jose Fonseca  wrote:
>
> Hi,
>
> To give everybody a bit of background context, this email comes from 
> https://gitlab.freedesktop.org/mesa/mesa/-/issues/2911 .
>
> The short story is that Gallium components (but not Mesa) used to have their 
> malloc/free calls intercepted, to satisfy certain needs: 1) memory debugging 
> on Windows, 2) memory accounting on embedded systems.  But with the 
> unification of Gallium into Mesa, the gallium vs non-gallium division got 
> blurred, leading to some mallocs being intercepted but not the respective 
> frees, and vice-versa.
>
>
> I admit that trying to intercept mallocs/frees for some components and not
> others is error prone.  We could get this going again; it's doable, but
> it's possible it would keep causing trouble, for us or others, over and
> over again.
>
>
> The two needs mentioned above were mostly VMware's needs.  So I've
> reevaluated, and I think that with some trickery we can satisfy those two
> needs differently.  (Without widespread source code changes.)
>
>
> On the other hand, VMware is probably not the only one to have such needs.
> In fact the Vulkan spec added memory callbacks precisely with the same use
> cases as ours, as seen at
> https://www.khronos.org/registry/vulkan/specs/1.2/html/chap10.html#memory-host
> which states:
>
> Vulkan provides applications the opportunity to perform host memory 
> allocations on behalf of the Vulkan implementation. If this feature is not 
> used, the implementation will perform its own memory allocations. Since most 
> memory allocations are off the critical path, this is not meant as a 
> performance feature. Rather, this can be useful for certain embedded systems, 
> for debugging purposes (e.g. putting a guard page after all host 
> allocations), or for memory allocation logging.
>
>
> And I know there were people interested in having Mesa drivers on embedded
> devices in the past (the old Tungsten Graphics having even been hired
> multiple times to do so), and I'm pretty sure they exist again.
>
>
>
> Therefore, rather than shying away from memory allocation abstractions now, I
> wonder if now isn't the time to actually double down on them and ensure we
> do so comprehensively throughout the whole of Mesa, all drivers?
>
> After all Mesa traditionally always had MALLOC*/CALLOC*/FREE wrappers around 
> malloc/free.  As so many other projects do.
>
>
>
> More concretely, I'd like to propose that we:
>
> ensure all components use MALLOC*/CALLOC*/FREE and never malloc/calloc/free
> directly (unless interfacing with a 3rd party which expects memory to be
> allocated/freed with malloc/free directly)
> Perhaps consider renaming MALLOC -> _mesa_malloc etc. while we're at it
> introduce a mechanism to quickly catch any mistaken use of
> malloc/calloc/free, regardless of the compiler/OS used:
>
> #define malloc/free/calloc as malloc_do_not_use/free_do_not_use to trigger
> compilation errors, except in files which explicitly opt out of this (source
> files which need to interface with a 3rd party, or source files which
> implement the callbacks)
> Add a cookie to MALLOC/CALLOC/FREE memory to ensure it's not inadvertently
> mixed with malloc/calloc/free
>
> The end goal is that we should be able to have a memory allocation 
> abstraction which can be used for all the needs above: memory debugging, 
> memory accounting, and 

Re: [Mesa-dev] New Mesa3D.org website proposal

2020-05-12 Thread Brian Paul
On Tue, May 12, 2020 at 4:04 AM Daniel Stone  wrote:

> Hi Brian,
>
> On Fri, 8 May 2020 at 15:30, Brian Paul  wrote:
> > Done.  easydns says it may take up to 3 hours to go into effect.
>
> Thanks for doing this! Could you please add the following TXT records
> as well (note that they're FQDNs, so you might want to chop the
> trailing '.mesa3d.org' from the first segment):
> _gitlab-pages-verification-code.mesa3d.org TXT
> gitlab-pages-verification-code=e8a33eb47c034b08afd339cb3a801d88
> _gitlab-pages-verification-code.www.mesa3d.org TXT
> gitlab-pages-verification-code=28d457c44c75d4a1e460243ded2b4311
> _gitlab-pages-verification-code.docs.mesa3d.org TXT
> gitlab-pages-verification-code=6045fd12252cc01f5891fa3f81b140fe
>

Done.

-Brian


Re: [Mesa-dev] New Mesa3D.org website proposal

2020-05-12 Thread Erik Faye-Lund
On Thu, 2020-05-07 at 11:03 -0600, Brian Paul wrote:
> On 05/07/2020 04:33 AM, Erik Faye-Lund wrote:
> > Hey Brian
> > 
> > TLDR; are you OK with me moving forward with the rework of
> > mesa3d.org?
> 
> Yes...
> 

Cool! We've now set up a repo here:

https://gitlab.freedesktop.org/mesa/mesa3d.org

I pushed the bare minimum (ish) there so far, and sent a MR for the
importing of the news and article-redirects. This is now being served
here:

https://mesa.pages.freedesktop.org/mesa3d.org/

(there's a problem with some URLs, due to me not setting --baseURL,
sent an MR for that as well)

I have a work-in-progress version of a rewritten webmaster.html (turned
into more of an "about mesa3d.org" page), describing how the site is
(well, will be once we're done) structured, and how to send changes.



Re: [Mesa-dev] New Mesa3D.org website proposal

2020-05-12 Thread Daniel Stone
Hi Brian,

On Fri, 8 May 2020 at 15:30, Brian Paul  wrote:
> Done.  easydns says it may take up to 3 hours to go into effect.

Thanks for doing this! Could you please add the following TXT records
as well (note that they're FQDNs, so you might want to chop the
trailing '.mesa3d.org' from the first segment):
_gitlab-pages-verification-code.mesa3d.org TXT
gitlab-pages-verification-code=e8a33eb47c034b08afd339cb3a801d88
_gitlab-pages-verification-code.www.mesa3d.org TXT
gitlab-pages-verification-code=28d457c44c75d4a1e460243ded2b4311
_gitlab-pages-verification-code.docs.mesa3d.org TXT
gitlab-pages-verification-code=6045fd12252cc01f5891fa3f81b140fe

archive.mesa3d.org is now live, but I'll be in touch with you to cut
over the main website once we've done some more infrastructure
transitional things: moving websites with live TLS is somewhat
difficult ...

Cheers,
Daniel


Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Timur Kristóf
On Tue, 2020-05-12 at 08:51 +, Tamminen, Eero T wrote:
> Hi,
> 
> On Tue, 2020-05-12 at 10:36 +0200, Timur Kristóf wrote:
> > On Mon, 2020-05-11 at 20:19 +, Tamminen, Eero T wrote:
> > > I've done a lot of resource usage analysis at a former job[1], but
> > > I've never needed anything like that.  malloc etc. all reside in a
> > > separate shared library from Mesa, so calls to them always cross a
> > > dynamic library boundary and therefore all of them can be caught
> > > with the dynamic linker features (LD_PRELOAD, LD_AUDIT...).
> > 
> > I think he meant something like GCC's --wrap option, and wasn't
> > talking about the dynamic linker.
> 
> Using it requires relinking the software, so I would use it only on
> some embedded platform whose dynamic linker doesn't support better
> interception methods.

Yes, I agree that the dynamic linker is probably the best way to deal
with this kind of problem.

> (Note: "--wrap" isn't a GCC option, but one for binutils ld.)

Indeed. Thanks for the correction.

Timur



Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Tamminen, Eero T
Hi,

On Tue, 2020-05-12 at 10:36 +0200, Timur Kristóf wrote:
> On Mon, 2020-05-11 at 20:19 +, Tamminen, Eero T wrote:
> > I've done a lot of resource usage analysis at a former job[1], but
> > I've never needed anything like that.  malloc etc. all reside in a
> > separate shared library from Mesa, so calls to them always cross a
> > dynamic library boundary and therefore all of them can be caught
> > with the dynamic linker features (LD_PRELOAD, LD_AUDIT...).
> 
> I think he meant something like GCC's --wrap option, and wasn't
> talking about the dynamic linker.

Using it requires relinking the software, so I would use it only on
some embedded platform whose dynamic linker doesn't support better
interception methods.

(Note: "--wrap" isn't a GCC option, but one for binutils ld.)
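
For completeness, a sketch of what the --wrap mechanism looks like; it
only rewrites references resolved at that link step, so calls from
other, already-linked DSOs bypass it -- which is the limitation above:

    /* link with: cc app.o wrap.o -Wl,--wrap=malloc -Wl,--wrap=free */
    #include <stddef.h>
    #include <stdio.h>

    void *__real_malloc(size_t size);   /* bound by ld to the real malloc */
    void  __real_free(void *ptr);

    void *__wrap_malloc(size_t size)    /* ld redirects malloc calls here */
    {
       void *ptr = __real_malloc(size);
       fprintf(stderr, "malloc(%zu) = %p\n", size, ptr);
       return ptr;
    }

    void __wrap_free(void *ptr)
    {
       fprintf(stderr, "free(%p)\n", ptr);
       __real_free(ptr);
    }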


- Eero



Re: [Mesa-dev] RFC: Memory allocation on Mesa

2020-05-12 Thread Timur Kristóf
On Mon, 2020-05-11 at 20:19 +, Tamminen, Eero T wrote:
> Hi,
> 
> On Mon, 2020-05-11 at 16:13 +, Jose Fonseca wrote:
> > Some might retort: why not just play some tricks with the linker,
> > and
> > intercept all malloc/free calls, without actually having to modify
> > any source code?
> > 
> > Yes, that's indeed technically feasible.  And is precisely the sort
> > of trick I was planning to resort to in order to satisfy VMware's
> > needs without having to further modify the source code.  But for
> > these tricks to work, it is absolutely imperative that one statically
> > links the C++ library and STL.  The problem being that if one
> > doesn't, then there's an imbalance: the malloc/free/new/delete calls
> > done in inline code in C++ headers will be intercepted, whereas
> > malloc/free/new/delete calls done in code from the shared object
> > which is not inlined will not, causing havoc.  This is OK for us at
> > VMware (we do it already for many other reasons, including avoiding
> > DLL hell.)  But I doubt it will be palatable for everybody else,
> > particularly Linux distros, to have everything statically linked.
> 
> Huh?
> 
> I've done a lot of resource usage analysis at a former job[1], but I've
> never needed anything like that.  malloc etc. all reside in a
> separate shared library from Mesa, so calls to them always cross a
> dynamic library boundary and therefore all of them can be caught with
> the dynamic linker features (LD_PRELOAD, LD_AUDIT...).

I think he meant something like GCC's --wrap option, and wasn't talking
about the dynamic linker.
