I guessed mempool and eina_trash already did that.

--
Gustavo Sverzut Barbieri

> On 3 Nov 2016, at 05:53, Carsten Haitzler (The Rasterman)
> <[email protected]> wrote:
> 
> On Thu, 03 Nov 2016 09:35:21 +0200 Daniel Zaoui <[email protected]> 
> said:
> 
>> Well, my Lord, I hate that idea. Do you want to make all EFL asynchronous?
> 
> this isn't async. it's just deferred. we already do this for evas objects with
> delete_me. we do it for timers/animators and mark them for deletion later. it's
> nothing new. this is just more generic/extensive.
> 
>> From my point of view, it seems like a hack because some problems (e.g. Eo)
>> are hard to solve.
>> 
>> My comments below.
>> 
>> On Thu, 03 Nov 2016 16:11:24 +0900
>> Carsten Haitzler <[email protected]> (The Rasterman) wrote:
>> 
>>> here's an idea. it's very very very very simple
>>> 
>>> create an eina_freeq(). instead of calling free() or whatever free
>>> function on something immediately, call:
>>> 
>>>    fq = eina_freeq_main_get();
>>>    eina_freeq_ptr_add(fq, pointer, size, free);
>>> 
>>> or
>>> 
>>>    fq = eina_freeq_global_get();
>>>    eina_freeq_ptr_add(fq, l, sizeof(Eina_List),
>>>                       _eina_list_mempool_list_free);
>>> 
>>> etc.
>>> 
>>> and the free queue will "add this to the end" to be freed some time
>>> later. the idea of the size is so it could make intelligent choices,
>>> like freeing very large chunks earlier than smaller allocations. the
>>> mainloop would drive the actual freeing. or more specifically your
>>> LOCAL loop would. we'd need to add some kind of loop method that
>>> returns the "free queue" FOR your loop/thread, or wherever you want
>>> things actually freed. probably have a main free queue driven by the
>>> mainloop (and cleaned up on eina_shutdown) etc.
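[a minimal sketch of what such a deferred-free queue could look like - a growable array here for clarity. all names and signatures are illustrative only, not a real eina_freeq API:]

```c
/* illustrative sketch of a deferred-free queue: instead of freeing
 * immediately, record (ptr, size, free function) and release them all
 * later from idle time. names are invented for this example. */
#include <stdlib.h>

typedef struct
{
   void  *ptr;               /* memory to release later           */
   size_t size;              /* allocation size, for policy hints */
   void (*free_fn)(void *);  /* matching free function            */
} Freeq_Item;

typedef struct
{
   Freeq_Item *items;
   int         count, alloc;
} Freeq;

/* queue a pointer instead of freeing it right away */
static void
freeq_ptr_add(Freeq *fq, void *ptr, size_t size, void (*free_fn)(void *))
{
   if (fq->count == fq->alloc)
     {
        fq->alloc = fq->alloc ? fq->alloc * 2 : 16;
        fq->items = realloc(fq->items, fq->alloc * sizeof(Freeq_Item));
        /* error handling omitted in this sketch */
     }
   fq->items[fq->count++] = (Freeq_Item){ ptr, size, free_fn };
}

/* drain the queue - called from idle time in the loop that owns it;
 * returns how many entries were actually freed */
static int
freeq_clear(Freeq *fq)
{
   int n = fq->count;

   for (int i = 0; i < n; i++)
     fq->items[i].free_fn(fq->items[i].ptr);
   fq->count = 0;
   return n;
}
```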
>>> 
>>> why?
>>> 
>>> 1. move overhead of doing actual frees out of critical code into idle
>>> time
>>> 2. improve stability by keeping memory eg for eo objects or eina
>>> list nodes in a "free queue purgatory" for a while so if someone does
>>> bad things like "use after free" the IMPACT is far smaller as they
>>> mess with memory in the free queue not being used.
>> 
>> Stability has to be improved with refs and other design techniques, not with
>> delays. Moreover, we couldn't use Valgrind anymore. And this will be a PITA
>> to debug - the same kind of debugging as with async events, where the frame
>> before is ecore_loop...
> 
> we can use valgrind. just have the freeq free immediately. env vars can switch
> the behavior around. :) so valgrind can work trivially.
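[the env var switch could be as simple as the following sketch - EINA_FREEQ_BYPASS is an invented name for illustration, not an actual EFL variable:]

```c
/* sketch of the env-var escape hatch: if a (hypothetical) variable like
 * EINA_FREEQ_BYPASS is set to "1", skip the queue and free immediately
 * so tools like valgrind see frees at the real call site. */
#include <stdlib.h>

static int
freeq_bypass_enabled(const char *env) /* pass getenv("EINA_FREEQ_BYPASS") */
{
   return (env && (env[0] == '1'));
}
```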
> 
>> btw, the third point didn't leave your head ;-)
> 
> oh yeah.. it got lost on the way to the kbd. :)
> 
>>> 4. be able to fill memory about to be freed with patterns (eg 0x55 or
>>> 0xaa) so after free the memory is guaranteed to have a pattern, so we
>>> know it was freed (optional, debug only, for mem regions of size > 0
>>> and maybe less than some max size).
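[point 4 above could be sketched as follows - the 0x55 pattern is from the text, while the macro names and the max-size cutoff are assumptions for illustration:]

```c
/* sketch: scribble a fixed pattern over memory as it enters the free
 * queue so a use-after-free shows up clearly in a coredump. */
#include <string.h>

#define FREEQ_PATTERN     0x55
#define FREEQ_PATTERN_MAX (64 * 1024) /* skip very large regions */

static void
freeq_scribble(void *ptr, size_t size)
{
   if ((size > 0) && (size <= FREEQ_PATTERN_MAX))
     memset(ptr, FREEQ_PATTERN, size);
}
```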
>> 
>> meow (I think this is what you say when you don't know if it is a good
>> feature or not :-).
>> 
>>> 5. optional - checksum the memory when added to free queue then check
>>> checksum on actual free to warn of some code "being bad" and
>>> scribbling over freed memory. at least we get warnings of lurking
>>> bugs if we turn this on...
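[point 5 could look roughly like this - a simple rolling byte sum stands in for whatever real checksum would actually be chosen; the name is invented:]

```c
/* sketch: record a checksum when memory enters the free queue and
 * verify it at actual free time; a mismatch means something wrote to
 * the memory after it was handed to the queue. */
#include <stddef.h>

static unsigned int
freeq_checksum(const void *ptr, size_t size)
{
   const unsigned char *p = ptr;
   unsigned int sum = 0;

   for (size_t i = 0; i < size; i++)
     sum = (sum << 1) + p[i];
   return sum;
}
```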
>> 
>> Valgrind does it better.
> 
> the problem is we have people who will NOT RUN STUFF UNDER VALGRIND.
> 
> 1. for example valgrind doesn't work on openbsd. at all.
> 2. good luck with valgrind on something like an rpi ... go and make lunch
> while you wait for your app to start. make coffee in between clicks. the bug
> you were looking for likely disappeared because timing changed so drastically
> you can't catch it. i've seen this happen before.
> 3. people run/test and they do not want to slow things down to 1/50th of the
> speed. they CAN'T, so having a pattern means it's very low cost and coredumps
> can tell us far more information on what is going on. you can't force testers
> in qa to "run it under valgrind". they don't even know what it is, nor can
> they even do it. the speed impact alone vetoes it. the impact of memset() for
> smallish things (eg < 1k) is going to be MASSIVELY less.
> 4. this doesn't replace valgrind. it augments it for when valgrind is just not
> viable. it at least gives us a CLUE. JP was just telling me of an issue where
> an Eina_List * ptr in an evas object is 0x1 ... it should never be 0x1. it
> should be some valid ptr value or NULL. something scribbled over this memory
> when it should not have. LIKELY something like using a ptr after free, where
> that ptr HAPPENED to point to this object's memory. we have no clue who did it
> and valgrind can't catch this as it's not freed ... YET. but if that memory
> WAS handled by a free queue this would be far less likely to happen as the
> "write to unused memory" would be less likely to affect a live real object.
> you want things to be as robust as possible with minimal if not zero cost when
> you are NOT running under valgrind. in fact i can detect if running under
> valgrind and switch behaviour to insta-free, thus changing nothing when
> running under valgrind vs today, but buying more debug/tracking info when not.
> 
> what if the bug is not something WE can fix? some app using efl uses a ptr
> after free? the crashes happen in efl code as something is scribbled over. we
> mostly can't reproduce it as the app is not freely available. being more
> robust when there is nothing else we can do, simply by deferring the free (at
> the expense of holding on to memory for a little longer), is not a bad
> trade-off.
> 
>>> this doesn't solve everything, but it is an improvement and it's easy
>>> to slide in and begin using very quickly - eg for eo object data, eina
>>> lists and a few other things. the free queue itself would be a very
>>> very very low overhead buffer - not a linked list. maybe an mmapped or
>>> malloced buffer/array (a ring buffer) with a start and end point so we
>>> don't do anything but write the above ptr, free ptr and maybe size to
>>> the next array slot and move the ring buffer's next slot one down.
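[the ring buffer described above could be sketched like this - fixed slots, head/tail indices, one struct write per queued pointer, no per-node allocation. the capacity and the overflow policy (free immediately when full) are assumptions for the example:]

```c
/* illustrative ring-buffer free queue; names and policies invented */
#include <stdlib.h>
#include <string.h>

#define RING_CAP 8u

typedef struct
{
   void  *ptr;
   size_t size;
   void (*free_fn)(void *);
} Ring_Slot;

typedef struct
{
   Ring_Slot slot[RING_CAP];
   unsigned  head, tail; /* monotonically increasing, masked on use */
} Ring;

static int
ring_full(const Ring *r)
{
   return (r->head - r->tail) == RING_CAP;
}

/* add one entry: a single struct write plus an index bump */
static int
ring_add(Ring *r, void *ptr, size_t size, void (*free_fn)(void *))
{
   if (ring_full(r))
     {
        free_fn(ptr); /* overflow policy: just free now (an assumption) */
        return 0;
     }
   r->slot[r->head % RING_CAP] = (Ring_Slot){ ptr, size, free_fn };
   r->head++;
   return 1;
}

/* pop and free the oldest entry; returns 0 when the ring is empty */
static int
ring_flush_one(Ring *r)
{
   if (r->head == r->tail) return 0;
   Ring_Slot s = r->slot[r->tail % RING_CAP];
   r->tail++;
   s.free_fn(s.ptr);
   return 1;
}
```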
>>> 
>>> adding the freeq code in eina is a cakewalk. i'd spend more time
>>> writing docs and tests than the code itself. :( adding usage of it
>>> should be trivial.
>>> 
>>> any comments?
>>> 
>> 
>> 
>> ------------------------------------------------------------------------------
>> Developer Access Program for Intel Xeon Phi Processors
>> Access to Intel Xeon Phi processor-based developer platforms.
>> With one year of Intel Parallel Studio XE.
>> Training and support from Colfax.
>> Order your platform today. http://sdm.link/xeonphi
>> _______________________________________________
>> enlightenment-devel mailing list
>> [email protected]
>> https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
>> 
> 
> 
> -- 
> ------------- Codito, ergo sum - "I code, therefore I am" --------------
> The Rasterman (Carsten Haitzler)    [email protected]
> 
> 
