Sorry, I was too hasty. Obviously, one doesn't need the Isolate lock to 
remove a PerIsolateThreadData from the linked list; one needs 
the thread_data_table_mutex_ lock. FWIW, it strikes me as very odd that 
ThreadDataTable is process-wide with a static pointer. Given that its lone 
data member is the chain anchor for PerIsolateThreadData objects, it seems 
that a ThreadDataTable object could live inside the Isolate without taking 
up any more space, and it could probably be protected by the Isolate lock, 
eliminating the need for a separate mutex.
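
Roughly, what I have in mind is the sketch below. This is not actual V8 
source, just an illustration of the layout change; apart from the names 
already discussed in this thread, the member names are made up.

// Rough illustration only -- not actual V8 source. Today ThreadDataTable is
// process-wide, reached through a static pointer and guarded by
// thread_data_table_mutex_. Since its only data member is the list anchor
// for PerIsolateThreadData objects, the same anchor could live directly in
// the Isolate and be guarded by the Isolate's own lock.
class Isolate {
 public:
  class PerIsolateThreadData {
    // ... existing per-thread state ...
    PerIsolateThreadData* next_;  // intrusive list link, as today
  };

 private:
  // Instead of the process-wide static ThreadDataTable pointer and its
  // separate mutex, keep the chain anchor here, protected by the Isolate
  // lock:
  PerIsolateThreadData* thread_data_list_head_ = nullptr;  // name made up
};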

Nevertheless, I was wrong and, in theory, one could have a totally 
Locker-independent call to free a thread's PerIsolateThreadData. But since 
one could not free the current thread's PerIsolateThreadData while there is 
a Locker on the thread's stack, freeing PerIsolateThreadData still seems 
pretty tightly associated with the Locker class, so I think I'd stick with 
my proposals.
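
To make the proposals concrete, the embedder side might look roughly like 
this. Neither the method nor the constructor flag shown in the comments 
exists today -- they're just the two hypothetical shapes I'm proposing:

#include <v8.h>

// Hypothetical usage sketch of the two proposal shapes; the names
// FreeThreadDataOnUnlock and kFreeThreadData are made up for illustration.
void WorkerThread(v8::Isolate* isolate) {
  {
    v8::Locker locker(isolate);
    // ... enter the isolate, run scripts, etc. ...

    // Option 1 (Locker method, hypothetical): request cleanup at
    // destruction time:
    //   locker.FreeThreadDataOnUnlock();
    //
    // Option 2 (constructor parameter, hypothetical):
    //   v8::Locker locker(isolate, v8::Locker::kFreeThreadData);
  }  // If top_level_ && has_lock_ and cleanup was requested, the destructor
     // would also free this thread's PerIsolateThreadData here.
}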

Also, FWIW, I did a bit more research, and the only thing of substance that 
seems to survive a top_level_ && has_lock_ Locker destruction is a thread's 
Simulator. And while I admit I don't fully understand the Simulator code, it 
seems unlikely that a thread's Simulator would hold state that would need to 
survive an unlock/lock cycle. Admittedly, deleting and then reconstructing a 
Simulator would not be cheap, but I would assume someone using a Simulator 
is not expecting particularly high performance anyway.

On Sunday, October 18, 2015 at 11:06:27 AM UTC-7, Alex Kodat wrote:
>
> Jakob,
>
> Thanks for that. I might just take a swing at an unintrusive patch. Along 
> those lines, it seems that thread resource cleanup would be closely tied to 
> the Locker, as one would need the isolate lock while freeing 
> PerIsolateThreadData but would presumably want to release the lock 
> immediately after. So the most logical approach seems to be a Locker class 
> variable called, say, cleanup_ that's set either with an extra parameter on 
> the Locker constructor or via a Locker method, to clean up at destruction 
> time (both with appropriate semantics when not top_level_ or has_lock_).
>
> But I just want to make sure that this couldn't be made even less intrusive 
> by always cleaning up when top_level_ and has_lock_ are set in the Locker 
> destructor. As it is, in that case ThreadManager::FreeThreadResources ends 
> up being called, which doesn't seem to leave a heck of a lot of useful 
> stuff around for the thread, so leaving a few crumbs around seems mainly an 
> efficiency issue -- if someone has a lot of top-level Locker brackets in a 
> loop, there might be a bit of extra object creation/destruction if 
> ThreadManager::FreeThreadResources always does a full thread cleanup, but 
> it's hard to imagine that being significant for anyone. In the unlikely 
> event that it is, an embedder can create an outer Locker with nested 
> Unlockers, an approach that would have no v8 release dependencies. 
>
> So which approach would you prefer: just do it, Locker class method, or 
> Locker constructor parameter?
>
> Thanks 
>
> On Sunday, October 18, 2015 at 4:07:07 AM UTC-7, Jakob Kummerow wrote:
>>
>> On Sun, Oct 18, 2015 at 8:16 AM, Alex Kodat <alex...@gmail.com> wrote:
>>
>>> If I have an app that steadily creates and joins threads, is there a 
>>> good way of cleaning up the thread-specific data when a thread terminates? 
>>> Looking at the v8 code, it seems that ThreadManager::FreeThreadResources in 
>>> v8threads.cc would be a place this might happen, when called from the 
>>> Locker destructor with top_level_ set. But perhaps, for good reason, not 
>>> all thread-specific data seems to be cleaned up?
>>>
>>
>> Just because a thread releases a lock doesn't mean that thread is about 
>> to die. It might come back to V8 later. So any cleanup would have to be 
>> triggered explicitly by the embedder; it can't just happen automatically.
>>  
>>
>>> More specifically, as I have lots of threads use an isolate and then 
>>> terminate, their Isolate::PerIsolateThreadData remains queued off the 
>>> Isolate::ThreadDataTable list_. While this is essentially a memory leak, 
>>> what's even worse is that the queue just gets crazy long, and the master 
>>> (i.e., first Isolate user) thread's Isolate::PerIsolateThreadData (being 
>>> at the end) takes a crazy long time to find. And the entire massive queue 
>>> has to be scanned each time a new thread uses the Isolate, just to 
>>> determine that the thread does not yet have an 
>>> Isolate::PerIsolateThreadData. Eventually v8 ends up spending most of its 
>>> time scanning the Isolate::PerIsolateThreadData queue. 
>>>
>>
>> Yes.
>>  
>>
>>> The solution would be to be able to get to 
>>> Isolate::ThreadDataTable::Remove for defunct threads, but I didn't see a 
>>> path there from any embedder API functions. Is there one? And it looks 
>>> like there might be a few other thread-specific bits and bobs left lying 
>>> around, anyway.
>>>
>>> Maybe v8 just isn't built with apps that steadily create and destroy 
>>> threads in mind, and such apps should just use their own thread pool 
>>> managers and avoid allowing threads to terminate? 
>>>
>>
>> Precisely.
>>
>> I think we would probably accept a patch (as long as it's not overly 
>> intrusive) that adds the ability to discard thread-specific data. It's 
>> just never been high enough on our priority list because, as you already 
>> said, embedders can easily employ a thread pool to get by.
>>  
>>
>>> Easy enough to do that, though it's even easier if we don't have to -- 
>>> the overhead of creating and joining threads is pretty tiny these days, 
>>> especially relative to the "real work" that likely happens on each thread. 
>>>
>>> Thanks
>>>
