I wish I could help, but I don't know where to start. Thanks for your help.

I just have one question: is there any way to tell Dyninst to ignore
newly created threads? (Similar to detaching after a fork, but even
earlier, if that is possible.)
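
For reference, the fork case I have in mind looks roughly like the sketch
below (a minimal sketch only: the callback name and the launch boilerplate
are illustrative, not my actual mutator); it registers a post-fork callback
and detaches from the child as soon as Dyninst reports it. I am asking
whether something equivalent exists for thread creation.

#include "BPatch.h"

BPatch bpatch;

// Post-fork callback: detach from the newly created child process so
// Dyninst stops tracking it, while the parent stays instrumented.
static void postForkCB(BPatch_thread *parent, BPatch_thread *child) {
    if (child)
        child->getProcess()->detach(true);  // true = let the child keep running
}

int main(int argc, char *argv[]) {
    if (argc < 2) return 1;

    bpatch.registerPostForkCallback(postForkCB);

    // Launch the mutatee (argv[1] is its path, the rest are its arguments).
    BPatch_process *appProc =
        bpatch.processCreate(argv[1], (const char **)(argv + 1));
    // ... instrumentation would go here ...
    appProc->continueExecution();
    while (!appProc->isTerminated())
        bpatch.waitForStatusChange();
    return 0;
}

As far as I can see there is no analogous hook that lets the mutator drop
newly created threads this early, which is why I am asking.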

Gerard

2015-02-18 21:17 GMT+01:00 Bill Williams <[email protected]>:

> On 02/18/2015 01:54 PM, Josh Stone wrote:
>
>> On 02/18/2015 11:42 AM, Bill Williams wrote:
>>
>>> On 02/18/2015 01:37 PM, Gerard wrote:
>>>
>>>> Ah ok, I didn't know that.
>>>>
>>>> Regarding how reproducible the error is: I ran it three times (without
>>>> the change you suggested) and every time it stopped at around 32000
>>>> threads. Now I have added appProc->continueExecution() and it happened
>>>> again after creating 32322 threads, so it seems this is not the problem.
>>>>
>>> Then it's got to be that somewhere in here, we're messing up internal
>>> stop/continue state without that propagating out to the user level.
>>> Debug logs will tell me something eventually...sadly, they're verbose
>>> and time-consuming.
>>>
>>> Which kernel version/distribution are you using, by the way?
>>>
>>
>> TIDs usually wrap at 2^15, so they'll be reused in this test.
>> Perhaps this is confusing dyninst somewhere?
>>
> That's certainly a possibility; we're hitting our starvation case (a
> theoretically running process generates no ptrace events) when the TID and
> PID are once again the same, which I would expect guarantees that we've
> recycled an LWPID.
>
> I've attached the tail end of a log that should reflect Gerard's problem;
> there's postponed syscall handling going on, but from an initial Mark 1
> eyeball pass nothing's obviously broken (aside from the results)...
>
>
> --
> --bw
>
> Bill Williams
> Paradyn Project
> [email protected]
>
