On Mon, 13 Jan 2014 17:55:46 +0900, Namhyung Kim wrote:
> On Sat, 11 Jan 2014 17:35:29 +0100, Jiri Olsa wrote:
>> On Wed, Jan 08, 2014 at 05:46:30PM +0900, Namhyung Kim wrote:
>>> Reuse hist_entry_iter__add() function to share the similar code with
>>> perf report.  Note that it needs to be called with hists.lock so tweak
>>> some internal functions not to deadlock or hold the lock too long.
>>> 
>>> Signed-off-by: Namhyung Kim <namhy...@kernel.org>
>>> ---
>>>  tools/perf/builtin-top.c | 75 ++++++++++++++++++++++++------------------------
>>>  1 file changed, 37 insertions(+), 38 deletions(-)
>>> 
>>> diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
>>> index f0f55e6030cd..cf330c66bed7 100644
>>> --- a/tools/perf/builtin-top.c
>>> +++ b/tools/perf/builtin-top.c
>>> @@ -186,9 +186,6 @@ static void perf_top__record_precise_ip(struct perf_top *top,
>>>     sym = he->ms.sym;
>>>     notes = symbol__annotation(sym);
>>>  
>>> -   if (pthread_mutex_trylock(&notes->lock))
>>> -           return;
>>> -
>>>     ip = he->ms.map->map_ip(he->ms.map, ip);
>>>     err = hist_entry__inc_addr_samples(he, counter, ip);
>>>  
>>> @@ -201,6 +198,8 @@ static void perf_top__record_precise_ip(struct perf_top *top,
>>>                    sym->name);
>>>             sleep(1);
>>>     }
>>> +
>>> +   pthread_mutex_lock(&notes->lock);
>>>  }
>>
>> locking on function exit.. does not look right ;-)
>
> Yes, it looks weird.. but it's because of the change in locking.  After
> I changed perf top to use the hist_entry_iter, it needed to protect
> the whole hist_entry_iter__add() call with hists->lock, so
> perf_top__record_precise_ip() should be called with the lock acquired.

Argh, it was a different lock..  But it seems we still need to do the
locking dance anyway. ;-)

Thanks,
Namhyung