[lldb-dev] Deadlock loading DWARF symbols

2020-10-02 Thread Dmitry Antipov via lldb-dev

I'm observing the following deadlock:

One thread calls Module::PreloadSymbols(), which takes the m_mutex of this Module. 
Module::PreloadSymbols()
calls ManualDWARFIndex::Index(), which, in turn, creates a thread pool and waits 
for all threads to complete:

(gdb) bt
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x7f67f176914c) at 
../sysdeps/nptl/futex-internal.h:183
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x7f67f17690c8, 
cond=0x7f67f1769120) at pthread_cond_wait.c:508
#2  __pthread_cond_wait (cond=0x7f67f1769120, mutex=0x7f67f17690c8) at 
pthread_cond_wait.c:638
#3  0x7f67f3974890 in 
std::condition_variable::wait(std::unique_lock<std::mutex>&) () from 
/lib64/libstdc++.so.6
#4  0x7f67f4440c4b in 
std::condition_variable::wait<llvm::ThreadPool::wait()::<lambda()> > (__p=..., 
__lock=..., this=0x7f67f1769120)
at /usr/include/c++/10/condition_variable:108
#5  llvm::ThreadPool::wait (this=this@entry=0x7f67f1769060) at 
source/llvm/lib/Support/ThreadPool.cpp:72
#6  0x7f67fc6fa3a6 in lldb_private::ManualDWARFIndex::Index 
(this=0x7f66fe87e950)
at source/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp:94
#7  0x7f67fc6b3825 in SymbolFileDWARF::PreloadSymbols (this=0x7f67de7af6f0) 
at /usr/include/c++/10/bits/unique_ptr.h:421
#8  0x7f67fc1ee488 in lldb_private::Module::PreloadSymbols 
(this=0x7f67de79b620) at source/lldb/source/Core/Module.cpp:1397
#9  0x7f67fc397a37 in lldb_private::Target::GetOrCreateModule 
(this=this@entry=0x96c7a0, module_spec=..., notify=notify@entry=true, 
error_ptr=error_ptr@entry=0x0)
at /usr/include/c++/10/bits/shared_ptr_base.h:1324
...

OTOH, one of the pool threads attempts to lock the Module's mutex:

(gdb) bt
#0  __lll_lock_wait (futex=futex@entry=0x7f67de79b638, private=0) at 
lowlevellock.c:52
#1  0x7f67fcd907f1 in __GI___pthread_mutex_lock (mutex=0x7f67de79b638) at 
../nptl/pthread_mutex_lock.c:115
#2  0x7f67fc1ed922 in __gthread_mutex_lock (__mutex=0x7f67de79b638) at 
/usr/include/c++/10/x86_64-redhat-linux/bits/gthr-default.h:749
#3  __gthread_recursive_mutex_lock (__mutex=0x7f67de79b638) at 
/usr/include/c++/10/x86_64-redhat-linux/bits/gthr-default.h:811
#4  std::recursive_mutex::lock (this=0x7f67de79b638) at 
/usr/include/c++/10/mutex:106
#5  std::lock_guard<std::recursive_mutex>::lock_guard (__m=..., this=<optimized out>) at /usr/include/c++/10/bits/std_mutex.h:159
#6  lldb_private::Module::GetDescription (this=this@entry=0x7f67de79b620, 
s=..., level=level@entry=lldb::eDescriptionLevelBrief)
at source/lldb/source/Core/Module.cpp:1083
#7  0x7f67fc1f2070 in lldb_private::Module::ReportError (this=0x7f67de79b620, 
format=0x7f67fca03660 "DW_FORM_ref* DIE reference 0x%lx is outside of its CU")
at source/lldb/include/lldb/Utility/Stream.h:358
#8  0x7f67fc6adfb4 in DWARFFormValue::Reference 
(this=this@entry=0x7f66f5ff29c0) at 
/usr/include/c++/10/bits/shared_ptr_base.h:1324
#9  0x7f67fc6aaa77 in DWARFDebugInfoEntry::GetAttributes 
(this=this@entry=0x7f662e3580e0, cu=cu@entry=0x7f66ff6ebad0, attributes=...,
recurse=recurse@entry=DWARFBaseDIE::Recurse::yes, 
curr_depth=curr_depth@entry=0)
at source/lldb/source/Plugins/SymbolFile/DWARF/DWARFDebugInfoEntry.cpp:439
#10 0x7f67fc6f8f8f in DWARFDebugInfoEntry::GetAttributes 
(recurse=DWARFBaseDIE::Recurse::yes, attrs=..., cu=0x7f66ff6ebad0, 
this=0x7f662e3580e0)
at source/lldb/source/./Plugins/SymbolFile/DWARF/DWARFDebugInfoEntry.h:54
#11 lldb_private::ManualDWARFIndex::IndexUnitImpl (unit=..., 
cu_language=cu_language@entry=lldb::eLanguageTypeRust, set=...)
at source/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp:180
#12 0x7f67fc6f96b7 in lldb_private::ManualDWARFIndex::IndexUnit 
(this=, unit=..., dwp=0x0, set=...)
at source/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp:126
...

So this is a deadlock: the thread pool is created with the module lock held, and 
one or more pool threads (I'm observing two) may want to grab the same lock to 
issue an error message.

Commenting out the whole body of Module::GetDescription() makes this deadlock 
disappear.

I'm not an expert in this area, but it looks like the Module object should have 
more fine-grained locking rather than a single std::recursive_mutex for all 
synchronization purposes.

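The shape of the problem, distilled into a standalone sketch (my reconstruction, 
not the actual LLDB sources; all names here are stand-ins):

#include <mutex>
#include <thread>
#include <vector>

std::recursive_mutex g_module_mutex;    // stands in for Module::m_mutex

void IndexUnit ()                       // pool worker, like ManualDWARFIndex::IndexUnit
{
  // A malformed DIE triggers Module::ReportError -> Module::GetDescription,
  // which locks the mutex the spawning thread already holds.
  std::lock_guard<std::recursive_mutex> guard (g_module_mutex); // blocks forever
}

void PreloadSymbols ()                  // like Module::PreloadSymbols
{
  std::lock_guard<std::recursive_mutex> guard (g_module_mutex);
  std::vector<std::thread> pool;        // ManualDWARFIndex::Index creates the
  for (int i = 0; i < 2; i++)           // pool with the module lock still held
    pool.emplace_back (IndexUnit);
  for (auto &t : pool)
    t.join ();  // deadlock: workers wait on the mutex, this thread waits on them
}

int main () { PreloadSymbols (); }
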
Dmitry
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Weird results running lldb under Valgrind

2020-09-30 Thread Dmitry Antipov via lldb-dev

On 9/29/20 11:40 PM, Greg Clayton wrote:


How could LLDB even function then? We are using the standard std::mutex + 
std::condition workflow here. Not sure how LLDB could even function if its 
locking was not working as expected.


Well, obviously this is an issue (and probably the same one) with debugging 
tools.


Doing a quick web search, this seems to be due to a mismatched libc++ and 
libstdc++:

https://github.com/google/sanitizers/issues/1259


Nice. So if your libstdc++ is new enough to use pthread_cond_clockwait(), both 
TSan and Valgrind produce weird results simply because they can only handle 
pthread_cond_timedwait().

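A minimal example of the affected pattern (my sketch, not LLDB code; any timed 
std::condition_variable wait should do):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

int main ()
{
  std::mutex m;
  std::condition_variable cv;
  bool ready = false;

  std::thread t ([&] {
    std::lock_guard<std::mutex> lock (m);
    ready = true;
    cv.notify_one ();
  });

  std::unique_lock<std::mutex> lock (m);
  // With glibc >= 2.30 and a matching libstdc++, this lowers to
  // pthread_cond_clockwait(), which neither tool intercepts, so they lose
  // track of the mutex/condvar pairing and report bogus errors.
  cv.wait_for (lock, std::chrono::seconds (1), [&] { return ready; });
  lock.unlock ();
  t.join ();
}
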
Dmitry

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Weird results running lldb under Valgrind

2020-09-29 Thread Dmitry Antipov via lldb-dev

On 9/25/20 5:53 PM, Dmitry Antipov wrote:


On 9/24/20 9:14 PM, Greg Clayton wrote:

This must be a valgrind issue; there would be major problems if the OS weren't able to lock mutex objects correctly ("mutex is locked simultaneously by two threads"). Is it getting confused by a 
recursive mutex? LLDB uses recursive mutexes.


LLDB's Predicate.h uses a plain std::mutex, which is not recursive, with 
std::lock_guard/std::unique_lock on top of it.

This needs more digging, because the latest Valgrind snapshot reports the same 
"impossible" condition.


For whoever finds it interesting, ThreadSanitizer reports nearly the same:

WARNING: ThreadSanitizer: double lock of a mutex (pid=2049545)
#0 pthread_mutex_lock  (libtsan.so.0+0x528ac)
#1 __gthread_mutex_lock 
/usr/include/c++/10/x86_64-redhat-linux/bits/gthr-default.h:749 
(liblldb.so.12git+0xd725c0)
#2 std::mutex::lock() /usr/include/c++/10/bits/std_mutex.h:100 
(liblldb.so.12git+0xd725c0)
#3 std::lock_guard<std::mutex>::lock_guard(std::mutex&) 
/usr/include/c++/10/bits/std_mutex.h:159 (liblldb.so.12git+0xd725c0)
#4 lldb_private::Predicate<bool>::SetValue(bool, 
lldb_private::PredicateBroadcastType) 
/home/antipov/llvm/source/lldb/include/lldb/Utility/Predicate.h:91 
(liblldb.so.12git+0xd725c0)
#5 lldb_private::EventDataReceipt::DoOnRemoval(lldb_private::Event*) 
/home/antipov/llvm/source/lldb/include/lldb/Utility/Event.h:121 
(liblldb.so.12git+0xd725c0)
#6 lldb_private::Event::DoOnRemoval() 
/home/antipov/llvm/source/lldb/source/Utility/Event.cpp:82 
(liblldb.so.12git+0xedb7da)
#7 lldb_private::Listener::FindNextEventInternal(std::unique_lock<std::mutex>&, lldb_private::Broadcaster*, lldb_private::ConstString const*, unsigned int, unsigned int, 
std::shared_ptr<lldb_private::Event>&, bool) /home/antipov/llvm/source/lldb/source/Utility/Listener.cpp:309 (liblldb.so.12git+0xee6099)
#8 lldb_private::Listener::GetEventInternal(lldb_private::Timeout<std::ratio<1l, 1000000l> > const&, lldb_private::Broadcaster*, lldb_private::ConstString const*, unsigned int, unsigned int, 
std::shared_ptr<lldb_private::Event>&) /home/antipov/llvm/source/lldb/source/Utility/Listener.cpp:357 (liblldb.so.12git+0xee6b63)
#9 lldb_private::Listener::GetEventForBroadcaster(lldb_private::Broadcaster*, std::shared_ptr<lldb_private::Event>&, lldb_private::Timeout<std::ratio<1l, 1000000l> > const&) 
/home/antipov/llvm/source/lldb/source/Utility/Listener.cpp:395 (liblldb.so.12git+0xee6dea)
#10 lldb_private::Process::GetEventsPrivate(std::shared_ptr<lldb_private::Event>&, lldb_private::Timeout<std::ratio<1l, 1000000l> > const&, bool) 
/home/antipov/llvm/source/lldb/source/Target/Process.cpp:1139 (liblldb.so.12git+0xd7931d)

#11 lldb_private::Process::RunPrivateStateThread(bool) 
/home/antipov/llvm/source/lldb/source/Target/Process.cpp:3872 
(liblldb.so.12git+0xda3648)
#12 lldb_private::Process::PrivateStateThread(void*) 
/home/antipov/llvm/source/lldb/source/Target/Process.cpp:3857 
(liblldb.so.12git+0xda3f87)
#13 lldb_private::HostNativeThreadBase::ThreadCreateTrampoline(void*) 
/home/antipov/llvm/source/lldb/source/Host/common/HostNativeThreadBase.cpp:68 
(liblldb.so.12git+0xc2c0ea)
#14   (libtsan.so.0+0x2d33f)

Again, lldb_private::Predicate uses a plain std::mutex, which is not recursive.

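For reference, the pattern Predicate.h implements boils down to something like 
this (a condensed sketch, not the verbatim LLDB header):

#include <condition_variable>
#include <mutex>

template <typename T> class Predicate
{
public:
  // Plain, non-recursive std::mutex: a genuine double lock by the same
  // thread would self-deadlock here rather than merely trip a checker.
  void SetValue (T value)
  {
    std::lock_guard<std::mutex> guard (m_mutex);
    m_value = value;
    m_condition.notify_all ();
  }

  T WaitForValueEqualTo (T value)
  {
    std::unique_lock<std::mutex> lock (m_mutex);
    m_condition.wait (lock, [this, value] { return m_value == value; });
    return m_value;
  }

private:
  std::mutex m_mutex;
  std::condition_variable m_condition;
  T m_value{};
};
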
Dmitry
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Weird results running lldb under Valgrind

2020-09-25 Thread Dmitry Antipov via lldb-dev

On 9/24/20 9:14 PM, Greg Clayton wrote:


This must be a valgrind issue; there would be major problems if the OS weren't able to lock 
mutex objects correctly ("mutex is locked simultaneously by two threads"). Is 
it getting confused by a recursive mutex? LLDB uses recursive mutexes.


LLDB's Predicate.h uses a plain std::mutex, which is not recursive, with 
std::lock_guard/std::unique_lock on top of it.

This needs more digging, because the latest Valgrind snapshot reports the same 
"impossible" condition.

Dmitry
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Weird results running lldb under Valgrind

2020-09-24 Thread Dmitry Antipov via lldb-dev

Does anyone have an explanation for this weird run of 'valgrind --tool=drd':

==2715== drd, a thread error detector
==2715== Copyright (C) 2006-2017, and GNU GPL'd, by Bart Van Assche.
==2715== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info
==2715== Command: /home/antipov/.local/llvm-12.0.0/bin/lldb
==2715== Parent PID: 1702

In LLDB, do 'process attach --pid [PID of running Firefox]', then:

==2715== Thread 5:
==2715== The impossible happened: mutex is locked simultaneously by two 
threads: mutex 0xe907d10, recursion count 1, owner 1.
==2715==at 0x4841015: pthread_mutex_lock_intercept 
(drd_pthread_intercepts.c:893)
==2715==by 0x4841015: pthread_mutex_lock (drd_pthread_intercepts.c:903)
==2715==by 0x504FBEE: __gthread_mutex_lock (gthr-default.h:749)
==2715==by 0x504FBEE: lock (std_mutex.h:100)
==2715==by 0x504FBEE: lock_guard (std_mutex.h:159)
==2715==by 0x504FBEE: SetValue (Predicate.h:91)
==2715==by 0x504FBEE: 
lldb_private::EventDataReceipt::DoOnRemoval(lldb_private::Event*) (Event.h:121)
==2715==by 0x5113644: 
lldb_private::Listener::FindNextEventInternal(std::unique_lock<std::mutex>&, 
lldb_private::Broadcaster*, lldb_private::ConstString const*, unsigned int, unsigned int, 
std::shared_ptr<lldb_private::Event>&, bool) (Listener.cpp:309)
==2715==by 0x5113DD1: 
lldb_private::Listener::GetEventInternal(lldb_private::Timeout<std::ratio<1l, 1000000l> > 
const&, lldb_private::Broadcaster*, lldb_private::ConstString const*, unsigned int, unsigned int, 
std::shared_ptr<lldb_private::Event>&) (Listener.cpp:357)
==2715==by 0x5113F4A: lldb_private::Listener::GetEventForBroadcaster(lldb_private::Broadcaster*, 
std::shared_ptr<lldb_private::Event>&, lldb_private::Timeout<std::ratio<1l, 1000000l> 
> const&) (Listener.cpp:395)
==2715==by 0x506ADD4: lldb_private::Process::RunPrivateStateThread(bool) 
(Process.cpp:3872)
==2715==by 0x506B3F5: lldb_private::Process::PrivateStateThread(void*) 
(Process.cpp:3857)
==2715==by 0x483DB9A: vgDrd_thread_wrapper (drd_pthread_intercepts.c:449)
==2715==by 0x488B3F8: start_thread (in /usr/lib64/libpthread-2.32.so)
==2715==by 0xDFCEA92: clone (in /usr/lib64/libc-2.32.so)
==2715== mutex 0xe907d10 was first observed at:
==2715==at 0x4840F55: pthread_mutex_lock_intercept 
(drd_pthread_intercepts.c:890)
==2715==by 0x4840F55: pthread_mutex_lock (drd_pthread_intercepts.c:903)
==2715==by 0x5058502: __gthread_mutex_lock (gthr-default.h:749)
==2715==by 0x5058502: lock (std_mutex.h:100)
==2715==by 0x5058502: lock (unique_lock.h:138)
==2715==by 0x5058502: unique_lock (unique_lock.h:68)
==2715==by 0x5058502: WaitFor<...> (Predicate.h:123)
==2715==by 0x5058502: WaitForValueEqualTo (Predicate.h:157)
==2715==by 0x5058502: WaitForEventReceived (Event.h:114)
==2715==by 0x5058502: 
lldb_private::Process::ControlPrivateStateThread(unsigned int) 
(Process.cpp:3698)
==2715==by 0x505BC61: lldb_private::Process::StartPrivateStateThread(bool) 
(Process.cpp:3647)
==2715==by 0x5065B96: 
lldb_private::Process::Attach(lldb_private::ProcessAttachInfo&) 
(Process.cpp:2961)
==2715==by 0x544DBB8: PlatformPOSIX::Attach(lldb_private::ProcessAttachInfo&, 
lldb_private::Debugger&, lldb_private::Target*, lldb_private::Status&) 
(PlatformPOSIX.cpp:401)
==2715==by 0x509F531: 
lldb_private::Target::Attach(lldb_private::ProcessAttachInfo&, 
lldb_private::Stream*) (Target.cpp:3008)
==2715==by 0x54C3F17: 
CommandObjectProcessAttach::DoExecute(lldb_private::Args&, 
lldb_private::CommandReturnObject&) (CommandObjectProcess.cpp:386)
==2715==by 0x4FC0ACD: lldb_private::CommandObjectParsed::Execute(char const*, 
lldb_private::CommandReturnObject&) (CommandObject.cpp:993)
==2715==by 0x4FBCBD7: lldb_private::CommandInterpreter::HandleCommand(char 
const*, lldb_private::LazyBool, lldb_private::CommandReturnObject&, 
lldb_private::ExecutionContext*, bool, bool) (CommandInterpreter.cpp:1803)
==2715==by 0x4FBDB96: 
lldb_private::CommandInterpreter::IOHandlerInputComplete(lldb_private::IOHandler&, 
std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) 
(CommandInterpreter.cpp:2838)
==2715==by 0x4EF21C0: lldb_private::IOHandlerEditline::Run() 
(IOHandler.cpp:579)
==2715==by 0x4ED02B0: lldb_private::Debugger::RunIOHandlers() 
(Debugger.cpp:861)

Hopefully this is an issue with valgrind and not lldb. But still curious 
whether someone else can reproduce something similar.

Dmitry
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] lldb showing wrong type structure for virtual pointer type

2018-02-28 Thread Dmitry Antipov via lldb-dev

On 02/28/2018 11:31 AM, jonas echterhoff via lldb-dev wrote:


I'm using lldb-900.0.64.

^^
??
Latest official release is 5.0.1; there are also 6.0.0 (at -rc3, the next 
release) and 7.0.0 (a.k.a. SVN trunk). What's the 'version' output of your LLDB 
prompt?


Unfortunately, I have not yet succeeded in coming up with a small, independent 
repro case which shows this problem.


IIUC this is it:

struct A {
  int id0;
  A () { id0 = 111; }
  virtual int f (int x) { return x + 1; }
  int g (int x) { return x + 11; }
};

struct B: A {
  int id1;
  B () { id1 = 222; }
  virtual int f (int x) { return x + 2; }
  int g (int x) { return x + 12; }
};

namespace S {
  struct AS {
int id0;
AS () { id0 = 333; }
virtual int f (int x) { return x + 3; }
int g (int x) { return x + 13; }
  };
  struct B: AS {
int id1;
B () { id1 = 444; }
virtual int f (int x) { return x + 4; }
int g (int x) { return x + 14; }
  };
}

int main (int argc, char *argv[])
{
  B obj1;
  S::B obj2;
  return
obj1.f (argc) +
obj2.f (argc) +
obj1.g (argc) +
obj2.g (argc);
}

And in gdb, it is:

$ gdb -q t-class2
Reading symbols from t-class2...done.
(gdb) b S::B::f
Breakpoint 1 at 0x400775: file t-class2.cc, line 25.
(gdb) b S::B::g
Breakpoint 2 at 0x400789: file t-class2.cc, line 26.
(gdb) r
Starting program: /home/dantipov/tmp/t-class2

Breakpoint 1, S::B::f (this=0x7fffdb50, x=1) at t-class2.cc:25
25  virtual int f (int x) { return x + 4; }
(gdb) bt
#0  S::B::f (this=0x7fffdb50, x=1) at t-class2.cc:25
#1  0x00400643 in main (argc=1, argv=0x7fffdc68) at t-class2.cc:36
(gdb) p this
$1 = (S::B * const) 0x7fffdb50
(gdb) p *this
$2 = {<S::AS> = {_vptr.AS = 0x400840 <vtable for S::B+16>, id0 = 333}, id1 = 
444}
(gdb) c
Continuing.

Breakpoint 2, S::B::g (this=0x7fffdb50, x=1) at t-class2.cc:26
26  int g (int x) { return x + 14; }
(gdb) bt
#0  S::B::g (this=0x7fffdb50, x=1) at t-class2.cc:26
#1  0x00400669 in main (argc=1, argv=0x7fffdc68) at t-class2.cc:38
(gdb) p this
$3 = (S::B * const) 0x7fffdb50
(gdb) p *this
$4 = {<S::AS> = {_vptr.AS = 0x400840 <vtable for S::B+16>, id0 = 333}, id1 = 
444}

That is, in the calls to obj2.f () and obj2.g (), 'this' is 0x7fffdb50 in both 
cases, and the object itself is {333, 444}.

With lldb, it is:

$ /home/dantipov/.local/llvm-6.0.0/bin/lldb t-class2
(lldb) target create "t-class2"
Current executable set to 't-class2' (x86_64).
(lldb) breakpoint set -n S::B::f
Breakpoint 1: where = t-class2`S::B::f(int) at t-class2.cc:25, address = 
0x0040076a
(lldb) breakpoint set -n S::B::g
Breakpoint 2: where = t-class2`S::B::g(int) + 11 at t-class2.cc:26, address = 
0x00400789
(lldb) run
Process 5180 launched: '/home/dantipov/tmp/t-class2' (x86_64)
Process 5180 stopped
* thread #1, name = 't-class2', stop reason = breakpoint 1.1
frame #0: 0x0040076a t-class2`S::B::f(this=0x7fffdb50, x=1) 
at t-class2.cc:25
   22 struct B: AS {
   23   int id1;
   24   B () { id1 = 444; }
-> 25virtual int f (int x) { return x + 4; }
   26   int g (int x) { return x + 14; }
   27 };
   28   }
(lldb) bt
* thread #1, name = 't-class2', stop reason = breakpoint 1.1
  * frame #0: 0x0040076a t-class2`S::B::f(this=0x7fffdb50, x=1) 
at t-class2.cc:25
frame #1: 0x00400643 t-class2`main(argc=1, argv=0x7fffdc58) 
at t-class2.cc:36
frame #2: 0x7712000a libc.so.6`__libc_start_main(main=(t-class2`main at t-class2.cc:31), argc=1, argv=0x7fffdc58, init=<unavailable>, fini=<unavailable>, rtld_fini=<unavailable>, 
stack_end=0x7fffdc48) at libc-start.c:308

frame #3: 0x0040054a t-class2`_start + 42
(lldb) p this
(S::B *) $0 = 0x7fffdb50
(lldb) p *this
(S::B) $1 = {
  S::AS = (id0 = 111)
  id1 = 222
}
(lldb) c
Process 5180 resuming
Process 5180 stopped
* thread #1, name = 't-class2', stop reason = breakpoint 2.1
frame #0: 0x00400789 t-class2`S::B::g(this=0x7fffdb40, x=1) 
at t-class2.cc:26
   23   int id1;
   24   B () { id1 = 444; }
   25   virtual int f (int x) { return x + 4; }
-> 26int g (int x) { return x + 14; }
   27 };
   28   }
   29   
(lldb) bt
* thread #1, name = 't-class2', stop reason = breakpoint 2.1
  * frame #0: 0x00400789 t-class2`S::B::g(this=0x7fffdb40, x=1) 
at t-class2.cc:26
frame #1: 0x00400669 t-class2`main(argc=1, argv=0x7fffdc58) 
at t-class2.cc:38
frame #2: 0x7712000a libc.so.6`__libc_start_main(main=(t-class2`main at t-class2.cc:31), argc=1, argv=0x7fffdc58, init=<unavailable>, fini=<unavailable>, rtld_fini=<unavailable>, 
stack_end=0x7fffdc48) at libc-start.c:308

frame #3: 0x0040054a t-class2`_start + 42
(lldb) p this
(S::B *) $2 = 0x7fffdb40
(lldb) p *this
(S::B) $3 = {
  S::AS = (id0 = 333)
  id1 = 444
}

Here 'this' differs between the calls to obj2.f () and obj2.g () 
(0x7fffdb50 vs. 0x7fffdb40), and the objects are shown as {111, 222} vs. 
{333, 444}, although both calls are made on the same obj2.

[lldb-dev] 'breakpoint delete' vs. 'breakpoint disable'

2018-02-16 Thread Dmitry Antipov via lldb-dev

While operating on breakpoints, is it correct to use 'breakpoint delete' without 
a previous 'breakpoint disable'? With this scenario, I'm observing a 6.0.0-rc2 
crash:

$ /home/dantipov/.local/llvm-6.0.0/bin/lldb t-thread2
(lldb) target create "t-thread2"
Current executable set to 't-thread2' (x86_64).
(lldb) breakpoint set -n g
Breakpoint 1: where = t-thread2`g(int) + 7 at t-thread2.cc:9, address = 
0x00400d0e
(lldb) run
Process 19195 launched: '/home/dantipov/tmp/t-thread2' (x86_64)
Process 19195 stopped
* thread #2, name = 't-thread2', stop reason = breakpoint 1.1
frame #0: 0x00400d0e t-thread2`g(v=0) at t-thread2.cc:9
   6g (int v)
   7{
   8  (void) v;
-> 9 }
   10   
   11   void
   12   f (int v)
(lldb) process continue
Process 19195 resuming
Process 19195 stopped
* thread #3, name = 't-thread2', stop reason = breakpoint 1.1
frame #0: 0x00400d0e t-thread2`g(v=1) at t-thread2.cc:9
   6g (int v)
   7{
   8  (void) v;
-> 9 }
   10   
   11   void
   12   f (int v)
(lldb) process continue
Process 19195 resuming
Process 19195 stopped
* thread #2, name = 't-thread2', stop reason = breakpoint 1.1
frame #0: 0x00400d0e t-thread2`g(v=0) at t-thread2.cc:9
   6g (int v)
   7{
   8  (void) v;
-> 9 }
   10   
   11   void
   12   f (int v)
(lldb) breakpoint delete
About to delete all breakpoints, do you want to do that?: [Y/n] Y
All breakpoints removed. (1 breakpoint)
(lldb) process continue
Process 19195 resuming
Segmentation fault (core dumped)

There is no crash if 'breakpoint disable' is issued before 'breakpoint delete'.
Sample program attached.

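My guess at the shape of the bug, as a standalone sketch (the types are 
stand-ins, not the real LLDB classes): the stop reason of a suspended thread 
keeps a non-owning pointer to the breakpoint it hit, 'breakpoint delete' 
destroys the breakpoint, and the subsequent 'process continue' dereferences 
freed memory.

#include <memory>
#include <vector>

struct Breakpoint { int id; };

struct StopInfo { Breakpoint *bp; };  // non-owning reference to the hit breakpoint

int main ()
{
  std::vector<std::unique_ptr<Breakpoint>> breakpoints;
  breakpoints.push_back (std::make_unique<Breakpoint> (Breakpoint{1}));

  StopInfo stop{breakpoints.front ().get ()};  // thread stopped at breakpoint 1.1
  breakpoints.clear ();                        // 'breakpoint delete' destroys it
  return stop.bp->id;                          // 'process continue' reads freed memory
}
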
Dmitry

#include <chrono>
#include <cstdlib>
#include <thread>
#include <vector>

void
g (int v)
{
  (void) v;
}

void
f (int v)
{
  while (true)
{
  g (v);
  std::this_thread::sleep_for (std::chrono::milliseconds (100 + std::rand () % 100));
}
}

int
main (int argc, char *argv[])
{
  auto max = argc > 1 ? std::atoi (argv[1]) : 2;

  std::vector<std::thread *> T;
  for (auto i = 0; i < max; i++)
    T.push_back (new std::thread (f, i));
  for (auto t : T)
    t->join ();

  return 0;
}
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Pending breakpoints to dlsym()ed functions

2018-02-15 Thread Dmitry Antipov via lldb-dev

On 02/15/2018 02:21 PM, Pavel Labath wrote:


I've tried your sample, and I was indeed able to reproduce the
problem. What makes your case special is that "sin" and "cos" are
indirect functions (STT_GNU_IFUNC), so we have to do some extra work
(call the resolver function) to resolve them. 

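For context, an STT_GNU_IFUNC symbol looks roughly like this (a hedged sketch 
using GCC's ifunc attribute; glibc's sin()/cos() are built along these lines, 
and the names below are made up):

#include <cstdio>

extern "C" {
static double foo_generic (double v) { return v * 2.0; }
static double foo_fast (double v) { return v + v; }

// The dynamic linker calls the resolver once, at symbol resolution time, to
// pick the real implementation; this is the "extra work" mentioned above.
static double (*resolve_foo (void)) (double)
{
  return __builtin_cpu_supports ("avx2") ? foo_fast : foo_generic;
}
}

double foo (double v) __attribute__ ((ifunc ("resolve_foo")));

int main ()
{
  std::printf ("%f\n", foo (21.0));
}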

I've changed my sample to dlsym() a regular function instead of an indirect
stub, and got a breakpoint hit, but:

(lldb) attach 16196
Process 16196 stopped
* thread #1, name = 'main', stop reason = signal SIGSTOP
frame #0: 0x00400798 main`main(argc=1, argv=0x7ffd6f662668) at 
main.c:16
   13 for (a = 0; a < DELAY + argc; a++)
   14   for (b = 0; b < DELAY + argc; b++)
   15 for (c = 0; c < DELAY + argc; c++)
-> 16z += a + b + c;
   17 while (1)
   18   {
   19 void *handle = dlopen ("libfoo.so", RTLD_LAZY);

Executable module set to "/home/dantipov/tmp/t-dl2/main".
Architecture set to: x86_64--linux.
(lldb) breakpoint set -n foo
Breakpoint 1: no locations (pending).
WARNING:  Unable to resolve breakpoint to any actual locations.
(lldb) process continue
Process 16196 resuming
1 location added to breakpoint 1
(lldb) error: ld-linux-x86-64.so.2 0x0005d207: adding range [0x14eea-0x14f5a) which has a base that is less than the function's low PC 0x15730. Please file a bug and attach the file at the start of 
this error message
error: ld-linux-x86-64.so.2 0x0005d207: adding range [0x14f70-0x14f76) which has a base that is less than the function's low PC 0x15730. Please file a bug and attach the file at the start of this 
error message
error: ld-linux-x86-64.so.2 0x0005d268: adding range [0x14eea-0x14f5a) which has a base that is less than the function's low PC 0x15730. Please file a bug and attach the file at the start of this 
error message
error: ld-linux-x86-64.so.2 0x0005d268: adding range [0x14f70-0x14f76) which has a base that is less than the function's low PC 0x15730. Please file a bug and attach the file at the start of this 
error message

Process 16196 stopped
* thread #1, name = 'main', stop reason = breakpoint 1.1
frame #0: 0x7f3b1a8536f7 
libfoo.so`foo(v=0.03907985046680551) at libfoo.c:6
   3double
   4foo (double v)
   5{
-> 6   return sin (v) + cos (v);
   7}

This seems to be another bug, doesn't it?

Dmitry

#include <math.h>

double
foo (double v)
{
  return sin (v) + cos (v);
}
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define DELAY 2000

int
main (int argc, char *argv[])
{
  /* Busy loop long enough to attach the debugger before dlopen happens.  */
  int a, b, c, z = 0;
  for (a = 0; a < DELAY + argc; a++)
for (b = 0; b < DELAY + argc; b++)
  for (c = 0; c < DELAY + argc; c++)
z += a + b + c;
  while (1)
{
  void *handle = dlopen ("libfoo.so", RTLD_LAZY);
  if (handle)
	{
	  int i;
	  double sum = 0.0;
	  double (*fooptr) (double) = dlsym (handle, "foo");
	  for (i = 0; i < 10; i++)
	sum += fooptr (drand48 ());
	  printf ("%lf\n", sum);
	  dlclose (handle);
	}
  else
	fprintf (stderr, "can't open shared object\n");
  sleep (1);
}
  return z;
}
all: libfoo.so main

libfoo.so: libfoo.c
	gcc -fPIC -O0 -g3 -shared -o libfoo.so libfoo.c -lm

main: main.c
	gcc -O0 -g3 -o main main.c -ldl

clean:
	rm -f libfoo.so main
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Pending breakpoints to dlsym()ed functions

2018-02-14 Thread Dmitry Antipov via lldb-dev

I'm trying to set up pending breakpoints for sin() and cos(), which are 
dlsym()ed from libm.so (sample attached), and an attempt to continue execution 
seems to just hang the debugger. For example:

(lldb) attach 17043
Process 17043 stopped
* thread #1, name = 't-dlopen', stop reason = signal SIGSTOP
frame #0: 0x00400728 t-dlopen`main(argc=1, argv=0x7ffd2b0a00c8) 
at t-dlopen.c:21
   18 for (a = 0; a < DELAY + argc; a++)
   19   for (b = 0; b < DELAY + argc; b++)
   20 for (c = 0; c < DELAY + argc; c++)
-> 21z += a + b + c;
   22 while (1)
   23   {
   24 void *handle = dlopen (LIBM_SO, RTLD_LAZY);

Executable module set to "/home/dantipov/tmp/t-dlopen".
Architecture set to: x86_64--linux.
(lldb) breakpoint set -n sin
Breakpoint 1: no locations (pending).
WARNING:  Unable to resolve breakpoint to any actual locations.
(lldb) breakpoint set -n cos
Breakpoint 2: no locations (pending).
WARNING:  Unable to resolve breakpoint to any actual locations.
(lldb) process continue   ;; After this, nothing happens for a long time
Process 17043 resuming
(lldb) process status     ;; After this, lldb hangs and has to be killed

I've tried 6.0.0-rc2 as well as 7.0.0 svn trunk 325127, with the same 
disappointing results.

Dmitry

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#ifdef __ANDROID__
#define LIBM_SO "libm.so"
#define DELAY 1500
#else
#include <gnu/lib-names.h>
#define DELAY 2000
#endif

int
main (int argc, char *argv[])
{
  /* Busy loop long enough to attach the debugger before dlopen happens.  */
  int a, b, c, z = 0;
  for (a = 0; a < DELAY + argc; a++)
for (b = 0; b < DELAY + argc; b++)
  for (c = 0; c < DELAY + argc; c++)
z += a + b + c;
  while (1)
{
  void *handle = dlopen (LIBM_SO, RTLD_LAZY);
  if (handle)
	{
	  int i;
	  double sum = 0.0;
	  double (*sinptr) (double) = dlsym (handle, "sin");
	  double (*cosptr) (double) = dlsym (handle, "cos");
	  for (i = 0; i < 10; i++)
	sum += sinptr (drand48 ()) + cosptr (drand48 ());
	  printf ("%lf\n", sum);
	  dlclose (handle);
	}
  sleep (1);
}
  return z;
}
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Crash in "intern-state" thread after removing breakpoints and continue

2018-02-06 Thread Dmitry Antipov via lldb-dev

Hello,

I'm facing the following 6.0.0-rc1 crash on Linux/X86:

#0  0x70e027b6 in std::__uniq_ptr_impl<lldb_private::ThreadSpec, std::default_delete<lldb_private::ThreadSpec> >::_M_ptr (this=0x28)
at /usr/include/c++/7/bits/unique_ptr.h:147
#1  0x70e01cbe in std::unique_ptr<lldb_private::ThreadSpec, std::default_delete<lldb_private::ThreadSpec> >::get (this=0x28) at 
/usr/include/c++/7/bits/unique_ptr.h:337
#2  0x70e00860 in 
lldb_private::BreakpointOptions::GetThreadSpecNoCreate (this=0x0)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Breakpoint/BreakpointOptions.cpp:524
#3  0x70df6474 in lldb_private::BreakpointLocation::ValidForThisThread 
(this=0x61ad90, thread=0x7fffd40018f0)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Breakpoint/BreakpointLocation.cpp:387
#4  0x70df8c2b in 
lldb_private::BreakpointLocationCollection::ValidForThisThread (this=0x55e020, 
thread=0x7fffd40018f0)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Breakpoint/BreakpointLocationCollection.cpp:152
#5  0x70e10dd8 in lldb_private::BreakpointSite::ValidForThisThread 
(this=0x55dfd0, thread=0x7fffd40018f0)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Breakpoint/BreakpointSite.cpp:146
#6  0x714d602c in 
lldb_private::process_gdb_remote::ProcessGDBRemote::SetThreadStopInfo 
(this=0x5f1a40, tid=27530, expedited_register_map=..., signo=5 '\005',
thread_name=..., reason=..., description=..., exc_type=0, exc_data=..., 
thread_dispatch_qaddr=18446744073709551615, queue_vars_valid=false,
associated_with_dispatch_queue=lldb_private::eLazyBoolCalculate, 
dispatch_queue_t=18446744073709551615, queue_name=..., 
queue_kind=lldb::eQueueKindUnknown, queue_serial=0)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp:1880
#7  0x714da439 in 
lldb_private::process_gdb_remote::ProcessGDBRemote::SetThreadStopInfo 
(this=0x5f1a40, stop_packet=...)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp:2371
#8  0x714da598 in 
lldb_private::process_gdb_remote::ProcessGDBRemote::RefreshStateAfterStop 
(this=0x5f1a40)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp:2407
#9  0x7110378c in lldb_private::Process::ShouldBroadcastEvent 
(this=0x5f1a40, event_ptr=0x7fffdc014a00)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Target/Process.cpp:3658
#10 0x7110411d in lldb_private::Process::HandlePrivateEvent 
(this=0x5f1a40, event_sp=...) at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Target/Process.cpp:3907
#11 0x71104959 in lldb_private::Process::RunPrivateStateThread 
(this=0x5f1a40, is_secondary_thread=false)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Target/Process.cpp:4106
#12 0x711044b2 in lldb_private::Process::PrivateStateThread 
(arg=0x614210) at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Target/Process.cpp:3999
#13 0x70f7a6e7 in 
lldb_private::HostNativeThreadBase::ThreadCreateTrampoline (arg=0x616250)
at 
/home/dantipov/llvm/6.0.0/source/tools/lldb/source/Host/common/HostNativeThreadBase.cpp:66
#14 0x77bbf36d in start_thread () from /lib64/libpthread.so.0
#15 0x7fffef3d6b4f in clone () from /lib64/libc.so.6

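Note the this=0x28 in frames #0/#1 and this=0x0 in frame #2: 
GetThreadSpecNoCreate() is apparently invoked on a null BreakpointOptions 
pointer whose unique_ptr member lives at offset 0x28. A hypothetical 
illustration of that pattern (not the actual LLDB code):

#include <cstdio>

struct ThreadSpec { int dummy; };

struct BreakpointOptions
{
  char padding[0x28];   // stand-in for whatever precedes the pointer
  ThreadSpec *spec;     // lands at offset 0x28, like m_thread_spec_up
  const ThreadSpec *GetThreadSpecNoCreate () const { return spec; }
};

int main ()
{
  BreakpointOptions *options = nullptr;  // this=0x0, as in frame #2
  // Reading 'spec' dereferences nullptr + 0x28, producing the bogus
  // this=0x28 seen in frames #0/#1.
  std::printf ("%p\n", static_cast<const void *> (options->GetThreadSpecNoCreate ()));
  return 0;
}
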
Test program (bug.cc) and recipe to reproduce (bug.txt) attached.
7.0.0 SVN trunk looks also affected, but stable 5.0.1 isn't.

I've also requested an account at https://bugs.llvm.org, and will
create a bug report as soon as my registration is approved.

Dmitry

/*
 * Compile with g++ -pthread -std=c++11 -O0 -g3 -o bug bug.cc
 */

#include <chrono>
#include <cstdlib>
#include <thread>
#include <vector>

int depth;

void
bp1 (int t, int level, int n)
{
  auto r = std::rand () % (t + level + n);
  std::this_thread::sleep_for (std::chrono::milliseconds (r));
}

void
bp2 (int t, int level, int n)
{
  auto r = std::rand () % (t + level + n);
  std::this_thread::sleep_for (std::chrono::milliseconds (r));
}

void
sleeper (int t, int level, int n)
{
  int loop = 0;

  if (++level < depth)
sleeper (t, level, n);
  else
while (true)
  {
	if (++loop % 2)
	  bp1 (t, level, n);
	else
	  bp2 (t, level, n);
  }
}

int
main (int argc, char *argv[])
{
  auto max = argc > 1 ? std::atoi (argv[1]) : 2;
  depth = argc > 2 ? std::atoi (argv[2]) : 8;

  std::vector<std::thread *> T;
  for (auto i = 0; i < max; i++)
{
  auto t = new std::thread