[lldb-dev] [Bug 27806] New: thread step-over dlopen fails while running test_step_over_load (TestLoadUnload.py)

2016-05-18 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=27806

Bug ID: 27806
   Summary: thread step-over dlopen fails while running
test_step_over_load (TestLoadUnload.py)
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: omair.jav...@linaro.org
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

The inferior does not stop as expected after the step-over; it continues to
run and exits.

Log is given below:


(lldb) settings set target.env-vars LD_LIBRARY_PATH=/home/omair/lldb/tmp
(lldb) file
/home/omair/work/lldb-dev/llvm/tools/lldb/packages/Python/lldbsuite/test/functionalities/load_unload/a.out
Current executable set to
'/home/omair/work/lldb-dev/llvm/tools/lldb/packages/Python/lldbsuite/test/functionalities/load_unload/a.out'
(arm).
(lldb) breakpoint set -f "main.cpp" -l 31
Breakpoint 1: where = a.out`main + 38 at main.cpp:31, address = 0x8802
(lldb) run
Process 25796 launched:
'/home/omair/work/lldb-dev/llvm/tools/lldb/packages/Python/lldbsuite/test/functionalities/load_unload/a.out'
(arm)
Process 25796 stopped
* thread #1: tid = 25796, 0x8802 a.out`main(argc=1, argv=0x7efffee4) + 38
at main.cpp:31, name = 'a.out', stop reason = breakpoint 1.1
frame #0: 0x8802 a.out`main(argc=1, argv=0x7efffee4) + 38 at
main.cpp:31
   28  void *c_dylib_handle = NULL;
   29  int (*a_function) (void);
   30  
-> 31  a_dylib_handle = dlopen (a_name, RTLD_NOW); // Set break point
at this line for test_lldb_process_load_and_unload_commands().
   32  if (a_dylib_handle == NULL)
   33  {
   34  fprintf (stderr, "%s\n", dlerror());
(lldb) image lookup -n a_function
(lldb) process load libloadunload_a.so --install
error: failed to load 'libloadunload_a.so': platform install doesn't handle non
file or directory items
(lldb) kill
Process 25796 exited with status = 9 (0x0009) 
(lldb) run
Process 25801 launched:
'/home/omair/work/lldb-dev/llvm/tools/lldb/packages/Python/lldbsuite/test/functionalities/load_unload/a.out'
(arm)
Process 25801 stopped
* thread #1: tid = 25801, 0x8802 a.out`main(argc=1, argv=0x7efffee4) + 38
at main.cpp:31, name = 'a.out', stop reason = breakpoint 1.1
frame #0: 0x8802 a.out`main(argc=1, argv=0x7efffee4) + 38 at
main.cpp:31
   28  void *c_dylib_handle = NULL;
   29  int (*a_function) (void);
   30  
-> 31  a_dylib_handle = dlopen (a_name, RTLD_NOW); // Set break point
at this line for test_lldb_process_load_and_unload_commands().
   32  if (a_dylib_handle == NULL)
   33  {
   34  fprintf (stderr, "%s\n", dlerror());
(lldb) thread list
Process 25801 stopped
* thread #1: tid = 25801, 0x8802 a.out`main(argc=1, argv=0x7efffee4) + 38
at main.cpp:31, name = 'a.out', stop reason = breakpoint 1.1
(lldb) thread step-over 
First time around, got: 500
Second time around, got: 500
d_function returns: 700
Process 25801 exited with status = 0 (0x)
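
For reference, a rough sketch of the failing scenario driven through the
lldb Python API (the binary path, environment variable, and breakpoint line
are taken from the log above; the script itself, including the use of
LaunchSimple, is an assumption for illustration, not the actual
TestLoadUnload.py code):

import os
import lldb

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
# Binary and environment from the log above.
target = debugger.CreateTarget(
    "/home/omair/work/lldb-dev/llvm/tools/lldb/packages/Python/lldbsuite/test/functionalities/load_unload/a.out")
target.BreakpointCreateByLocation("main.cpp", 31)  # the dlopen() call
process = target.LaunchSimple(
    None, ["LD_LIBRARY_PATH=/home/omair/lldb/tmp"], os.getcwd())

thread = process.GetSelectedThread()
thread.StepOver()
# Expected: the process stops again on the line after dlopen().
# Observed (per the log): the process keeps running and exits with status 0.
print("state after step-over: %d" % process.GetState())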



[lldb-dev] bug in TestMiGdbSetShow.test_lldbmi_gdb_set_target_async_off?

2016-05-18 Thread Ted Woodward via lldb-dev
Packages/Python/lldbsuite/test/tools/lldb-mi/TestMiGdbSetShow.py, in
test_lldbmi_gdb_set_target_async_off we have this code:

self.runCmd("-gdb-set target-async off")
...
self.runCmd("-exec-run")
unexpected = [ "\*running" ] # "\*running" is async notification
it = self.expect(unexpected + [ "@\"argc=1\r\n" ])
if it < len(unexpected):
    self.fail("unexpected found: %s" % unexpected[it])

But lldb-mi does the right thing: expect won't match "\*running", so the
self.expect command fails, which causes the test to error out. Shouldn't the
self.expect be in a try, with an except being a pass?
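
A rough sketch of the suggested change follows (assuming self.expect raises
when nothing matches before its timeout, which is what turns this into a
test error today); this is illustration only, not a tested patch:

self.runCmd("-exec-run")
unexpected = [ "\*running" ] # "\*running" is async notification
try:
    it = self.expect(unexpected + [ "@\"argc=1\r\n" ])
    if it < len(unexpected):
        self.fail("unexpected found: %s" % unexpected[it])
except Exception:
    # Per the suggestion above: with target-async off there is no "*running"
    # notification, so a failed match is acceptable rather than a test error.
    pass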

 



Re: [lldb-dev] Enabling tests on the Windows LLVM buildbot

2016-05-18 Thread Ed Maste via lldb-dev
On 18 May 2016 at 05:08, Pavel Labath via lldb-dev wrote:
>
>
> Sounds reasonable. I'd like to add a clarifying point (2.5): If you
> have added a new test, and this test fails on some other platform AND
> there is no reason to believe that this is due to a problem in the
> test (like the python3 bytes thingy, etc.), then just xfailing the
> test for the relevant architecture is fine.

That sounds reasonable to me.

I hope to re-enable tests on the FreeBSD buildbot shortly as well. I
have a "temporary" build-only buildbot I put into service when the
previous ones needed to be decommissioned.

Since FreeBSD is currently the only platform still using the old-style
POSIX in-process debug support, it's quite likely we could run into a
failure when a test is added. I'd prefer to have the test marked XFAIL
on FreeBSD with a bug report (or at least a post to the mailing list)
rather than have it backed out pending investigation.

A bit of a tangent but for reference, on FreeBSD 10 I currently see
the following set of undesired test results:

ERROR: test_with_run_command_dwarf
(functionalities/data-formatter/data-formatter-stl/libstdcpp/string/TestDataFormatterStdString.py)
ERROR: test_with_run_command_dwarf
(functionalities/data-formatter/data-formatter-stl/libstdcpp/list/TestDataFormatterStdList.py)
ERROR: test_with_run_command_dwarf
(functionalities/data-formatter/data-formatter-stl/libstdcpp/iterator/TestDataFormatterStdIterator.py)
ERROR: [EXCEPTIONAL EXIT 10 (SIGBUS)] test_python_os_plugin_dwarf
(functionalities/plugins/python_os_plugin/TestPythonOSPlugin.py)
UNEXPECTED SUCCESS: test_and_run_command_dwarf
(lang/c/register_variables/TestRegisterVariables.py)
UNEXPECTED SUCCESS: test_and_run_command_dwarf
(lang/c/const_variables/TestConstVariables.py)
TIMEOUT: test_asm_int_3
(functionalities/breakpoint/debugbreak/TestDebugBreak.py)
TIMEOUT: test_with_dsym_and_python_api_dwarf
(lang/go/expressions/TestExpressions.py)


[lldb-dev] [Bug 27803] New: Segmentation fault in lldb::SBValue::GetDescription

2016-05-18 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=27803

Bug ID: 27803
   Summary: Segmentation fault in lldb::SBValue::GetDescription
   Product: lldb
   Version: 3.8
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: sascha.ap...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

Steps to reproduce:

1. git clone from https://github.com/eidheim/Simple-Web-Server
2. run cmake and make
3. Run the following lldb commands:
lldb ./Simple-Web-Server/build/debug/http_examples
b http_examples.cpp:72
thread select 2
frame select 4
frame variable

-> Segfault
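
For what it's worth, a hypothetical sketch of the same steps driven through
the Python API (the crash is reported in SBValue::GetDescription); the
binary path, breakpoint, thread and frame numbers come from the steps above,
and the rest, including launching with LaunchSimple and the variable loop,
is an assumption for illustration:

import lldb

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
target = debugger.CreateTarget("./Simple-Web-Server/build/debug/http_examples")
target.BreakpointCreateByLocation("http_examples.cpp", 72)
process = target.LaunchSimple(None, None, ".")  # assumes the breakpoint is hit

thread = process.GetThreadAtIndex(1)  # CLI "thread select 2"; API index is 0-based
frame = thread.GetFrameAtIndex(4)     # CLI "frame select 4"
for value in frame.GetVariables(True, True, True, True):  # args, locals, statics, in scope
    stream = lldb.SBStream()
    value.GetDescription(stream)      # the segfault is reported in this call
    print(stream.GetData())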



Re: [lldb-dev] Enabling tests on the Windows LLVM buildbot

2016-05-18 Thread Pavel Labath via lldb-dev
Hi,

I am glad to see more automated testing of lldb. I think it's very
valuable as a lot of people don't have access to that platform.

On 17 May 2016 at 20:54, Zachary Turner via lldb-dev wrote:
> Hi all,
>
> I'm going to be submitting a change shortly to enable "ninja check-lldb" on
> the upstream Windows lldb buildbot.  For now this is an experiment to see
> how well this will go, but I would eventually like this to become permanent.
> As with build breakages, the bot needs to stay green, so here's what I'm
> thinking:
>
> 1. If your change breaks the Windows buildbot, please check to see if it's
> something obvious.  Did you use a Python string instead of a bytes?  Did you
> hardcode /dev/null instead of using a portable approach?  Did you call a
> function like getpid() in a test which doesn't exist on Windows?  Did you
> hardcode _Z at the beginning of a symbol name instead of using a
> mangling-aware approach?  Clear errors in patches should be fixed quickly or
> reverted and resubmitted after being fixed.
>
> 2. If you can't identify why it's broken and/or need help debugging and
> testing on Windows, please revert the patch in a timely manner and ask me or
> Adrian for help.
>
> 3. If the test cannot be written in a way that will work on Windows (e.g.
> requires pexpect, uses an unsupported debugger feature like watchpoints,
> etc), then xfail or skip the test.

Sounds reasonable. I'd like to add a clarifying point (2.5): If you
have added a new test, and this test fails on some other platform AND
there is no reason to believe that this is due to a problem in the
test (like the python3 bytes thingy, etc.), then just xfailing the
test for the relevant architecture is fine. The typical situation
I'm thinking of here is person A fixing a bug in code specific to
platform X and adding a platform-agnostic test, which exposes a
similar bug in platform Y. If all the existing tests pass then the new
patch is definitely not making the situation worse, while taking the
patch out would leave platform X broken (and we do want to encourage
people to write tests for bugs they fix). In this case, I think a more
appropriate course of action would be notifying the platform
maintainer (email, filing a bug, ...) and providing background on
what the test is attempting to do and any other insight you might have
into why it could be broken.
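
For the record, a minimal sketch of what point 2.5 could look like in
practice, assuming the lldbsuite decorators are used (the OS list and bug
number below are placeholders, not a real report):

from lldbsuite.test.decorators import expectedFailureAll
from lldbsuite.test.lldbtest import TestBase

class MyNewTestCase(TestBase):

    # xfail only on the platform where the new test exposed an existing bug
    @expectedFailureAll(oslist=["freebsd"], bugnumber="llvm.org/prNNNNN")
    def test_something_platform_agnostic(self):
        ...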

What do you think?


>
> 4. In some cases the test might be flaky.  If your patch appears to have
> nothing to do with the failure message you're seeing in the log file, it
> might be flaky.  Let it run again and see if it clears up.

I'm curious whether you have measured what the ratio of flaky builds
for your platform is. I am currently inching towards doing the same
thing for the Linux buildbot as well (*). I've gotten it down to about
2-3 flaky builds per week, which I consider an acceptable state given
the circumstances, but I'm going to continue
tracking down all the other issues as well. So, I'm asking this, as I
think we should have some common standard of what is considered to be
acceptable buildbot behaviour. In any case, I'm interested to see how
the experiment turns out.

(*) My current plan for this is the end of June, when I get back from
holiday, so I can keep a close eye on it.

cheers,
pl