[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-04-03 Thread Jim Ingham via Phabricator via lldb-commits
This revision was automatically updated to reflect the committed changes.
Closed by commit rG1893065d7bf5: Allow the ThreadPlanStackMap to hold the 
thread plans for threads that were not… (authored by jingham).

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814

Files:
  lldb/include/lldb/Target/Process.h
  lldb/include/lldb/Target/Target.h
  lldb/include/lldb/Target/Thread.h
  lldb/include/lldb/Target/ThreadPlan.h
  lldb/include/lldb/Target/ThreadPlanStack.h
  lldb/source/Commands/CommandObjectThread.cpp
  lldb/source/Commands/Options.td
  lldb/source/Target/Process.cpp
  lldb/source/Target/Target.cpp
  lldb/source/Target/TargetProperties.td
  lldb/source/Target/Thread.cpp
  lldb/source/Target/ThreadList.cpp
  lldb/source/Target/ThreadPlan.cpp
  lldb/source/Target/ThreadPlanStack.cpp
  lldb/source/Target/ThreadPlanStepOut.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/Makefile
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/TestOSPluginStepping.py
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/operating_system.py
  lldb/test/API/functionalities/thread_plan/Makefile
  lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
  lldb/test/API/functionalities/thread_plan/main.c

Index: lldb/test/API/functionalities/thread_plan/main.c
===
--- /dev/null
+++ lldb/test/API/functionalities/thread_plan/main.c
@@ -0,0 +1,16 @@
+#include <stdio.h>
+
+void
+call_me(int value) {
+  printf("called with %d\n", value); // Set another here.
+}
+
+int
+main(int argc, char **argv)
+{
+  call_me(argc); // Set a breakpoint here.
+  printf("This just spaces the two calls\n");
+  call_me(argc); // Run here to step over again.
+  printf("More spacing\n");
+  return 0; // Make sure we get here on last continue
+}
Index: lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
===
--- /dev/null
+++ lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
@@ -0,0 +1,164 @@
+"""
+Test that thread plan listing and deleting work.
+"""
+
+
+
+import lldb
+import lldbsuite.test.lldbutil as lldbutil
+from lldbsuite.test.lldbtest import *
+
+
+class TestThreadPlanCommands(TestBase):
+
+    mydir = TestBase.compute_mydir(__file__)
+
+    NO_DEBUG_INFO_TESTCASE = True
+
+    def test_thread_plan_actions(self):
+        self.build()
+        self.main_source_file = lldb.SBFileSpec("main.c")
+        self.thread_plan_test()
+
+    def check_list_output(self, command, active_plans = [], completed_plans = [], discarded_plans = []):
+        # Check the "thread plan list" output against lists of active, completed and discarded plans.
+        # If all three check arrays are empty, the command is expected to fail.
+
+        interp = self.dbg.GetCommandInterpreter()
+        result = lldb.SBCommandReturnObject()
+
+        num_active = len(active_plans)
+        num_completed = len(completed_plans)
+        num_discarded = len(discarded_plans)
+
+        interp.HandleCommand(command, result)
+        if self.TraceOn():
+            print("Command: %s"%(command))
+            print(result.GetOutput())
+
+        if num_active == 0 and num_completed == 0 and num_discarded == 0:
+            self.assertFalse(result.Succeeded(), "command: '%s' succeeded when it should have failed: '%s'"%
+                             (command, result.GetError()))
+            return
+
+        self.assertTrue(result.Succeeded(), "command: '%s' failed: '%s'"%(command, result.GetError()))
+        result_arr = result.GetOutput().splitlines()
+        num_results = len(result_arr)
+
+        # Match the expected number of elements.
+        # Adjust the count for the number of header lines we aren't matching:
+        fudge = 0
+
+        if num_completed == 0 and num_discarded == 0:
+            # The fudge is 3: Thread header, Active Plan header and base plan.
+            fudge = 3
+        elif num_completed == 0 or num_discarded == 0:
+            # The fudge is 4: the above plus either the Completed or Discarded Plan header.
+            fudge = 4
+        else:
+            # The fudge is 5 since we have both headers.
+            fudge = 5
+
+        self.assertEqual(num_results, num_active + num_completed + num_discarded + fudge,
+                         "Too many elements in match arrays")
+
+        # Now iterate through the results array and pick out the results.
+        result_idx = 0
+        self.assertIn("thread #", result_arr[result_idx], "Found thread header") ; result_idx += 1
+        self.assertIn("Active plan stack", result_arr[result_idx], "Found active header") ; result_idx += 1

[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-04-03 Thread Jim Ingham via Phabricator via lldb-commits
jingham added a comment.

In D76814#1958988, @labath wrote:

> Looks fine to me, though I can't say I'm much of a thread plan expert (but 
> then again, I don't know if anyone except you is).


That's not a great situation, but it is where we currently are...


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814



___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-04-03 Thread Pavel Labath via Phabricator via lldb-commits
labath accepted this revision.
labath added a comment.
This revision is now accepted and ready to land.

Looks fine to me, though I can't say I'm much of a thread plan expert (but then 
again, I don't know if anyone except you is).


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814





[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-04-02 Thread Jim Ingham via Phabricator via lldb-commits
jingham updated this revision to Diff 254664.
jingham added a comment.

I changed target.process.plugin-reports-all-threads to 
target.experimental.os-plugin-reports-all-threads.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814

Files:
  lldb/include/lldb/Target/Process.h
  lldb/include/lldb/Target/Target.h
  lldb/include/lldb/Target/Thread.h
  lldb/include/lldb/Target/ThreadPlan.h
  lldb/include/lldb/Target/ThreadPlanStack.h
  lldb/source/Commands/CommandObjectThread.cpp
  lldb/source/Commands/Options.td
  lldb/source/Target/Process.cpp
  lldb/source/Target/Target.cpp
  lldb/source/Target/TargetProperties.td
  lldb/source/Target/Thread.cpp
  lldb/source/Target/ThreadList.cpp
  lldb/source/Target/ThreadPlan.cpp
  lldb/source/Target/ThreadPlanStack.cpp
  lldb/source/Target/ThreadPlanStepOut.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/Makefile
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/TestOSPluginStepping.py
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/operating_system.py
  lldb/test/API/functionalities/thread_plan/Makefile
  lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
  lldb/test/API/functionalities/thread_plan/main.c


[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-04-02 Thread Pavel Labath via Phabricator via lldb-commits
labath added inline comments.



Comment at: lldb/source/Target/TargetProperties.td:183
 Desc<"A path to a python OS plug-in module file that contains a 
OperatingSystemPlugIn class.">;
+  def PluginReportsAllThreads: Property<"plugin-reports-all-threads", 
"Boolean">,
+Global,

jingham wrote:
> labath wrote:
> > jingham wrote:
> > > labath wrote:
> > > > jingham wrote:
> > > > > labath wrote:
> > > > > > If this is relevant only for os plugins, then it would be good to 
> > > > > > reflect that in the name as well.
> > > > > I thought about that.  In the future a regular Process plugin might 
> > > > > decide it was too expensive to report all threads as well.  There's 
> > > > > nothing in this patch that wouldn't "just work" with that case as 
> > > > > well.  Leaving the OS out was meant to indicate this was about how 
> > > > > the Process plugin OR any of its helpers (e.g. the OS Plugin) 
> > > > > produces threads.
> > > > Well, I am hoping that if we ever extend this support to the regular 
> > > > process plugins, we will implement it in a way where the plugin itself 
> > > > can tell us whether it is operating/supporting this mode (which I guess 
> > > > would involve a new gdb-remote packet and some specification of what 
> > > > exactly should happen when it gets sent), instead of relying on the 
> > > > user to set this correctly.
> > > > 
> > > > I mean, in an ideal world this is what I would want to happen with the 
> > > > python plugins as well, but it seems that we are stuck with some 
> > > > existing plugins which already do that. However, I wouldn't want to 
> > > > plan on the same thing happening again. :)
> > > Right, I put this in mostly to help backwards compatibility.  For 
> > > instance, another OS Plugin I know about handles some older cooperative 
> > > threading scheme.  That one does report all threads on each stop.  I 
> > > didn't want to force them to do anything to keep their plugin working as 
> > > well as it did before.  That's also why I set the default to true here.
> > > 
> > > Even when we have plugins that actually support not reporting all 
> > > threads, you could imagine somebody having a Process plugin that supports 
> > > both modes - particularly early in the development of its support for 
> > > this feature, and in some corner cases the "doesn't report all threads" 
> > > mode has some subtle problem.  Having this setting will allow people to 
> > > get the slower but more assuredly correct behavior till it works 100% 
> > > reliably.  So I still think the setting has some value.
> > > 
> > > But I agree, going forward there should be some kind of handshake between 
> > > the ThreadPlanStackMap and the Process Plugin, either a "I've reported 
> > > all threads now" which could trigger a prune, or a "Is TID X still alive" 
> > > which the generic code could use to balance the cost of keeping outdated 
> > > stacks alive against when we want to ask about all threads.
> > All of this sounds fine, and I wouldn't mind having a setting like that, 
> > even after the support for partial thread lists is considered "stable". 
> > However, that sounds like a different setting to me -- that's like a 
> > directive to the process plugin about how it should behave, whereas this 
> > setting is more like a statement of fact about what the plugin does.
> > 
> > The setting could be later repurposed to do both, but it's not clear to me 
> > whether that is useful. Like, since we already support plugin-specific 
> > settings, the plugin which decides to implement both modes of operation 
> > could expose that as a custom setting. That way, one wouldn't get the 
> > impression that this setting applies to any process plugin...
> Okay.  I don't want to have to design that right now.  
> 
> Since this is just a workaround for the fact that the extant OS plugins don't 
> have a way to tell me this fact, I'll go back to using this just for OS 
> plugins.  In that case, I think I should move it to experimental.  Once we 
> add an API to the various plugins to advertise how they work, it won't be 
> needed.  The benefit of experimental settings is that their absence doesn't 
> cause the "settings set" to raise an error, so people can leave this in their 
> lldbinit's and it won't cause problems once we use it.
> 
> Now I just have to figure out how experimental settings work in the new 
> tablegen way of doing properties...
An experimental setting sounds good to me.
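Because "settings set" silently ignores an absent experimental setting rather than raising an error, users can leave the option in their init file across lldb versions. A sketch of such a .lldbinit entry, using the `target.experimental.os-plugin-reports-all-threads` spelling mentioned elsewhere in this thread:

```
# Safe to leave in ~/.lldbinit: if this lldb doesn't know the experimental
# setting, "settings set" ignores it instead of erroring out.
settings set target.experimental.os-plugin-reports-all-threads false
```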


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814





[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-04-01 Thread Jim Ingham via Phabricator via lldb-commits
jingham marked 5 inline comments as done.
jingham added inline comments.



Comment at: lldb/source/Target/TargetProperties.td:183
 Desc<"A path to a python OS plug-in module file that contains a 
OperatingSystemPlugIn class.">;
+  def PluginReportsAllThreads: Property<"plugin-reports-all-threads", 
"Boolean">,
+Global,

labath wrote:
> jingham wrote:
> > labath wrote:
> > > jingham wrote:
> > > > labath wrote:
> > > > > If this is relevant only for os plugins, then it would be good to 
> > > > > reflect that in the name as well.
> > > > I thought about that.  In the future a regular Process plugin might 
> > > > decide it was too expensive to report all threads as well.  There's 
> > > > nothing in this patch that wouldn't "just work" with that case as well. 
> > > >  Leaving the OS out was meant to indicate this was about how the 
> > > > Process plugin OR any of its helpers (e.g. the OS Plugin) produces 
> > > > threads.
> > > Well, I am hoping that if we ever extend this support to the regular 
> > > process plugins, we will implement it in a way where the plugin itself 
> > > can tell us whether it is operating/supporting this mode (which I guess 
> > > would involve a new gdb-remote packet and some specification of what 
> > > exactly should happen when it gets sent), instead of relying on the user 
> > > to set this correctly.
> > > 
> > > I mean, in an ideal world this is what I would want to happen with the python 
> > > plugins as well, but it seems that we are stuck with some existing 
> > > plugins which already do that. However, I wouldn't want to plan on the 
> > > same thing happening again. :)
> > Right, I put this in mostly to help backwards compatibility.  For instance, 
> > another OS Plugin I know about handles some older cooperative threading 
> > scheme.  That one does report all threads on each stop.  I didn't want to 
> > force them to do anything to keep their plugin working as well as it did 
> > before.  That's also why I set the default to true here.
> > 
> > Even when we have plugins that actually support not reporting all threads, 
> > you could imagine somebody having a Process plugin that supports both modes 
> > - particularly early in the development of its support for this feature, 
> > and in some corner cases the "doesn't report all threads" mode has some 
> > subtle problem.  Having this setting will allow people to get the slower 
> > but more assuredly correct behavior till it works 100% reliably.  So I 
> > still think the setting has some value.
> > 
> > But I agree, going forward there should be some kind of handshake between 
> > the ThreadPlanStackMap and the Process Plugin, either a "I've reported all 
> > threads now" which could trigger a prune, or a "Is TID X still alive" which 
> > the generic code could use to balance the cost of keeping outdated stacks 
> > alive against when we want to ask about all threads.
> All of this sounds fine, and I wouldn't mind having a setting like that, even 
> after the support for partial thread lists is considered "stable". However, 
> that sounds like a different setting to me -- that's like a directive to the 
> process plugin about how it should behave, whereas this setting is more like 
> a statement of fact about what the plugin does.
> 
> The setting could be later repurposed to do both, but it's not clear to me 
> whether that is useful. Like, since we already support plugin-specific 
> settings, the plugin which decides to implement both modes of operation could 
> expose that as a custom setting. That way, one wouldn't get the impression 
> that this setting applies to any process plugin...
Okay.  I don't want to have to design that right now.  

Since this is just a workaround for the fact that the extant OS plugins don't 
have a way to tell me this fact, I'll go back to using this just for OS 
plugins.  In that case, I think I should move it to experimental.  Once we add 
an API to the various plugins to advertise how they work, it won't be needed.  
The benefit of experimental settings is that their absence doesn't cause the 
"settings set" to raise an error, so people can leave this in their lldbinit's 
and it won't cause problems once we use it.

Now I just have to figure out how experimental settings work in the new 
tablegen way of doing properties...



Comment at: 
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp:40
+  std::thread thread_1(thread_func);
+  g_cv.wait(main_lock);
+  g_value = 1;

labath wrote:
> spurious wake up danger here too
g_value had a different purpose and using it more widely as the variable 
backing the condition_variable was awkward, so I added an explicit condition 
variable.



Comment at: 
lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py:23
+
+def check_list_output(self, command, active_plans = [], completed_plans = 
[], 

[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-04-01 Thread Jim Ingham via Phabricator via lldb-commits
jingham updated this revision to Diff 254372.
jingham added a comment.

Add a variable backing the std::condition_variable in the source for the 
stepping test to catch spurious wakeups.

AssertTrue -> AssertIn in the ThreadPlanList test.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814

Files:
  lldb/include/lldb/Target/Process.h
  lldb/include/lldb/Target/Thread.h
  lldb/include/lldb/Target/ThreadPlan.h
  lldb/include/lldb/Target/ThreadPlanStack.h
  lldb/source/Commands/CommandObjectThread.cpp
  lldb/source/Commands/Options.td
  lldb/source/Target/Process.cpp
  lldb/source/Target/TargetProperties.td
  lldb/source/Target/Thread.cpp
  lldb/source/Target/ThreadList.cpp
  lldb/source/Target/ThreadPlan.cpp
  lldb/source/Target/ThreadPlanStack.cpp
  lldb/source/Target/ThreadPlanStepOut.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/Makefile
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/TestOSPluginStepping.py
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/operating_system.py
  lldb/test/API/functionalities/thread_plan/Makefile
  lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
  lldb/test/API/functionalities/thread_plan/main.c


[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-03-30 Thread Pavel Labath via Phabricator via lldb-commits
labath added inline comments.



Comment at: lldb/source/Target/TargetProperties.td:183
 Desc<"A path to a python OS plug-in module file that contains a 
OperatingSystemPlugIn class.">;
+  def PluginReportsAllThreads: Property<"plugin-reports-all-threads", 
"Boolean">,
+Global,

jingham wrote:
> labath wrote:
> > jingham wrote:
> > > labath wrote:
> > > > If this is relevant only for os plugins, then it would be good to 
> > > > reflect that in the name as well.
> > > I thought about that.  In the future a regular Process plugin might 
> > > decide it was too expensive to report all threads as well.  There's 
> > > nothing in this patch that wouldn't "just work" with that case as well.  
> > > Leaving the OS out was meant to indicate this was about how the Process 
> > > plugin OR any of its helpers (e.g. the OS Plugin) produces threads.
> > Well, I am hoping that if we ever extend this support to the regular 
> > process plugins, we will implement it in a way where the plugin itself can 
> > tell us whether it is operating/supporting this mode (which I guess would 
> > involve a new gdb-remote packet and some specification of what exactly 
> > should happen when it gets sent), instead of relying on the user to set 
> > this correctly.
> > 
> > I mean, in an ideal world this is what I would want to happen with the python 
> > plugins as well, but it seems that we are stuck with some existing plugins 
> > which already do that. However, I wouldn't want to plan on the same thing 
> > happening again. :)
> Right, I put this in mostly to help backwards compatibility.  For instance, 
> another OS Plugin I know about handles some older cooperative threading 
> scheme.  That one does report all threads on each stop.  I didn't want to 
> force them to do anything to keep their plugin working as well as it did 
> before.  That's also why I set the default to true here.
> 
> Even when we have plugins that actually support not reporting all threads, 
> you could imagine somebody having a Process plugin that supports both modes - 
> particularly early in the development of its support for this feature, and in 
> some corner cases the "doesn't report all threads" mode has some subtle 
> problem.  Having this setting will allow people to get the slower but more 
> assuredly correct behavior till it works 100% reliably.  So I still think the 
> setting has some value.
> 
> But I agree, going forward there should be some kind of handshake between the 
> ThreadPlanStackMap and the Process Plugin, either a "I've reported all 
> threads now" which could trigger a prune, or a "Is TID X still alive" which 
> the generic code could use to balance the cost of keeping outdated stacks 
> alive against when we want to ask about all threads.
All of this sounds fine, and I wouldn't mind having a setting like that, even 
after the support for partial thread lists is considered "stable". However, 
that sounds like a different setting to me -- that's like a directive to the 
process plugin about how it should behave, whereas this setting is more like a 
statement of fact about what the plugin does.

The setting could be later repurposed to do both, but it's not clear to me 
whether that is useful. Like, since we already support plugin-specific 
settings, the plugin which decides to implement both modes of operation could 
expose that as a custom setting. That way, one wouldn't get the impression that 
this setting applies to any process plugin...



Comment at: 
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp:40
+  std::thread thread_1(thread_func);
+  g_cv.wait(main_lock);
+  g_value = 1;

spurious wake up danger here too



Comment at: 
lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py:23
+
+def check_list_output(self, command, active_plans = [], completed_plans = 
[], discarded_plans = []):
+# Check the "thread plan list" output against a list of active & 
completed and discarded plans.

jingham wrote:
> labath wrote:
> > jingham wrote:
> > > labath wrote:
> > > > this looks like a model example for using `self.filecheck`
> > > I don't see that.  Do you mean write a file check matching file with the 
> > > contents of the three arrays in this function, and then run file check on 
> > > that?  Or did you mean convert the call sites into embedded patterns and 
> > > then call file check on myself?  But then I'd have to also embed the 
> > > headers in the body such that they came in the right order for all the 
> > > tests.  Neither of these seem like attractive options to me.  It doesn't 
> > > seem like it will be hard to maintain this little checker, and I don't 
> > > see what I would gain by using file check instead.
> > What I meant was doing doing something like:
> > ```
> > self.filecheck("thread plan list %d"%(current_id), __file__, 
> > 

[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-03-27 Thread Jim Ingham via Phabricator via lldb-commits
jingham marked 2 inline comments as done.
jingham added inline comments.



Comment at: lldb/source/Target/TargetProperties.td:183
 Desc<"A path to a python OS plug-in module file that contains a 
OperatingSystemPlugIn class.">;
+  def PluginReportsAllThreads: Property<"plugin-reports-all-threads", 
"Boolean">,
+Global,

labath wrote:
> jingham wrote:
> > labath wrote:
> > > If this is relevant only for os plugins, then it would be good to reflect 
> > > that in the name as well.
> > I thought about that.  In the future a regular Process plugin might decide 
> > it was too expensive to report all threads as well.  There's nothing in 
> > this patch that wouldn't "just work" with that case as well.  Leaving the 
> > OS out was meant to indicate this was about how the Process plugin OR any 
> > of its helpers (e.g. the OS Plugin) produces threads.
> Well, I am hoping that if we ever extend this support to the regular process 
> plugins, we will implement it in a way where the plugin itself can tell us 
> whether it is operating/supporting this mode (which I guess would involve a 
> new gdb-remote packet and some specification of what exactly should happen 
> when it gets sent), instead of relying on the user to set this correctly.
> 
> I mean, in an ideal world this is what I would want to happen with the python 
> plugins as well, but it seems that we are stuck with some existing plugins 
> which already do that. However, I wouldn't want to plan on the same thing 
> happening again. :)
Right, I put this in mostly to help backwards compatibility.  For instance, 
another OS Plugin I know about handles some older cooperative threading scheme. 
 That one does report all threads on each stop.  I didn't want to force them to 
do anything to keep their plugin working as well as it did before.  That's also 
why I set the default to true here.

Even when we have plugins that actually support not reporting all threads, you 
could imagine somebody having a Process plugin that supports both modes - 
particularly early in the development of its support for this feature, and in 
some corner cases the "doesn't report all threads" mode has some subtle 
problem.  Having this setting will allow people to get the slower but more 
assuredly correct behavior till it works 100% reliably.  So I still think the 
setting has some value.

But I agree, going forward there should be some kind of handshake between the 
ThreadPlanStackMap and the Process Plugin, either a "I've reported all threads 
now" which could trigger a prune, or a "Is TID X still alive" which the generic 
code could use to balance the cost of keeping outdated stacks alive against 
when we want to ask about all threads.



Comment at: 
lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py:23
+
+def check_list_output(self, command, active_plans = [], completed_plans = 
[], discarded_plans = []):
+# Check the "thread plan list" output against a list of active & 
completed and discarded plans.

labath wrote:
> jingham wrote:
> > labath wrote:
> > > this looks like a model example for using `self.filecheck`
> > I don't see that.  Do you mean write a file check matching file with the 
> > contents of the three arrays in this function, and then run file check on 
> > that?  Or did you mean convert the call sites into embedded patterns and 
> > then call file check on myself?  But then I'd have to also embed the 
> > headers in the body such that they came in the right order for all the 
> > tests.  Neither of these seem like attractive options to me.  It doesn't 
> > seem like it will be hard to maintain this little checker, and I don't see 
> > what I would gain by using file check instead.
> What I meant was doing something like:
> ```
> self.filecheck("thread plan list %d"%(current_id), __file__, 
> "--check-prefix=CHECK1")
> # CHECK1:  Active plan stack:
> # CHECK1-NEXT: Element 0: Base thread plan
> # CHECK1-NEXT: Element 1: Stepping over line main.c
> # CHECK1-EMPTY:
> ```
> instead of `self.check_list_output(..., [], ["Stepping over line main.c"])`
> 
> The main advantage I see is in the error messages it produces. If 
> anything goes wrong, the most likely error message we're going to get is "9 
> != 10, Too many elements in match arrays".  The second most likely error 
> message is "False is not True, Didn't find active plan Stepping over line 
> main.c". If the same happens while using filecheck, it would print the string 
> it is trying to match, the line it is trying to match it against, and the 
> line of a "possible intended match", which makes it much easier to figure out 
> what is going on without debugging the test. Now you could embed all of that 
> into this matcher function, but then you're sort of reinventing FileCheck.
> 
> The second advantage is that it is easier to see what the "expected" output 
> is supposed to be when it is laid out like this.

[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-03-27 Thread Pavel Labath via Phabricator via lldb-commits
labath added inline comments.



Comment at: lldb/source/Target/TargetProperties.td:183
 Desc<"A path to a python OS plug-in module file that contains a 
OperatingSystemPlugIn class.">;
+  def PluginReportsAllThreads: Property<"plugin-reports-all-threads", 
"Boolean">,
+Global,

jingham wrote:
> labath wrote:
> > If this is relevant only for os plugins, then it would be good to reflect 
> > that in the name as well.
> I thought about that.  In the future a regular Process plugin might decide it 
> was too expensive to report all threads as well.  There's nothing in this 
> patch that wouldn't "just work" with that case as well.  Leaving the OS out 
> was meant to indicate this was about how the Process plugin OR any of its 
> helpers (e.g. the OS Plugin) produces threads.
Well, I am hoping that if we ever extend this support to the regular process 
plugins, we will implement it in a way where the plugin itself can tell us 
whether it is operating/supporting this mode (which I guess would involve a new 
gdb-remote packet and some specification of what exactly should happen when it 
gets sent), instead of relying on the user to set this correctly.

I mean, in an ideal world this is what I would want to happen with the python 
plugins as well, but it seems that we are stuck with some existing plugins 
which already do that. However, I wouldn't want to plan on the same thing 
happening again. :)



Comment at: 
lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py:23
+
+def check_list_output(self, command, active_plans = [], completed_plans = 
[], discarded_plans = []):
+# Check the "thread plan list" output against a list of active & 
completed and discarded plans.

jingham wrote:
> labath wrote:
> > this looks like a model example for using `self.filecheck`
> I don't see that.  Do you mean write a file check matching file with the 
> contents of the three arrays in this function, and then run file check on 
> that?  Or did you mean convert the call sites into embedded patterns and then 
> call file check on myself?  But then I'd have to also embed the headers in 
> the body such that they came in the right order for all the tests.  Neither 
> of these seem like attractive options to me.  It doesn't seem like it will be 
> hard to maintain this little checker, and I don't see what I would gain by 
> using file check instead.
What I meant was doing something like:
```
self.filecheck("thread plan list %d"%(current_id), __file__, 
"--check-prefix=CHECK1")
# CHECK1:  Active plan stack:
# CHECK1-NEXT: Element 0: Base thread plan
# CHECK1-NEXT: Element 1: Stepping over line main.c
# CHECK1-EMPTY:
```
instead of `self.check_list_output(..., [], ["Stepping over line main.c"])`

The main advantage I see is in the error messages it produces. If 
anything goes wrong, the most likely error message we're going to get is "9 != 
10, Too many elements in match arrays".  The second most likely error message 
is "False is not True, Didn't find active plan Stepping over line main.c". If 
the same happens while using filecheck, it would print the string it is trying 
to match, the line it is trying to match it against, and the line of a 
"possible intended match", which makes it much easier to figure out what is 
going on without debugging the test. Now you could embed all of that into this 
matcher function, but then you're sort of reinventing FileCheck.

The second advantage is that it is easier to see what the "expected" output is 
supposed to be when it is laid out like this, instead of embedded in the 
logic of the matcher. That may introduce some amount of repetition for 
specifying the common parts of the output, but that is not necessarily bad 
(because it's more readable). If there is a huge amount of repetition, then 
there are ways to avoid that via using multiple prefixes:
```
FileCheck --check-prefixes=ALL,FIRST
...
FileCheck --check-prefixes=ALL,SECOND
# ALL: Active plan stack:
# ALL: Element 0: Base thread plan
# FIRST: Element 1: Stepping over line main.c
# SECOND: Element 1: ???
```
But that again makes it harder to see the expected output, so it should be 
balanced against that.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814



___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-03-26 Thread Jim Ingham via Phabricator via lldb-commits
jingham added a comment.

Addressed most of the review comments.  I don't see that using file check would 
make the plan output checking any easier.  The current function seems clear and 
easy to use.  Trying to dynamically insert the passed in matches into a 
filecheck test file and then run file check on that doesn't seem like it would 
be any easier than what is here.  I don't need any of the substitutions or any 
of the other fancy things FileCheck provides, that just seems like overkill.  
But maybe I'm misunderstanding what you meant.




Comment at: lldb/source/Target/TargetProperties.td:183
 Desc<"A path to a python OS plug-in module file that contains a 
OperatingSystemPlugIn class.">;
+  def PluginReportsAllThreads: Property<"plugin-reports-all-threads", 
"Boolean">,
+Global,

labath wrote:
> If this is relevant only for os plugins, then it would be good to reflect 
> that in the name as well.
I thought about that.  In the future a regular Process plugin might decide it 
was too expensive to report all threads as well.  There's nothing in this patch 
that wouldn't "just work" with that case as well.  Leaving the OS out was meant 
to indicate this was about how the Process plugin OR any of its helpers (e.g. 
the OS Plugin) produces threads.



Comment at: 
lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py:23
+
+def check_list_output(self, command, active_plans = [], completed_plans = 
[], discarded_plans = []):
+# Check the "thread plan list" output against a list of active & 
completed and discarded plans.

labath wrote:
> this looks like a model example for using `self.filecheck`
I don't see that.  Do you mean write a file check matching file with the 
contents of the three arrays in this function, and then run file check on that? 
 Or did you mean convert the call sites into embedded patterns and then call 
file check on myself?  But then I'd have to also embed the headers in the body 
such that they came in the right order for all the tests.  Neither of these 
seem like attractive options to me.  It doesn't seem like it will be hard to 
maintain this little checker, and I don't see what I would gain by using file 
check instead.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814





[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-03-26 Thread Jim Ingham via Phabricator via lldb-commits
jingham updated this revision to Diff 252982.
jingham marked 2 inline comments as done.
jingham added a comment.

Addressed most of Pavel's review comments.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814

Files:
  lldb/include/lldb/Target/Process.h
  lldb/include/lldb/Target/Thread.h
  lldb/include/lldb/Target/ThreadPlan.h
  lldb/include/lldb/Target/ThreadPlanStack.h
  lldb/source/Commands/CommandObjectThread.cpp
  lldb/source/Commands/Options.td
  lldb/source/Target/Process.cpp
  lldb/source/Target/TargetProperties.td
  lldb/source/Target/Thread.cpp
  lldb/source/Target/ThreadList.cpp
  lldb/source/Target/ThreadPlan.cpp
  lldb/source/Target/ThreadPlanStack.cpp
  lldb/source/Target/ThreadPlanStepOut.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/Makefile
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/TestOSPluginStepping.py
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/operating_system.py
  lldb/test/API/functionalities/thread_plan/Makefile
  lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
  lldb/test/API/functionalities/thread_plan/main.c

Index: lldb/test/API/functionalities/thread_plan/main.c
===
--- /dev/null
+++ lldb/test/API/functionalities/thread_plan/main.c
@@ -0,0 +1,16 @@
+#include <stdio.h>
+
+void
+call_me(int value) {
+  printf("called with %d\n", value); // Set another here.
+}
+
+int
+main(int argc, char **argv)
+{
+  call_me(argc); // Set a breakpoint here.
+  printf("This just spaces the two calls\n");
+  call_me(argc); // Run here to step over again.
+  printf("More spacing\n");
+  return 0; // Make sure we get here on last continue
+}
Index: lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
===
--- /dev/null
+++ lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
@@ -0,0 +1,160 @@
+"""
+Test that thread plan listing, and deleting works.
+"""
+
+
+
+import lldb
+import lldbsuite.test.lldbutil as lldbutil
+from lldbsuite.test.lldbtest import *
+
+
+class TestThreadPlanCommands(TestBase):
+
+mydir = TestBase.compute_mydir(__file__)
+
+NO_DEBUG_INFO_TESTCASE = True
+
+def test_thread_plan_actions(self):
+self.build()
+self.main_source_file = lldb.SBFileSpec("main.c")
+self.thread_plan_test()
+
+def check_list_output(self, command, active_plans = [], completed_plans = [], discarded_plans = []):
+# Check the "thread plan list" output against a list of active & completed and discarded plans.
+# If all three check arrays are empty, that means the command is expected to fail. 
+
+interp = self.dbg.GetCommandInterpreter()
+result = lldb.SBCommandReturnObject()
+
+num_active = len(active_plans)
+num_completed = len(completed_plans)
+num_discarded = len(discarded_plans)
+
+interp.HandleCommand(command, result)
+if num_active == 0 and num_completed == 0 and num_discarded == 0:
+self.assertFalse(result.Succeeded(), "command: '%s' succeeded when it should have failed: '%s'"%
+ (command, result.GetError()))
+return
+
+self.assertTrue(result.Succeeded(), "command: '%s' failed: '%s'"%(command, result.GetError()))
+result_arr = result.GetOutput().splitlines()
+num_results = len(result_arr)
+
+# Match the expected number of elements.
+# Adjust the count for the number of header lines we aren't matching:
+fudge = 0
+
+if num_completed == 0 and num_discarded == 0:
+# The fudge is 3: Thread header, Active Plan header and base plan
+fudge = 3
+elif num_completed == 0 or num_discarded == 0:
+# The fudge is 4: The above plus either the Completed or Discarded Plan header:
+fudge = 4
+else:
+# The fudge is 5 since we have both headers:
+fudge = 5
+
+self.assertEqual(num_results, num_active + num_completed + num_discarded + fudge,
+ "Too many elements in match arrays")
+
+# Now iterate through the results array and pick out the results.
+result_idx = 0
+self.assertTrue("thread #" in result_arr[result_idx], "Found thread header") ; result_idx += 1
+self.assertTrue("Active plan stack" in result_arr[result_idx], "Found active header") ; result_idx += 1
+self.assertTrue("Element 0: Base thread plan" in result_arr[result_idx], "Found base plan") ; result_idx += 1
+
+for text in active_plans:
+self.assertFalse("Completed plan stack" in 

[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-03-26 Thread Pavel Labath via Phabricator via lldb-commits
labath added a comment.

I still don't know much about thread plans, but I didn't see anything that 
would stand out. All my comments are fairly superficial.




Comment at: lldb/source/Target/TargetProperties.td:183
 Desc<"A path to a python OS plug-in module file that contains a 
OperatingSystemPlugIn class.">;
+  def PluginReportsAllThreads: Property<"plugin-reports-all-threads", 
"Boolean">,
+Global,

If this is relevant only for os plugins, then it would be good to reflect that 
in the name as well.



Comment at: 
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp:1-7
+//===-- main.cpp *- C++ 
-*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM 
Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===--===//

We don't put licence headers on tests.



Comment at: 
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp:20
+#include 
+#include <unistd.h>
+

unistd.h is not portable, and it doesn't look like you are using anything from 
it.



Comment at: 
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp:33
+  g_cv.notify_one();
+  g_cv.wait(func_lock);
+}

The synchronization here seems to rely on the fact that there will be no 
spurious wakeups. The simplest way to guard against that is to pass a lambda 
into the wait function `g_cv.wait(func_lock, [&] { return g_value == 2; })`



Comment at: 
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp:38-41
+  step_out_of_here(); // Expect to stop here after step-out (clang)
+
+  // Return
+  return NULL; // Expect to stop here after step-out (icc and gcc)

The clang/gcc/icc annotations are not used, and who knows if they are even 
accurate at this point.



Comment at: 
lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py:23
+
+def check_list_output(self, command, active_plans = [], completed_plans = 
[], discarded_plans = []):
+# Check the "thread plan list" output against a list of active & 
completed and discarded plans.

this looks like a model example for using `self.filecheck`


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76814/new/

https://reviews.llvm.org/D76814





[Lldb-commits] [PATCH] D76814: Preserve ThreadPlanStacks for unreported threads

2020-03-25 Thread Jim Ingham via Phabricator via lldb-commits
jingham created this revision.
jingham added reviewers: clayborg, labath, friss.
Herald added a project: LLDB.
Herald added a subscriber: lldb-commits.
jingham added a parent revision: D75880: [NFC] Move ThreadPlans stacks into 
their own class, store it in Process by TID.

This is the final patch in the work I've done so far to support Process plugins 
(or their associated OS Plugins) not reporting all threads on every stop.

This commit moves the thread plan updating and dumping into the 
ThreadPlanStackMap.  It adds a setting 
(target.process.plugin-reports-all-threads) you can use to indicate that you 
want us to preserve unreported thread plans, and a "thread plan prune" command 
you can use to remove any orphaned plans.
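
Based on that description, a session would look something like this (the exact command output and whether `prune` takes arguments are not shown in this patch summary, so treat the transcript as illustrative):

```
# Tell lldb that the plugin may not report all threads on every stop,
# so plan stacks for unreported threads should be preserved:
(lldb) settings set target.process.plugin-reports-all-threads false

# Inspect the plan stacks that are being kept around:
(lldb) thread plan list

# Remove any plan stacks whose threads turn out to be gone:
(lldb) thread plan prune
```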

If this is approved, I'd like to check in all these pieces.  At this stage if 
you opt into preserving ThreadPlanStacks, the process is pretty minimal.  I 
want to get this more by-hand version working because there are a lot of OS 
Plugins in the wild (there's one in every mach_kernel dSYM) and we have not 
supported them well.  So I want to get something that supports the extant 
plugins.

The next step is to add a way for the plugins to sync up with the thread plans, 
and then call that at public stop time, but I'd like to do that as a separate 
patch.


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D76814

Files:
  lldb/include/lldb/Target/Process.h
  lldb/include/lldb/Target/Thread.h
  lldb/include/lldb/Target/ThreadPlan.h
  lldb/include/lldb/Target/ThreadPlanStack.h
  lldb/source/Commands/CommandObjectThread.cpp
  lldb/source/Commands/Options.td
  lldb/source/Target/Process.cpp
  lldb/source/Target/TargetProperties.td
  lldb/source/Target/Thread.cpp
  lldb/source/Target/ThreadList.cpp
  lldb/source/Target/ThreadPlan.cpp
  lldb/source/Target/ThreadPlanStack.cpp
  lldb/source/Target/ThreadPlanStepOut.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/Makefile
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/TestOSPluginStepping.py
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/main.cpp
  
lldb/test/API/functionalities/plugins/python_os_plugin/stepping_plugin_threads/operating_system.py
  lldb/test/API/functionalities/thread_plan/Makefile
  lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
  lldb/test/API/functionalities/thread_plan/main.c

Index: lldb/test/API/functionalities/thread_plan/main.c
===
--- /dev/null
+++ lldb/test/API/functionalities/thread_plan/main.c
@@ -0,0 +1,16 @@
+#include <stdio.h>
+
+void
+call_me(int value) {
+  printf("called with %d\n", value); // Set another here.
+}
+
+int
+main(int argc, char **argv)
+{
+  call_me(argc); // Set a breakpoint here.
+  printf("This just spaces the two calls\n");
+  call_me(argc); // Run here to step over again.
+  printf("More spacing\n");
+  return 0; // Make sure we get here on last continue
+}
Index: lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
===
--- /dev/null
+++ lldb/test/API/functionalities/thread_plan/TestThreadPlanCommands.py
@@ -0,0 +1,160 @@
+"""
+Test that thread plan listing, and deleting works.
+"""
+
+
+
+import lldb
+import lldbsuite.test.lldbutil as lldbutil
+from lldbsuite.test.lldbtest import *
+
+
+class TestThreadPlanCommands(TestBase):
+
+mydir = TestBase.compute_mydir(__file__)
+
+NO_DEBUG_INFO_TESTCASE = True
+
+def test_thread_plan_actions(self):
+self.build()
+self.main_source_file = lldb.SBFileSpec("main.c")
+self.thread_plan_test()
+
+def check_list_output(self, command, active_plans = [], completed_plans = [], discarded_plans = []):
+# Check the "thread plan list" output against a list of active & completed and discarded plans.
+# If all three check arrays are empty, that means the command is expected to fail. 
+
+interp = self.dbg.GetCommandInterpreter()
+result = lldb.SBCommandReturnObject()
+
+num_active = len(active_plans)
+num_completed = len(completed_plans)
+num_discarded = len(discarded_plans)
+
+interp.HandleCommand(command, result)
+if num_active == 0 and num_completed == 0 and num_discarded == 0:
+self.assertFalse(result.Succeeded(), "command: '%s' succeeded when it should have failed: '%s'"%
+ (command, result.GetError()))
+return
+
+self.assertTrue(result.Succeeded(), "command: '%s' failed: '%s'"%(command, result.GetError()))
+result_arr = result.GetOutput().splitlines()
+num_results = len(result_arr)
+
+# Match the expected number of elements.
+# Adjust the count for the number of header lines we aren't matching:
+fudge = 0
+
+if num_completed == 0 and num_discarded == 0:
+