Re: [lldb-dev] Mailman->Discourse Migration on February 1, 10am PST

2022-02-01 Thread Tanya Lattner via lldb-dev
As a reminder, this will be happening this morning. 

Thanks,
Tanya

> On Jan 29, 2022, at 8:29 AM, Tanya Lattner  wrote:
> 
> LLVM Community,
> 
> As referenced in this blog post, we are getting close to the deadline for 
> migrating some Mailman lists to Discourse. If you are receiving this email 
> from an LLVM Mailman list, then this list will be migrating to Discourse on 
> February 1st.
> 
> We have gone through several test runs and feel that we are ready to do the 
> final migration on February 1st starting at 10 am PST. Once the migration 
> begins, the impacted Mailman lists and Discourse will be read only until it 
> has completed.
> 
> Here are the steps of the migration process on February 1st at 10 am PST:
> 1. The Mailman lists that are migrating will be placed in “read-only” mode. 
>    No mail will be accepted to the list.
> 2. Mailman will be shut down while the final archives are collected. We 
>    expect this downtime to be about 15 minutes.
> 3. The Mailman archives are sent to Discourse for migration (15-20 minutes 
>    for data transfer).
> 4. The LLVM Discourse is put into read-only mode. Given the size of our 
>    archives, we expect the import to take 5 hours.
> 5. Sanity checks will be made to ensure that everything looks as expected. 
>    We estimate this will take 1 hour or less.
> 6. The LLVM Discourse will be opened up again with all the archives imported.
> 7. We will post on Discourse about how things went and any next steps in the 
>    “Announcements” category.
> 
> We will use the Discourse Migration website to communicate where we are in 
> the process.
> 
> We expect the LLVM Discourse to open by 5pm PST, but realize that even with 
> the best plans, unexpected situations may arise that cause the migration to 
> take longer or force it to be stopped. If that occurs, we will evaluate the 
> best course of action and communicate with the community as described above.
> 
> If you have any questions, please let me know.
> 
> Thanks,
> Tanya

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Mailman->Discourse Migration on February 1, 10am PST

2022-01-31 Thread Tanya Lattner via lldb-dev
Thank you Paul for pointing this out. I will get this information updated 
tonight.

-Tanya

> On Jan 29, 2022, at 9:58 AM, Paul Smith  wrote:
> 
> On Sat, 2022-01-29 at 08:29 -0800, Tanya Lattner via lldb-dev wrote:
>> We will use the Discourse Migration website to communicate where we
>> are in the process.
> 
> Just to point out that the "Setting up email interactions" section on
> this page could use some attention.
> 
> For example the first bullet links to a Mozilla help page which is
> obsolete; it describes modifying user preferences which don't exist in
> the current LLVM Discourse (maybe LLVM is using a newer version?), or
> at least they don't exist in my account.  I can't find any setting
> related to "Send me email notifications when I am active on the site",
> nor can I find any setting like "Mark posts as read when I'm emailed
> about them".
> 
> Also the link at "Quoting previous topics in a reply" points to an
> issue where the answer appears to be changing a site-wide setting, not
> a per-user setting, so there's not much that we can do about it
> individually.
> 



Re: [lldb-dev] Accessing attached process environment variables with SBAPI

2022-01-31 Thread Jim Ingham via lldb-dev
The SBEnvironment classes and the setting for the environment are currently 
used just for launching processes.  lldb doesn’t keep track of the “live” state 
of the environment in a process - which can change as the program runs.  It 
would certainly be useful to have a “printenv” function in lldb that fetched 
the current process environment, however.  Please file an enhancement request 
or propose a patch.
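
In the meantime, the two-step workaround in the quoted message below can usually be collapsed into a single command, since lldb already knows how to print a char pointer as a string (a sketch; it assumes a live process in which getenv can be called):

```
(lldb) expression -- (const char *)getenv("PRINT_ME")
```

This evaluates in the inferior, so the usual caveats about running expressions apply.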

Jim


> On Jan 28, 2022, at 4:35 PM, Ivan Hernandez via lldb-dev 
>  wrote:
> 
> Hi all,
> 
> I'm trying to read the value of an environment variable that a process was 
> launched with, but to which lldb attached after it launched. SBEnvironment 
> looked interesting, but I tried using 
> ```
> script print(lldb.debugger.GetSelectedTarget().GetEnvironment().Get("PRINT_ME"))
> ```
> and that prints 'None'. 
> 
> I am able to get the value using
> ```
> script addr = lldb.debugger.GetSelectedTarget().EvaluateExpression("(char 
> *)getenv(\"PRINT_ME\")").GetValueAsUnsigned()
> script err = lldb.SBError()
> script 
> print(lldb.debugger.GetSelectedTarget().GetProcess().ReadCStringFromMemory(addr,
>  1024, err))
> ```
> but that seems like overkill for reading an environment variable. Is there a 
> better way to do this that I'm missing?
> 
> I'm using the following program to quickly test things:
> ```
> #include <signal.h>
> #include <stdio.h>
> #include <stdlib.h>
> int main() {
>   raise(SIGSTOP);
>   printf("%s\n", getenv("PRINT_ME"));
> }
> ```



Re: [lldb-dev] Mailman->Discourse Migration on February 1, 10am PST

2022-01-29 Thread Paul Smith via lldb-dev
On Sat, 2022-01-29 at 08:29 -0800, Tanya Lattner via lldb-dev wrote:
> We will use the Discourse Migration website to communicate where we
> are in the process.

Just to point out that the "Setting up email interactions" section on
this page could use some attention.

For example the first bullet links to a Mozilla help page which is
obsolete; it describes modifying user preferences which don't exist in
the current LLVM Discourse (maybe LLVM is using a newer version?), or
at least they don't exist in my account.  I can't find any setting
related to "Send me email notifications when I am active on the site",
nor can I find any setting like "Mark posts as read when I'm emailed
about them".

Also the link at "Quoting previous topics in a reply" points to an
issue where the answer appears to be changing a site-wide setting, not
a per-user setting, so there's not much that we can do about it
individually.



[lldb-dev] Mailman->Discourse Migration on February 1, 10am PST

2022-01-29 Thread Tanya Lattner via lldb-dev
LLVM Community,

As referenced in this blog post, we are getting close to the deadline for 
migrating some Mailman lists to Discourse. If you are receiving this email 
from an LLVM Mailman list, then this list will be migrating to Discourse on 
February 1st.

We have gone through several test runs and feel that we are ready to do the 
final migration on February 1st starting at 10 am PST. Once the migration 
begins, the impacted Mailman lists and Discourse will be read only until it has 
completed.

Here are the steps of the migration process on February 1st at 10 am PST:
1. The Mailman lists that are migrating will be placed in “read-only” mode. 
   No mail will be accepted to the list.
2. Mailman will be shut down while the final archives are collected. We 
   expect this downtime to be about 15 minutes.
3. The Mailman archives are sent to Discourse for migration (15-20 minutes 
   for data transfer).
4. The LLVM Discourse is put into read-only mode. Given the size of our 
   archives, we expect the import to take 5 hours.
5. Sanity checks will be made to ensure that everything looks as expected. 
   We estimate this will take 1 hour or less.
6. The LLVM Discourse will be opened up again with all the archives imported.
7. We will post on Discourse about how things went and any next steps in the 
   “Announcements” category.

We will use the Discourse Migration website to communicate where we are in 
the process.

We expect the LLVM Discourse to open by 5pm PST, but realize that even with 
the best plans, unexpected situations may arise that cause the migration to 
take longer or force it to be stopped. If that occurs, we will evaluate the 
best course of action and communicate with the community as described above.

If you have any questions, please let me know.

Thanks,
Tanya


[lldb-dev] Accessing attached process environment variables with SBAPI

2022-01-28 Thread Ivan Hernandez via lldb-dev
Hi all,

I'm trying to read the value of an environment variable that a process was
launched with, but to which lldb attached after it launched.
SBEnvironment looked interesting, but I tried using
```
script print(lldb.debugger.GetSelectedTarget().GetEnvironment().Get("PRINT_ME"))
```
and that prints 'None'.

I am able to get the value using
```
script addr = lldb.debugger.GetSelectedTarget().EvaluateExpression("(char
*)getenv(\"PRINT_ME\")").GetValueAsUnsigned()
script err = lldb.SBError()
script
print(lldb.debugger.GetSelectedTarget().GetProcess().ReadCStringFromMemory(addr,
1024, err))
```
but that seems like overkill for reading an environment variable. Is there
a better way to do this that I'm missing?

I'm using the following program to quickly test things:
```
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
int main() {
  raise(SIGSTOP);
  printf("%s\n", getenv("PRINT_ME"));
}
```


Re: [lldb-dev] Semantics of SBValue::CreateChildAtOffset

2022-01-28 Thread Greg Clayton via lldb-dev


> On Oct 22, 2021, at 6:12 AM, Pavel Labath via lldb-dev 
>  wrote:
> 
> Hello Jim, everyone,
> 
> I recently got a question/bug report about python pretty printers (synthetic 
> child providers) that I couldn't answer.
> 
> The actual script is of course more complicated, but the essence boils down 
> to this.
> 
> There's a class, something like:
> struct S {
>  ...
>  T member;
> };
> 
> The pretty printer tries to print this type, and it does something like:
> def get_child_at_index(self, index):
> if index == 0:
> child = self.sbvalue.GetChildMemberWithName("child")
> return child.CreateChildAtOffset("[0]", 0, T2)
> 
> 
> Now here comes the interesting part. The exact behaviour here depends on the 
> type T. If T (and of course, in the real example this is a template) is a 
> plain type, then this behaves like a bitcast, so the synthetic child is 
> essentially *reinterpret_cast<T2*>(&s.member).
> 
> *However*, if T is a pointer, then lldb will *dereference* it before 
> performing the cast, giving something like
>   *reinterpret_cast<T2*>(s.member) // no &
> as a result.
> 
> The first question that comes to mind is: Is this behavior intentional or a 
> bug?
> 
> At first it seemed like this is too subtle to be a bug, but the more I thought 
> about it, the less I was sure about the CreateChildAtOffset function as a 
> whole.
> 
> What I mean is, this pretty printer is essentially creating a child for a value 
> that it is not printing. That seems like a bad idea in general, although I 
> wasn't able to observe any ill effects (e.g. when I print s.member directly, 
> I don't see any bonus children). Then I looked at some of the in-tree 
> pretty-printers, and I did find this pattern at least two libc++ printers 
> (libcxx.py:125 and :614), although they don't suffer from this ambiguity, 
> because the values they are printing are always pointers.
> 
> However, that means I absolutely don't know what is the expected behavior 
> here:
> - Are pretty printers allowed to call CreateChildAtOffset on values they are 
> not printing

This should be fine yes AFAIK.

> - Is CreateChildAtOffset supposed to behave differently for pointer types?

yes. It all comes down to what a child of a specific type would be. For 
pointers or references, you do want it to be at an offset from what it is 
pointing to. If you have an array of bytes, then you want it to be an offset 
within that array. 

It would be really bad to take a pointer, create a child at offset, and not 
follow the pointer, instead taking the next thing past the pointer. Like if you have:

struct A {
  Foo *ptr1;
  Bar *ptr2;
};

And if we have "ptr1" in an SBValue and ask it to CreateChildAtOffset(), we 
would never really want to access "ptr2", because we could just ask for "ptr2" 
if we wanted it. 

> 
> I'd appreciate any insight,
> Pavel



Re: [lldb-dev] Why can't I break on an address resulting in unresolved?

2022-01-28 Thread Greg Clayton via lldb-dev
You can set breakpoints at addresses prior to running a process, but ASLR will 
shift shared libraries around each time you run, so it really isn't safe to set 
these. If you do disable ASLR and are able to debug, just reverse your 
statement and do the "process launch" first:

process launch --stop-at-entry --disable-aslr true
br s -a 0x7fff5fc01031
br s -a 0x7fff5fc01271
br s -a 0x7fff5fc05bdc


Why is this? These addresses mean nothing before the process is launched, since 
what you are specifying is a "load address". Before you run, each shared library 
hasn't been loaded at a specific location yet, which means that if you did a 
"br s -a 0x1000", this address could match every shared library, since each 
shared library could have a "file address" of 0x1000. 
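
The "file address" vs. "load address" relationship Greg describes can be sketched as follows (illustrative only; lldb's real section and slide bookkeeping is richer):

```python
def resolve_load_address(file_addr, slide):
    """A module-relative 'file address' only becomes a process-wide
    'load address' once the module's load base (slide) is known, i.e.
    after the dynamic loader has placed the module. Before launch there
    is no slide, so an address breakpoint stays unresolved."""
    if slide is None:
        return None  # process not launched yet: breakpoint unresolved
    return file_addr + slide

assert resolve_load_address(0x1031, None) is None           # before launch
assert resolve_load_address(0x1031, 0x7fff5fc00000) == 0x7fff5fc01031
```
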

> On Nov 17, 2021, at 4:23 AM, Pi Pony via lldb-dev  
> wrote:
> 
> Hello,
> 
> why can't lldb break on an address? What does it say when it says 
> unresolved? And how can I fix it?
> 
> Thanks in advance
> 
> See this for more details: https://bugs.llvm.org/show_bug.cgi?id=22323 



Re: [lldb-dev] [RFC] lldb integration with (user mode) qemu

2022-01-28 Thread Greg Clayton via lldb-dev


> On Oct 28, 2021, at 6:33 AM, Pavel Labath via lldb-dev 
>  wrote:
> 
> Hello everyone,
> 
> I'd like to propose a new plugin for better lldb+qemu integration.
> 
> As you're probably aware qemu has an integrated gdb stub. Lldb is able
> to communicate with it, but currently this process is somewhat tedious.
> One has to manually start qemu, giving it a port number, and then
> separately start lldb, and have it connect to that port.
> 
> The chief purpose of this feature would be to automate this behavior,
> ideally to the point where one can just point lldb to an executable,
> type "run", and everything would just work. It would take the form of a
> platform plugin (PlatformQemuUser, perhaps). This would be a non-host,
> always-connected plugin, and its heart would be the DebugProcess
> method, which would ensure the emulator gets started when the user wants
> to start debugging. It would operate the same way as our host platforms
> do, except that it would start qemu instead of debugserver/lldb-server. Most
> of the other methods would be implemented by delegating to the host
> platform (as the process will be running on the host), possibly with
> some minor adjustments like prepending sysroot to the paths, etc. (My
> initial proof-of-concept implementation was 200 LOC.)

> The plugin would be configured via multiple settings, which would let
> the user specify the path to the emulator, the kind of cpu it should
> emulate, the path to the system libraries, and any other arguments
> that the user wishes to pass to the emulator. The user could then
> configure it in their lldbinit file to match their system setup.

Yeah, I would create a "PlatformQemuEmulator" and allow multiple instances of 
this to be created. The setup for the architecture would then happen during the 
"platform connect" command. The "platform connect" command has different 
options for each platform, so you can customize the platform connect options to 
make sense for QEMU. Something like:

(lldb) platform select qemu-emulator
(lldb) platform connect --arch arm64 --sysroot /path/to/arm64/qemu/sysroot 
--emulator-path /path/to/arm64/emulator ...
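
For context, the manual workflow that the proposed plugin would automate looks roughly like this today (the port number and qemu binary name are illustrative):

```
$ qemu-aarch64 -g 1234 ./a.out &     # start qemu with its gdb stub listening
$ lldb ./a.out
(lldb) gdb-remote 1234               # attach lldb to qemu's gdb stub
```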

> 
> 
> The needs of this plugin should match the existing Platform abstraction
> fairly well, so I don't anticipate (*) the need to add new entry points
> or modify existing ones.

Totally fine to add new virtual functions if necessary.

> There is one tricky aspect which I see, and it
> relates to platform selection. Our current platform selection code gives
> each platform instance (while preferring the current platform) a chance
> to "claim" an executable, and aborts if the choice is ambiguous. The
> introduction of a qemu platform would introduce such an ambiguity, since
> (when running on a linux host) a linux executable would be claimed by
> both the qemu plugin and the existing remote-linux platform. This would
> prevent "target create arm-linux.exe" from working out-of-the-box.
> 
> To resolve this, I'd like to create some kind of a mechanism to give
> preference to some plugin. This could either be something internal,
> where a plugin indicates "strong" preference for an executable (the qemu
> platform could e.g. do this when the user sets the emulator path, the
> remote platform when it is connected), or some external mechanism like a
> global setting giving the preferred platform order. I'd very much like
> hear your thoughts on this.

Seems like selecting the platform first and then connecting to it, specifying 
the architecture in the "platform connect --arch <arch>" options, would solve 
this.

> I'm also not sure how to handle the case of multiple emulated
> architectures. Qemu can emulate any processor architecture (of those
> that lldb supports, anyway), but the path to the emulator, sysroot, and
> probably other settings as well are going to be different. I see two
> possible ways to go about this:
> 
> a) have just a single set of settings, effectively limiting the user to
> emulating just a single architecture per session. While it would most
> likely be enough for most use cases, this kind of limitation seems
> artificial. It would also likely require the introduction of another
> setting, which would specify which architecture the plugin should
> actually emulate (and return from GetSupportedArchitectureAtIndex,
> etc.). On the flip side, this would be consistent with the how our
> remote-plugins work, although there it is given by the need to connect
> to something, and the supported architecture is then determined by the
> remote machine.
> 
> b) have multiple platform instances, one for each architecture. This would
> be a more general solution, but it would mean that our "platform list"
> output 

Re: [lldb-dev] No script in lldb of build

2022-01-28 Thread Greg Clayton via lldb-dev
I have had to add the following to my cmake command line:

-DPython3_EXECUTABLE=/usr/bin/python3



> On Dec 5, 2021, at 12:02 PM, Pi Pony via lldb-dev  
> wrote:
> 
> Hello,
> 
> I build lldb for macOS and tried to get into script but I get this error 
> message: there is no embedded script interpreter in this mode.
> 
> I appreciate any help you can provide
> 
> 



Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2022-01-28 Thread Greg Clayton via lldb-dev
I am fine with a new plug-in to handle this, but I want to verify a few things 
first:

Can this core dump file format basically allow debugging of multiple targets? 
For example, could you examine the kernel itself as is, but also provide a 
view into any of the user space processes that exist? Mach-o kernel dumps can 
currently do this, but I am not sure how much of this code is public. The 
idea was you connect to the kernel dump, but you can create new targets that 
represent each user space process as its own target within LLDB. The Apple 
tool would vend a new GDB remote protocol connection for each user space 
process; all memory reads asked of that per-process connection would be 
translated correctly using the TLB entries in the kernel, giving the user a 
user-space view of that process. 

So the idea is connect to the kernel core file and display only the things that 
belong to the kernel, including all data structures and kernel threads in the 
target that represents the kernel. Have a way to list all of the user space 
processes that can have targets created so that each user space process can be 
debugged by a separate target in LLDB.

The natural area to do this would be with a new lldb_private::Platform, or 
extending the existing PlatformFreeBSD. If you did a "platform select 
remote-freebsd", followed by a "platform connect --kernel-core-file 
/path/to/kernel/core.file", then the platform can be asked to list all 
available processes, one of which will be the kernel itself, and one process 
for each user space process that can have a target created for it. Then you can 
"process attach --pid " to attach to the kernel (we would need to make up 
a process ID for the kernel, and use the native process ID for all user space 
processes). The the new core file plug-in can be used to create a 
ProcessFreeBSDKernelCore instance that can be created and knows how to 
correctly answer all of the process questions for the targeted process.



> On Nov 30, 2021, at 5:49 AM, Michał Górny via lldb-dev 
>  wrote:
> 
> Hi,
> 
> I'm working on a FreeBSD-sponsored project aiming at improving LLDB's
> support for debugging FreeBSD kernel to achieve feature parity with
> KGDB.  As a part of that, I'd like to improve LLDB's ability of working
> with kernel coredumps ("vmcores"), plus add the ability to read kernel
> memory via special character device /dev/mem.
> 
> 
> The FreeBSD kernel supports two coredump formats that are of interest to
> us:
> 
> 1. The (older) "full memory" coredumps that use an ELF container.
> 
> 2. The (newer) minidumps that dump only the active memory and use
> a custom format.
> 
> At this point, LLDB recognizes the ELF files but doesn't handle them
> correctly, and outright rejects the FreeBSD minidump format.  In both
> cases some additional logic is required.  This is because kernel
> coredumps contain physical contents of memory, and for user convenience
> the debugger needs to be able to read memory maps from the physical
> memory and use them to translate virtual addresses to physical
> addresses.
> 
> Unless I'm mistaken, the rationale for using this format is that
> coredumps are -- after all -- usually created when something goes wrong
> with the kernel.  In that case, we want the process for dumping core to
> be as simple as possible, and coredumps need to be small enough to fit
> in swap space (that's where they're being usually written).
> The complexity of memory translation should then naturally fall into
> userspace processes used to debug them.
> 
> FreeBSD (following Solaris and other BSDs) provides a helper libkvm
> library that can be used by userspace programs to access both coredumps
> and running kernel memory.  Additionally, we have split the routines
> related to coredumps and made them portable to other operating systems
> via libfbsdvmcore [1].  We have also included a program that can convert
> minidump into a debugger-compatible ELF core file.
> 
> 
> We'd like to discuss the possible approaches to integrating this
> additional functionality to LLDB.  At this point, our goal is to make it
> possible for LLDB to correctly read memory from coredumps and live
> system.
> 
> 
> Plan A: new FreeBSDKernel plugin
> 
> I think the preferable approach is to write a new plugin that would
> enable out-of-the-box support for the new functions in LLDB.  The plugin
> would be based on using both libraries.  When available, libfbsdvmcore
> will be used as the primary provider for vmcore support on all operating
> systems.  Additionally, libkvm will be usable on FreeBSD as a fallback
> provider for coredump
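
The virtual-to-physical translation step Michał describes earlier in the quoted message can be sketched as follows (a toy single-level page table; real kernels use multi-level tables that the debugger must read out of the dump itself):

```python
PAGE = 4096  # assume 4 KiB pages

def virt_to_phys(page_table, vaddr):
    """Translate a virtual address using a mapping of virtual page
    numbers to physical page numbers recovered from the coredump."""
    vpn, offset = divmod(vaddr, PAGE)
    if vpn not in page_table:
        raise LookupError("address not captured in the dump")
    return page_table[vpn] * PAGE + offset

table = {0x7f123: 0x00042}  # one recovered virtual->physical page mapping
assert virt_to_phys(table, 0x7f123 * PAGE + 0x10) == 0x42 * PAGE + 0x10
```
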

Re: [lldb-dev] Source-level stepping with emulated instructions

2022-01-28 Thread Greg Clayton via lldb-dev
We just need to specify that these emulated-instruction address ranges have 
symbols, and that the type of these symbols is set to 
"eSymbolTypeTrampoline". We run into a similar case when you are stepping 
through the PLT entries for external functions. If your main binary has a 
"printf" symbol which is the trampoline to the actual "printf" function, the 
stepping logic will just continue through this code until it gets out of the 
address range.
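
The stepping rule Greg describes can be sketched as follows (illustrative only; the real logic lives in lldb's thread-plan machinery):

```python
TRAMPOLINE = "eSymbolTypeTrampoline"  # lldb's symbol type for stubs

def should_keep_stepping(pc, stmt_range, symbols):
    """Keep instruction-stepping while the PC is inside the source
    statement's address range, or inside a trampoline symbol such as a
    PLT entry or an emulated-instruction block."""
    lo, hi = stmt_range
    if lo <= pc < hi:
        return True
    return any(start <= pc < end and kind == TRAMPOLINE
               for start, end, kind in symbols)

symbols = [(0x4000, 0x4040, TRAMPOLINE)]  # emulated-instruction range
assert should_keep_stepping(0x1008, (0x1000, 0x1010), symbols)   # in stmt
assert should_keep_stepping(0x4010, (0x1000, 0x1010), symbols)   # in stub
assert not should_keep_stepping(0x5000, (0x1000, 0x1010), symbols)
```
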



> On Jan 14, 2022, at 10:49 PM, Kjell Winblad via lldb-dev 
>  wrote:
> 
> Hi!
> 
> I'm implementing LLDB support for a new processor architecture that
> the company I'm working for has created. The processor architecture
> has a few emulated instructions. An emulated instruction works by
> jumping to a specific address that contains the start of a block of
> instructions that emulates the emulated instructions. The emulated
> instructions execute with interrupts turned off to be treated as
> atomic by the programmer. So an emulated instruction is similar to a
> function call. However, the address that the instruction jumps to is
> implicit and not specified by the programmer.
> 
> I'm facing a problem with the emulated instructions when implementing
> source-level stepping (the LLDB next and step commands) for C code in
> LLDB. LLDB uses hardware stepping to step through the address range
> that makes up a source-level statement. This algorithm works fine
> until the PC jumps to the start of the block that implements an
> emulated instruction. Then LLDB stops because the PC exited the
> address range for the source-level statement. This behavior is not
> what we want. Instead, LLDB should ideally step through the emulation
> instructions and continue until the current source-level statement has
> been completed.
> 
> My questions are:
> 
> 1. Is there currently any LLDB plugin functionality or special DWARF
> debug information to handle the kind of emulated instructions that I
> have described? All the code for the emulated instructions is within
> the same address range that does not contain any other code.
> 2. If the answer to question 1 is no, do you have suggestions for
> extending LLVM to support this kind of emulated instructions?
> 
> Best regards,
> Kjell Winblad



Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-28 Thread Greg Clayton via lldb-dev
The other idea would be to allow the Platform subclasses to be able to fill in 
some fixed variable names when asked.

So if the user typed either:

(lldb) frame variable $platform.siginfo
(lldb) expression $platform.siginfo

We would have the name lookup mechanism check with the current target's 
platform whenever the string starts with "$platform", and ask it _first_ 
whether it knows anything about this name. If the linux platform recognizes 
it, it can create one however it wants to and return the variable all filled 
in. We could use this same mechanism for other things, like the uncaught C++ 
exceptions (GDB has a way of vending info on these exceptions in certain 
cases too).
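
The dispatch Greg proposes could be sketched like this (hypothetical names throughout; nothing here exists in lldb today):

```python
class FakeLinuxPlatform:
    """Stand-in for a Platform subclass that can vend fixed variables."""
    def get_platform_variable(self, name):
        if name == "siginfo":
            return {"si_signo": 11, "si_code": 1}  # made-up contents
        return None

def resolve_name(name, platform):
    # Route "$platform.<name>" to the target's platform first; anything
    # else falls through to the normal variable lookup (None here).
    if name.startswith("$platform."):
        return platform.get_platform_variable(name.split(".", 1)[1])
    return None

platform = FakeLinuxPlatform()
assert resolve_name("$platform.siginfo", platform) == {"si_signo": 11, "si_code": 1}
assert resolve_name("argc", platform) is None   # normal lookup path
```
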


> On Jan 13, 2022, at 9:09 AM, Jim Ingham via lldb-dev 
>  wrote:
> 
> You are really going to make a lldb_private::CompilerType, since that’s what 
> backs the Type & ultimately the SBTypes.  There’s a self-contained example 
> where we make a CompilerType to represent the pairs in the synthetic child 
> provider for NSDictionaries in the function GetLLDBNSPairType in 
> NSDictionary.cpp.  And then you can follow the use of that function to see 
> how that gets turned into a Type.
> 
> Also, the whole job of the DWARF parser is to make up CompilerTypes out of 
> information from external sources, so if you need other examples for how to 
> add elements to a CompilerType the DWARF parser is replete with them.
> 
> Jim
> 
>> On Jan 13, 2022, at 4:03 AM, Michał Górny  wrote:
>> 
>> On Wed, 2022-01-12 at 11:22 -0800, Jim Ingham wrote:
>>> If we can’t always get our hands on the siginfo type, we will have to cons 
>>> that type up by hand.  But we would have had to do that if we were 
>>> implementing this feature in the expression parser anyway, and we already 
>>> hand-make types to hand out in SBValues for a bunch of the synthetic child 
>>> providers already, so that’s a well trodden path.
>> 
>> Could you point me to some example I could base my code on?  ;-)
>> 
>> -- 
>> Best regards,
>> Michał Górny
>> 
> 



Re: [lldb-dev] problems using EvaluateExpression in lldb, when it creates new object

2022-01-28 Thread Greg Clayton via lldb-dev




> On Jan 14, 2022, at 10:13 AM, fhjiwerfghr fhiewrgfheir via lldb-dev 
>  wrote:
> 
> I'm sorry in advance if this is not the correct mailing list; there doesn't 
> seem to be an lldb-usage mailing list.
> 
> I'm writing a pretty-printer python script which, to cut to the chase, 
> pretty-prints members of a class by using EvaluateExpression and creating a 
> new object inside it. It doesn't seem to work - I'm getting an "<incomplete 
> type>" error. Should my idea work in the first place and this is a bug, or 
> shouldn't it, and I need to find a different solution?

This might be a case where -flimit-debug-info, the default setting for things 
on linux, is getting in the way. Can you try specifying -glldb on the command 
line and see if that fixes things? The default case for linux is for the 
compiler to not emit debug info for types that might exist in other frameworks. 
This was an effort to reduce .o file sizes. In this case, you might be running 
into a case where the debug info for std::string_view is being emitted as a 
forward declaration, so the expression parser can't end up using it. The best 
way to tell is to make a local "std::string_view" object that has valid 
contents and see if you can expand the variable in the debugger. I am not a fan 
of this approach of omitting debug info that the debugger needs, but it is 
unfortunately the default setting for linux compiles.


The other thing to mention is it is a really bad idea to run expressions as 
part of data formatters or summary providers. Why? Expressions can end up 
resuming all threads if the expression deadlocks for any reason or if the 
expression doesn't complete. Of course if you have properties that are 
generated, you don't have much of a choice, so try to only use expressions if 
you know the expression won't ever deadlock due to another thread holding a 
mutex or other locking mechanism. If you can, best to try and just access 
existing data within an object if at all possible.

> 
> I'm attaching a repro case:
> 
> clang++ q.cpp -g -o o -std=c++20
> lldb o
> command script import lldb_script.py
> br set --file q.cpp --line 19
> r
> print c
> 
> 
> it prints:
> (lldb) print c
> (C) $0 = CCC {
>= 
> }
> 
> it should something akin to:
> (lldb) print c
> (C) $0 = CCC {
>   b   = B {
> a = A {
>   id = "qwerty"
> }
>   }
> }
> 
> 
> 



Re: [lldb-dev] Multiple platforms with the same name

2022-01-28 Thread Pavel Labath via lldb-dev
I'm sorry for the slow response. I had to attend to some other things 
first. It sounds like there's agreement to support multiple platform 
instances, so I'll try to move things in that direction.


Further responses inline

On 20/01/2022 01:19, Greg Clayton wrote:




On Jan 19, 2022, at 4:28 AM, Pavel Labath  wrote:

On 19/01/2022 00:38, Greg Clayton wrote:

Platforms can contain connection-specific settings and data. You might want to create two 
different "remote-linux" platforms and connect each one to a different remote 
linux machine. Each target which uses this platform would be able to fetch files, 
resolve symbol files, get OS version/build string/kernel info, and get/set the working 
directory from the remote server it is attached to. Since each platform tends to belong 
to a target, and since you might want to create two different targets and have each one 
connected to a different remote machine, I believe it is fine to have multiple instances.
I would vote to almost always create a new instance unless it is the host 
platform. Though it should be possible to create two targets and possibly set 
the platform on one target using the platform from another that might already 
be connected.
I am open to suggestions if anyone has any objections.
Greg


I agree that permitting multiple platforms would be a more principled position, 
but it was not clear to me if that was ever planned to be the case.


This code definitely evolved as time went on. Then we added the remote 
capabilities. As Jim said, there are two parts for the platform that _could_ be 
separated: PlatformLocal and PlatformRemote. Horrible names that can be 
improved upon, I am sure, but just names I quickly came up with.

PlatformLocal would be "what can I do for a platform that only involves finding 
things on this machine for supporting debugging on a remote platform". This would 
involve things like:
- where are remote files cached on the local machine for easy access
- where can I locate SDK/NDK stuff that might help me for this platform
- what architectures/triples are supported by this platform so it can be 
selected
- how to start a debug session for a given binary (which might use parts of 
PlatformRemote) as platforms like "iOS-simulator" do not require any remote 
connections to be able to start a process. Same could happen for VM based debugging on a 
local machine.

PlatformRemote
- get/put files
- get/set working directory
- install executable so OS can see/launch it
- create/delete directories

So as things evolved, everything got thrown into the Platform class and we just 
made things work as we went. I am sure this can be improved.
I actually have a branch where I've tried to separate the local and 
remote cases, and remove the if(IsHost()) checks everywhere, but I 
haven't yet found the time to clean it up and send an RFC.






If it was (or if we want it to be), then I think we need to start making bigger distinctions 
between the platform plugins (classes), and the actual instantiations of those classes. Currently 
there is no way to refer to "older" instances of the platforms as they all share the same 
name (the name of the plugin). Like, you can enumerate them through 
SBDebugger.GetPlatformAtIndex(), but that's about the only thing you can do as all the interfaces 
(including the SB ones) take a platform _name_ as an argument. This gets particularly confusing as 
in some circumstances we end up choosing the newer one (e.g. if it's the "current" 
platform) and sometimes the older.

If we want to do that, then this is what I'd propose:
a) Each platform plugin and each platform instance gets a name. We enforce the 
uniqueness of these names (within their category).


Maybe it would be better to maintain the name, but implement an instance 
identifier for each platform instance?
I'm not sure what you mean by that. Or, if you mean what I think you 
mean, then we're actually in agreement. Each platform plugin (class) 
gets a name (or identifier, or whatever we want to call it), and each 
instance of that class gets a name as well.


Practically speaking, I think we could reuse the existing GetPluginName 
and GetName (currently hardwired to return GetPluginName()). The former 
would return the plugin name, and the latter would give the "instance 
identifier".





b) "platform list" outputs two blocks -- the list of available plugins and the 
list of plugin instances


If we added an instance identifier, then we could just show the available 
plug-in names followed by their instances?
Yes, that would just be a different (and probably better) way of 
displaying the same information. We can definitely do that.





c) a new "platform create" command to create a platform
  - e.g. "platform create my-arm-test-machine --plugin remote-linux"


Now we are assuming you want to connect to a remote machine when we create the platform? "platform 
connect" can be used currently if we want to actually connect to a remote platform, but 

Re: [lldb-dev] lldb-vscode plugin information for Windows/Arm platform

2022-01-21 Thread Ted Woodward via lldb-dev
The last 2 Hexagon release trains have shipped with vscode support on Linux and 
Windows. I worked with Greg to do what you want to do – make a vscode extension 
to allow it to use lldb-vscode as its debugger.

I wrote a batch script that:

  *   Removes the extension directory
  *   Creates the extension directory 
(%USERPROFILE%\.vscode\extensions\qualcomm-hexagon.lldb-vscode-8.5)
  *   Copies package.json to the extension directory
  *   Copies a dragon image I made (based on one from the llvm site) to 
\images
  *   Uses mklink /j to link the bin and lib directories from my toolset 
installation

The internal name of the extension defaults to lldb-vscode. I change this to 
include our release number, since it’s valid to have multiple toolset 
installations that are installed where the user wants, and we don’t want 8.5 to 
conflict with 8.6, etc.

Because it’s valid to install multiple releases, I link to the bin and lib 
directories for this extension’s installation. This also lets me have release 
and debug extensions – copy the release extension, change the internal plugin 
name in package.json, and point the links to my bin and lib directories. Then I 
can change the plugin name in my testcase’s tasks.json to point to my debug or 
release extension.

If you have any questions, please feel free to contact me.

On Jan 20, 2022, at 4:40 PM, Omair Javaid <omair.jav...@linaro.org> wrote:

Hi Greg,

I intend to understand requirements to set up the lldb-vscode tool for Windows 
on Arm. I have been looking at your vscode readme from 
https://github.com/llvm/llvm-project/blob/cfae2c65dbbe1a252958b4db2e32574e8e8dcec0/lldb/tools/lldb-vscode/README.md

If I understood correctly, the Windows on Arm platform is missing a vscode adapter 
plugin required to make the lldb-vscode tool work on Arm/Windows. A similar 
adapter plugin is available for Windows x64 through third parties, but I am 
wondering if there is an official version of the same plugin which can be 
packaged after porting for Windows on Arm.

Right now we don’t distribute lldb-vscode through the marketplace, so you just 
need to build it yourself and then create the directory as mentioned in the 
readme.


Basically you just need to download the LLDB sources to a Windows on Arm 
machine and build “lldb-vscode”. Then you take the “lldb-vscode.exe” binary and 
the LLDB DLL and put them into the extension’s bin folder. Then make sure that 
you can launch the program from the terminal and that it finds the DLL 
right next to the program.

I am not sure where the VSCode extensions live in a Windows user folder, but 
the readme says:

~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin

So you would want to end up with:

~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin/lldb-vscode.exe
~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin/lldb.dll

(not sure of the name of the lldb library on Windows)

And then copy the package.json to the folder:

cp package.json ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0

Then you should be able to restart your VS Code IDE and the extension should be 
available. It is important to make sure that the program launches from the 
command line so you know it is able to locate the lldb shared 
library (DLL). If you launch the program and it doesn’t complain, it will just 
sit there waiting for input. If it can’t find the DLL, then it should report an 
error and you will need to figure out where the DLL needs to be relative to the 
program. I am not familiar with exactly how this works on Windows for a given 
EXE and how it locates the DLLs the main executable needs.

Let me know if you are able to get things working!

Greg



I'll really appreciate any sort of help/pointers to accelerate further progress.

Thanks!

--
Omair Javaid
www.linaro.org



Re: [lldb-dev] lldb-vscode plugin information for Windows/Arm platform

2022-01-20 Thread Greg Clayton via lldb-dev


> On Jan 20, 2022, at 4:40 PM, Omair Javaid  wrote:
> 
> Hi Greg,
> 
> I intend to understand requirements to set up the lldb-vscode tool for 
> Windows on Arm. I have been looking at your vscode readme from 
> https://github.com/llvm/llvm-project/blob/cfae2c65dbbe1a252958b4db2e32574e8e8dcec0/lldb/tools/lldb-vscode/README.md
>  
> 
> 
> If I understood correctly, the Windows on Arm platform is missing a vscode adapter 
> plugin required to make the lldb-vscode tool work on Arm/Windows. 
> A similar adapter plugin is available for Windows x64 through third parties, but 
> I am wondering if there is an official version of the same plugin which can 
> be packaged after porting for Windows on Arm.

Right now we don’t distribute lldb-vscode through the marketplace, so you just 
need to build it yourself and then create the directory as mentioned in the 
readme.

Basically you just need to download the LLDB sources to a Windows on Arm 
machine and build “lldb-vscode”. Then you take the “lldb-vscode.exe” binary and 
the LLDB DLL and put them into the extension’s bin folder. Then make sure that 
you can launch the program from the terminal and that it finds the DLL 
right next to the program.

I am not sure where the VSCode extensions live in a Windows user folder, but 
the readme says:

~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin

So you would want to end up with:

~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin/lldb-vscode.exe
~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin/lldb.dll 

(not sure of the name of the lldb library on Windows)

And then copy the package.json to the folder:

cp package.json ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0

Then you should be able to restart your VS Code IDE and the extension should be 
available. It is important to make sure that the program launches from the 
command line so you know it is able to locate the lldb shared 
library (DLL). If you launch the program and it doesn’t complain, it will just 
sit there waiting for input. If it can’t find the DLL, then it should report an 
error and you will need to figure out where the DLL needs to be relative to the 
program. I am not familiar with exactly how this works on Windows for a given 
EXE and how it locates the DLLs the main executable needs.

Let me know if you are able to get things working!

Greg

> 
> I'll really appreciate any sort of help/pointers to accelerate further 
> progress.
> 
> Thanks!
> 
> --
> Omair Javaid
> www.linaro.org 


[lldb-dev] lldb-vscode plugin information for Windows/Arm platform

2022-01-20 Thread Omair Javaid via lldb-dev
Hi Greg,

I intend to understand requirements to set up the lldb-vscode tool for
Windows on Arm. I have been looking at your vscode readme from
https://github.com/llvm/llvm-project/blob/cfae2c65dbbe1a252958b4db2e32574e8e8dcec0/lldb/tools/lldb-vscode/README.md

If I understood correctly, the Windows on Arm platform is missing a vscode
adapter plugin required to make the lldb-vscode tool work on Arm/Windows.
A similar adapter plugin is available for Windows x64 through third
parties, but I am wondering if there is an official version of the same
plugin which can be packaged after porting for Windows on Arm.

I'll really appreciate any sort of help/pointers to accelerate further
progress.

Thanks!

--
Omair Javaid
www.linaro.org


Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-20 Thread Greg Clayton via lldb-dev
Understood, we need to be able to log if "any" bits are set. 

> On Jan 20, 2022, at 2:18 PM, Jim Ingham  wrote:
> 
> 
> 
>> On Jan 20, 2022, at 11:26 AM, Pavel Labath  wrote:
>> 
>> On 20/01/2022 00:30, Greg Clayton wrote:
>>> I also vote to remove and simplify.
>> 
>> Sounds like it's settled then. I'll fire up my sed scripts.
>> 
>> On 20/01/2022 01:38, Greg Clayton wrote:
>>> On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote: 
 If we got rid of this, we could simplify the logging calls even further 
 and have something like:
 Log *log = GetLog(LLDBLog::Process);
>>> Can a template function deduce the log type from an argument? Wouldn't this 
>>> have to be:
>>> Log *log = GetLog<LLDBLog>(LLDBLog::Process);
>>> That is why I was hinting if we want to just use the enum class itself:
>>> Log *log = LLDBLog::GetLog(LLDBLog::Process);
>>> The template class in your second patch seems cool, but I don't understand 
>>> how it worked without going and reading up on templates
>>> in C++ and spending 20 minutes trying to wrap my brain around it.
>> Template functions have always been able to deduce template arguments.
>> Pretty much the entire c++ standard library is made of template
>> functions, but you don't see <> spelled out everywhere. Class templates
>> have not been able to auto-deduce template arguments until c++17, and I
>> am still not really clear on how that works.
>> 
>> The way that patch works is that you have one template function
>> `LogChannelFor`, which ties the enum to a specific channel class, and
>> then another one (GetLogIfAny), which returns the actual log object (and
>> uses the first one to obtain the channel).
>> 
>> But none of this is fundamentally tied to templates. One could achieve
>> the same thing by overloading the GetLogIfAny function (one overload for
>> each type). The template just saves a bit of repetition. This way, the
>> only thing you need to do when defining a new log channel, is to provide
>> the LogChannelFor function.
>> 
>>> Or do we just switch to a dedicated log class with unique methods:
>>> class LLDBLog: public Log { Log *Process() { return GetLog(1u << 0);
>>> } Log *Thread() { return GetLog(1u << 1); } };
>>> and avoid all the enums? Then we can't ever feed a bad enum or #define  
>>> into the wrong log class.
>> 
>> That could work too, and would definitely have some advantages -- for
>> instance we could prefix each message with the log channel it was going
>> to. The downside is that we would lose the ability to send one message to 
>> multiple log channels at once, and I believe that some (Jim?) value that 
>> functionality.
> 
> I think I’m just quibbling about terminology, I don’t think it’s possible for 
> one site to send its log message to two channels in a single go.  That would 
> be like “lldb types” and “dwarf info” for a single log statement.
> Anyway, that’s not something I see as particularly useful.
> 
> What is useful is to say “this message goes out on the lldb channel if any of 
> these categories (“step” and “expr” for instance) is set.”  I don’t really 
> think of that as sending the message to multiple channels, since it’s only 
> going to go out once, but the test is broader.
> 
> But, IIUC, Greg’s proposal would make that impossible as well, so I’m 
> still against it…
> 
> Jim
> 
> 
>> 
>> pl



Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-20 Thread Jim Ingham via lldb-dev


> On Jan 20, 2022, at 11:26 AM, Pavel Labath  wrote:
> 
> On 20/01/2022 00:30, Greg Clayton wrote:
>> I also vote to remove and simplify.
> 
> Sounds like it's settled then. I'll fire up my sed scripts.
> 
> On 20/01/2022 01:38, Greg Clayton wrote:
>> On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote: 
>>> If we got rid of this, we could simplify the logging calls even further and 
>>> have something like:
>>> Log *log = GetLog(LLDBLog::Process);
>> Can a template function deduce the log type from an argument? Wouldn't this 
>> have to be:
>> Log *log = GetLog<LLDBLog>(LLDBLog::Process);
>> That is why I was hinting if we want to just use the enum class itself:
>> Log *log = LLDBLog::GetLog(LLDBLog::Process);
>> The template class in your second patch seems cool, but I don't understand 
>> how it worked without going and reading up on templates
>> in C++ and spending 20 minutes trying to wrap my brain around it.
> Template functions have always been able to deduce template arguments.
> Pretty much the entire c++ standard library is made of template
> functions, but you don't see <> spelled out everywhere. Class templates
> have not been able to auto-deduce template arguments until c++17, and I
> am still not really clear on how that works.
> 
> The way that patch works is that you have one template function
> `LogChannelFor`, which ties the enum to a specific channel class, and
> then another one (GetLogIfAny), which returns the actual log object (and
> uses the first one to obtain the channel).
> 
> But none of this is fundamentally tied to templates. One could achieve
> the same thing by overloading the GetLogIfAny function (one overload for
> each type). The template just saves a bit of repetition. This way, the
> only thing you need to do when defining a new log channel, is to provide
> the LogChannelFor function.
> 
>> Or do we just switch to a dedicated log class with unique methods:
>> class LLDBLog: public Log { Log *Process() { return GetLog(1u << 0);
>> } Log *Thread() { return GetLog(1u << 1); } };
>> and avoid all the enums? Then we can't ever feed a bad enum or #define  into 
>> the wrong log class.
> 
> That could work too, and would definitely have some advantages -- for
> instance we could prefix each message with the log channel it was going
> to. The downside is that we would lose the ability to send one message to 
> multiple log channels at once, and I believe that some (Jim?) value that 
> functionality.

I think I’m just quibbling about terminology, I don’t think it’s possible for 
one site to send its log message to two channels in a single go.  That would be 
like “lldb types” and “dwarf info” for a single log statement.
Anyway, that’s not something I see as particularly useful.

What is useful is to say “this message goes out on the lldb channel if any of 
these categories (“step” and “expr” for instance) is set.”  I don’t really 
think of that as sending the message to multiple channels, since it’s only 
going to go out once, but the test is broader.

But, IIUC, Greg’s proposal would make that impossible as well, so I’m 
still against it…

Jim


> 
> pl



Re: [lldb-dev] [Release-testers] LLVM 14.0.0 Release Schedule

2022-01-20 Thread Tom Stellard via lldb-dev

On 1/20/22 00:28, Michał Górny wrote:

On Wed, 2022-01-19 at 21:23 -0800, Tom Stellard via Release-testers
wrote:

Hi,

I've posted the proposed 14.0.0 Release Schedule here: 
https://llvm.discourse.group/t/llvm-14-0-0-release-schedule/5846



Any reason this isn't in the 'release testers' category you told us to
follow?



I'm still trying to figure out the best place to post messages.  For the
schedule announcement I wanted to make sure it went out to the whole project
and not just the release testers.

-Tom



Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-20 Thread Pavel Labath via lldb-dev

On 20/01/2022 00:30, Greg Clayton wrote:

I also vote to remove and simplify.


Sounds like it's settled then. I'll fire up my sed scripts.

On 20/01/2022 01:38, Greg Clayton wrote:



On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote: 
If we got rid of this, we could simplify the logging calls even 
further and have something like:

Log *log = GetLog(LLDBLog::Process);


Can a template function deduce the log type from an argument? 
Wouldn't this have to be:


Log *log = GetLog<LLDBLog>(LLDBLog::Process);

That is why I was hinting if we want to just use the enum class 
itself:


Log *log = LLDBLog::GetLog(LLDBLog::Process);

The template class in your second patch seems cool, but I don't 
understand how it worked without going and reading up on templates

in C++ and spending 20 minutes trying to wrap my brain around it.

Template functions have always been able to deduce template arguments.
Pretty much the entire c++ standard library is made of template
functions, but you don't see <> spelled out everywhere. Class templates
have not been able to auto-deduce template arguments until c++17, and I
am still not really clear on how that works.

The way that patch works is that you have one template function
`LogChannelFor`, which ties the enum to a specific channel class, and
then another one (GetLogIfAny), which returns the actual log object (and
uses the first one to obtain the channel).

But none of this is fundamentally tied to templates. One could achieve
the same thing by overloading the GetLogIfAny function (one overload for
each type). The template just saves a bit of repetition. This way, the
only thing you need to do when defining a new log channel, is to provide
the LogChannelFor function.



Or do we just switch to a dedicated log class with unique methods:

class LLDBLog: public Log { Log *Process() { return GetLog(1u << 0);
} Log *Thread() { return GetLog(1u << 1); } };

and avoid all the enums? Then we can't ever feed a bad enum or 
#define  into the wrong log class.


That could work too, and would definitely have some advantages -- for
instance we could prefix each message with the log channel it was going
to. The downside is that we would lose the ability to send one message 
to multiple log channels at once, and I believe that some (Jim?) value 
that functionality.


pl


Re: [lldb-dev] [Release-testers] LLVM 14.0.0 Release Schedule

2022-01-20 Thread Michał Górny via lldb-dev
On Wed, 2022-01-19 at 21:23 -0800, Tom Stellard via Release-testers
wrote:
> Hi,
> 
> I've posted the proposed 14.0.0 Release Schedule here: 
> https://llvm.discourse.group/t/llvm-14-0-0-release-schedule/5846
> 

Any reason this isn't in the 'release testers' category you told us to
follow?

-- 
Best regards,
Michał Górny



[lldb-dev] LLVM 14.0.0 Release Schedule

2022-01-19 Thread Tom Stellard via lldb-dev

Hi,

I've posted the proposed 14.0.0 Release Schedule here: 
https://llvm.discourse.group/t/llvm-14-0-0-release-schedule/5846

-Tom



Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-19 Thread Greg Clayton via lldb-dev


> On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote:
> 
> Hi all,
> 
> In case you haven't noticed, I'd like to draw your attention to the in-flight 
> patches (https://reviews.llvm.org/D117382, https://reviews.llvm.org/D117490) 
> whose goal is to clean up/improve/streamline the logging infrastructure.
> 
> I don't want to go into technical details here (they're on the patches), but the 
> general idea is to replace statements like 
> GetLogIf(Any/All)CategoriesSet(LIBLLDB_LOG_CAT1 | LIBLLDB_LOG_CAT2)
> with
> GetLogIf(Any/All)(LLDBLog::Cat1 | LLDBLog::Cat2)
> i.e., drop macros and make use of templates to make the function calls 
> shorter and safer.
> 
> The reason I'm writing this email is to ask about the "All" versions of these 
> logging functions. Do you find them useful in practice?
> 
> I'm asking that because I've never used this functionality. While I can't 
> find anything wrong with the concept in theory, practically I think it's just 
> confusing to have some log message appear only for some combination of 
> enabled channels. It might have made some sense when we had a "verbose" 
> logging channel, but that one is long gone (we still have a verbose logging 
> *flag*).
> 
> In fact, out of all our GetLogIf calls (1203), less than 1% (11*) uses the 
> GetLogIfAll form with more than one category. Of those, three are in tests, 
> one is definitely a bug (it combines the category with 
> LLDB_LOG_OPTION_VERBOSE), and the others (7) are of questionable usefulness 
> (to me anyway).
> 
> If we got rid of this, we could simplify the logging calls even further and 
> have something like:
> Log *log = GetLog(LLDBLog::Process);

Can a template function deduce the log type from an argument? Wouldn't this 
have to be:

Log *log = GetLog<LLDBLog>(LLDBLog::Process);

That is why I was hinting if we want to just use the enum class itself:

Log *log = LLDBLog::GetLog(LLDBLog::Process);

The template class in your second patch seems cool, but I don't understand how 
it worked without going and reading up on templates in C++ and spending 20 
minutes trying to wrap my brain around it.

Or do we just switch to a dedicated log class with unique methods:

class LLDBLog: public Log {
  Log *Process() { return GetLog(1u << 0); }
  Log *Thread() { return GetLog(1u << 1); }
};

and avoid all the enums? Then we can't ever feed a bad enum or #define into the 
wrong log class.


Re: [lldb-dev] Multiple platforms with the same name

2022-01-19 Thread Greg Clayton via lldb-dev


> On Jan 19, 2022, at 4:28 AM, Pavel Labath  wrote:
> 
> On 19/01/2022 00:38, Greg Clayton wrote:
>> Platforms can contain connection-specific settings and data. You might want 
>> to create two different "remote-linux" platforms and connect each one to a 
>> different remote linux machine. Each target which uses this platform would 
>> be able to fetch files, resolve symbol files, get OS version/build 
>> string/kernel info, and get/set the working directory from the remote server they 
>> are attached to. Since each platform tends to belong to a target and since you 
>> might want to create two different targets and have each one connected to a 
>> different remote machine, I believe it is fine to have multiple instances.
>> I would vote to almost always create a new instance unless it is the host 
>> platform. Though it should be possible to create two targets and possibly set 
>> the platform on one target using the platform from another that might 
>> already be connected.
>> I am open to suggestions if anyone has any objections.
>> Greg
> 
> I agree that permitting multiple platforms would be a more principled 
> position, but it was not clear to me if that was ever planned to be the case.

This code definitely evolved as time went on. Then we added the remote 
capabilities. As Jim said, there are two parts for the platform that _could_ be 
separated: PlatformLocal and PlatformRemote. Horrible names that can be 
improved upon, I am sure, but just names I quickly came up with.

PlatformLocal would be "what can I do for a platform that only involves finding 
things on this machine for supporting debugging on a remote platform". This 
would involve things like:
- where are remote files cached on the local machine for easy access
- where can I locate SDK/NDK stuff that might help me for this platform
- what architectures/triples are supported by this platform so it can be 
selected
- how to start a debug session for a given binary (which might use parts of 
PlatformRemote) as platforms like "iOS-simulator" do not require any remote 
connections to be able to start a process. Same could happen for VM based 
debugging on a local machine.

PlatformRemote
- get/put files
- get/set working directory
- install executable so OS can see/launch it
- create/delete directories

So as things evolved, everything got thrown into the Platform class and we just 
made things work as we went. I am sure this can be improved.

> 
> If it was (or if we want it to be), then I think we need to start making 
> bigger distinctions between the platform plugins (classes), and the actual 
> instantiations of those classes. Currently there is no way to refer to 
> "older" instances of the platforms as they all share the same name (the name 
> of the plugin). Like, you can enumerate them through 
> SBDebugger.GetPlatformAtIndex(), but that's about the only thing you can do 
> as all the interfaces (including the SB ones) take a platform _name_ as an 
> argument. This gets particularly confusing as in some circumstances we end up 
> choosing the newer one (e.g. if it's the "current" platform) and sometimes the 
> older.
> 
> If we want to do that, then this is what I'd propose:
> a) Each platform plugin and each platform instance gets a name. We enforce 
> the uniqueness of these names (within their category).

Maybe it would be better to maintain the name, but implement an instance 
identifier for each platform instance?

> b) "platform list" outputs two blocks -- the list of available plugins and the 
> list of plugin instances

If we added an instance identifier, then we could just show the available 
plug-in names followed by their instances?

> c) a new "platform create" command to create a platform
>  - e.g. "platform create my-arm-test-machine --plugin remote-linux"

Now we are assuming you want to connect to a remote machine when we create the 
platform? "platform connect" can be used currently if we want to actually 
connect to a remote platform, but there is a lot of stuff in the iOS platforms 
that really only deals with finding stuff on the local machine. Each platform 
plugin in "platform connect" has the ability to create its own unique 
connection arguments and options which is nice for different platforms.

The creation and connecting should still be done separately. Seeing the 
arguments you added above leads me to believe this is like a "select" and a 
"connect" all in one. And each "platform connect" has unique and different 
arguments and options that are tailored to each plug-in currently.

> d) "platform select" selects the platform with the given /instance/ name
>  - for convenience and compatibility if the name does not refer to any 
> existing platform instance, but it *does* refer to a platform plugin, it 
> would create a platform instance with the same name as the class. (So the 
> first "platform select remote-linux" would create a new instance (also called 
> remote-linux) and all subsequent selects would switch to 

Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-19 Thread Greg Clayton via lldb-dev
I also vote to remove and simplify.

> On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote:
> 
> Hi all,
> 
> In case you haven't noticed, I'd like to draw your attention to the in-flight 
> patches (https://reviews.llvm.org/D117382, https://reviews.llvm.org/D117490) 
> whose goal is to clean up/improve/streamline the logging infrastructure.
> 
> I don't want to go into technical details here (they're on the patch), but the 
> general idea is to replace statements like 
> GetLogIf(Any/All)CategoriesSet(LIBLLDB_LOG_CAT1 | LIBLLDB_LOG_CAT2)
> with
> GetLogIf(Any/All)(LLDBLog::Cat1 | LLDBLog::Cat2)
> i.e., drop macros and make use of templates to make the function calls 
> shorter and safer.
> 
> The reason I'm writing this email is to ask about the "All" versions of these 
> logging functions. Do you find them useful in practice?
> 
> I'm asking that because I've never used this functionality. While I can't 
> find anything wrong with the concept in theory, practically I think it's just 
> confusing to have some log message appear only for some combination of 
> enabled channels. It might have made some sense when we had a "verbose" 
> logging channel, but that one is long gone (we still have a verbose logging 
> *flag*).
> 
> In fact, out of all our GetLogIf calls (1203), less than 1% (11*) uses the 
> GetLogIfAll form with more than one category. Of those, three are in tests, 
> one is definitely a bug (it combines the category with 
> LLDB_LOG_OPTION_VERBOSE), and the others (7) are of questionable usefulness 
> (to me anyway).
> 
> If we got rid of this, we could simplify the logging calls even further and 
> have something like:
> Log *log = GetLog(LLDBLog::Process);
> everywhere.
> 
> cheers,
> pl
> 
> (*) I used this command to count:
> $ git grep -e LogIfAll -A 1 | fgrep -e '|' | wc -l

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-19 Thread Jonas Devlieghere via lldb-dev


> On Jan 19, 2022, at 10:25 AM, Jim Ingham  wrote:
> 
> 
> 
>> On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote:
>> 
>> Hi all,
>> 
>> In case you haven't noticed, I'd like to draw your attention to the 
>> in-flight patches (https://reviews.llvm.org/D117382, 
>> https://reviews.llvm.org/D117490) whose goal is to clean up/improve/streamline the 
>> logging infrastructure.
>> 
>> I don't want to go into technical details here (they're on the patch), but 
>> the general idea is to replace statements like 
>> GetLogIf(Any/All)CategoriesSet(LIBLLDB_LOG_CAT1 | LIBLLDB_LOG_CAT2)
>> with
>> GetLogIf(Any/All)(LLDBLog::Cat1 | LLDBLog::Cat2)
>> i.e., drop macros and make use of templates to make the function calls 
>> shorter and safer.
>> 
>> The reason I'm writing this email is to ask about the "All" versions of 
>> these logging functions. Do you find them useful in practice?
>> 
>> I'm asking that because I've never used this functionality. While I can't 
>> find anything wrong with the concept in theory, practically I think it's 
>> just confusing to have some log message appear only for some combination of 
>> enabled channels. It might have made some sense when we had a "verbose" 
>> logging channel, but that one is long gone (we still have a verbose logging 
>> *flag*).
>> 
>> In fact, out of all our GetLogIf calls (1203), less than 1% (11*) uses the 
>> GetLogIfAll form with more than one category. Of those, three are in tests, 
>> one is definitely a bug (it combines the category with 
>> LLDB_LOG_OPTION_VERBOSE), and the others (7) are of questionable usefulness 
>> (to me anyway).
>> 
>> If we got rid of this, we could simplify the logging calls even further and 
>> have something like:
>> Log *log = GetLog(LLDBLog::Process);
>> everywhere.
> 
> The only time I’ve ever “used” GetLogIfAll was when I added another LOG 
> option to a log call, not noticing it was “All”, finding the new log didn’t 
> work, and going back to switch “All” to “Any”.
> 
> I vote for removing it.

+1 

> 
> Jim
> 
> 
>> 
>> cheers,
>> pl
>> 
>> (*) I used this command to count:
>> $ git grep -e LogIfAll -A 1 | fgrep -e '|' | wc -l



Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-19 Thread Jim Ingham via lldb-dev


> On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote:
> 
> Hi all,
> 
> In case you haven't noticed, I'd like to draw your attention to the in-flight 
> patches (https://reviews.llvm.org/D117382, https://reviews.llvm.org/D117490) 
> whose goal is to clean up/improve/streamline the logging infrastructure.
> 
> I don't want to go into technical details here (they're on the patch), but the 
> general idea is to replace statements like 
> GetLogIf(Any/All)CategoriesSet(LIBLLDB_LOG_CAT1 | LIBLLDB_LOG_CAT2)
> with
> GetLogIf(Any/All)(LLDBLog::Cat1 | LLDBLog::Cat2)
> i.e., drop macros and make use of templates to make the function calls 
> shorter and safer.
> 
> The reason I'm writing this email is to ask about the "All" versions of these 
> logging functions. Do you find them useful in practice?
> 
> I'm asking that because I've never used this functionality. While I can't 
> find anything wrong with the concept in theory, practically I think it's just 
> confusing to have some log message appear only for some combination of 
> enabled channels. It might have made some sense when we had a "verbose" 
> logging channel, but that one is long gone (we still have a verbose logging 
> *flag*).
> 
> In fact, out of all our GetLogIf calls (1203), less than 1% (11*) uses the 
> GetLogIfAll form with more than one category. Of those, three are in tests, 
> one is definitely a bug (it combines the category with 
> LLDB_LOG_OPTION_VERBOSE), and the others (7) are of questionable usefulness 
> (to me anyway).
> 
> If we got rid of this, we could simplify the logging calls even further and 
> have something like:
> Log *log = GetLog(LLDBLog::Process);
> everywhere.

The only time I’ve ever “used” GetLogIfAll was when I added another LOG option 
to a log call, not noticing it was “All”, finding the new log didn’t work, and 
going back to switch “All” to “Any”.

I vote for removing it.

Jim


> 
> cheers,
> pl
> 
> (*) I used this command to count:
> $ git grep -e LogIfAll -A 1 | fgrep -e '|' | wc -l



Re: [lldb-dev] Multiple platforms with the same name

2022-01-19 Thread Jim Ingham via lldb-dev


> On Jan 19, 2022, at 4:28 AM, Pavel Labath  wrote:
> 
> On 19/01/2022 00:38, Greg Clayton wrote:
>> Platforms can contain connection-specific settings and data. You might want 
>> to create two different "remote-linux" platforms and connect each one to a 
>> different remote linux machine. Each target which uses this platform would 
>> be able to fetch files, resolve symbol files, get OS version/build 
>> string/kernel info, and get/set the working directory from the remote server 
>> it is attached to. Since each platform tends to belong to a target and since 
>> you might want to create two different targets and have each one connected 
>> to a different remote machine, I believe it is fine to have multiple instances.
>> I would vote to almost always create a new instance unless it is the host 
>> platform. Though it should be possible to create two targets and possibly set 
>> the platform on one target using the platform from another that might 
>> already be connected.
>> I am open to suggestions if anyone has any objections.
>> Greg
> 
> I agree that permitting multiple platforms would be a more principled 
> position, but it was not clear to me if that was ever planned to be the case.

We made a choice early on in lldb that it would be a one-to-many debugger (as 
opposed to gdb where you use one gdb process to debug one inferior).  The idea 
was to allow people who have more complex inter-app communications to use the 
scripting features of lldb to make the process of debugging IPC and such-like 
more natural (like a “step-in” that steps across process boundaries when you 
step into a message dispatch).  Or to run two instances that are slightly 
different and compare the paths through some bit of code.  Or other cool uses 
we hadn’t thought of.  I don’t do this kind of debugging much either, but then 
I just debug lldb all the time, which is a fairly simple process, and its 
communication with the stub is pretty simple.  So I don’t think that’s 
dispositive for how useful this design actually is...

Since the Platform class holds details about the current debug sessions on that 
platform, it has to take part in this design, which means either allowing one 
Platform to connect to all instances of its kind that lldb might want to 
debug, or making one Platform per instance.  The latter design was what we had 
always intended, it is certainly how we’ve talked about it for as long as I can 
remember.  OTOH, the whole Platform class is a bit of a mashup, since it holds 
both “things you need to know about a class of systems in order to debug on 
them” and “the connection you make to a particular instance”.  I think the 
intention would be clearer if we separated the “PlatformExpert” part of 
Platform and the “the Remote machine I’m talking to” part of Platform.

> 
> If it was (or if we want it to be), then I think we need to start making 
> bigger distinctions between the platform plugins (classes), and the actual 
> instantiations of those classes. Currently there is no way to refer to 
> "older" instances of the platforms as they all share the same name (the name 
> of the plugin). Like, you can enumerate them through 
> SBDebugger.GetPlatformAtIndex(), but that's about the only thing you can do 
> as all the interfaces (including the SB ones) take a platform _name_ as an 
> argument. This gets particularly confusing as in some circumstances we end up 
> choosing the newer one (e.g. if it's the "current" platform) and sometimes the 
> older.
> 
> If we want to do that, then this is what I'd propose:
> a) Each platform plugin and each platform instance gets a name. We enforce 
> the uniqueness of these names (within their category).
> b) "platform list" outputs two block -- the list of available plugins and the 
> list of plugin instances
> c) a new "platform create" command to create a platform
>  - e.g. "platform create my-arm-test-machine --plugin remote-linux"
> d) "platform select" selects the platform with the given /instance/ name
>  - for convenience and compatibility if the name does not refer to any 
> existing platform instance, but it *does* refer to a platform plugin, it 
> would create a platform instance with the same name as the class. (So the 
> first "platform select remote-linux" would create a new instance (also called 
> remote-linux) and all subsequent selects would switch to that one -- a change 
> to existing behavior)
> e) SBPlatform gets a static factory function taking two string arguments
> f) existing SBPlatform constructor (taking one string) creates a new platform 
> instance with a name selected by us (remote-linux, remote-linux-2, etc.), but 
> its use is discouraged/deprecated.
> g) all other existing APIs (command line and SB) remain unchanged but any 
> "platform name" argument is taken to mean the platform instance name, and it 
> has the "platform select" semantics (select if it exists, create if it 
> doesn't)
> 
> I think this would strike a good balance between 

[lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-19 Thread Pavel Labath via lldb-dev

Hi all,

In case you haven't noticed, I'd like to draw your attention to the 
in-flight patches (https://reviews.llvm.org/D117382, 
https://reviews.llvm.org/D117490) whose goal is to clean up/improve/streamline 
the logging infrastructure.


I don't want to go into technical details here (they're on the patch), 
but the general idea is to replace statements like 
GetLogIf(Any/All)CategoriesSet(LIBLLDB_LOG_CAT1 | LIBLLDB_LOG_CAT2)

with
GetLogIf(Any/All)(LLDBLog::Cat1 | LLDBLog::Cat2)
i.e., drop macros and make use of templates to make the function calls 
shorter and safer.


The reason I'm writing this email is to ask about the "All" versions of 
these logging functions. Do you find them useful in practice?


I'm asking that because I've never used this functionality. While I 
can't find anything wrong with the concept in theory, practically I 
think it's just confusing to have some log message appear only for some 
combination of enabled channels. It might have made some sense when we 
had a "verbose" logging channel, but that one is long gone (we still 
have a verbose logging *flag*).


In fact, out of all our GetLogIf calls (1203), less than 1% (11*) uses 
the GetLogIfAll form with more than one category. Of those, three are in 
tests, one is definitely a bug (it combines the category with 
LLDB_LOG_OPTION_VERBOSE), and the others (7) are of questionable 
usefulness (to me anyway).


If we got rid of this, we could simplify the logging calls even further 
and have something like:

Log *log = GetLog(LLDBLog::Process);
everywhere.

cheers,
pl

(*) I used this command to count:
$ git grep -e LogIfAll -A 1 | fgrep -e '|' | wc -l
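The macro-to-template replacement Pavel proposes can be sketched in a few lines.
This is a simplified illustration, not the actual code from D117382 -- the
globals and the Log type here are stand-ins -- but it shows how a strongly
typed flag enum lets GetLogIfAny/GetLogIfAll drop the LIBLLDB_LOG_* macros
while keeping both "any" and "all" semantics distinguishable:

```cpp
#include <cstdint>

// Hypothetical stand-ins for lldb's real types; a strongly typed enum
// replaces the old LIBLLDB_LOG_* macro constants.
enum class LLDBLog : uint64_t {
  Process = 1 << 0,
  Thread = 1 << 1,
  Breakpoints = 1 << 2,
};

constexpr LLDBLog operator|(LLDBLog a, LLDBLog b) {
  return static_cast<LLDBLog>(static_cast<uint64_t>(a) |
                              static_cast<uint64_t>(b));
}

struct Log {};
static Log g_log;
static uint64_t g_enabled_categories = 0; // set via "log enable" in real lldb

// "Any" semantics: return the log if at least one requested category is on.
template <typename Cat> Log *GetLogIfAny(Cat mask) {
  return (g_enabled_categories & static_cast<uint64_t>(mask)) ? &g_log
                                                              : nullptr;
}

// "All" semantics: every requested category must be enabled -- the variant
// whose usefulness this email questions.
template <typename Cat> Log *GetLogIfAll(Cat mask) {
  uint64_t m = static_cast<uint64_t>(mask);
  return (g_enabled_categories & m) == m ? &g_log : nullptr;
}
```

If the "All" form were removed, the single remaining entry point could shrink
to the `Log *log = GetLog(LLDBLog::Process);` shape suggested above.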


Re: [lldb-dev] Multiple platforms with the same name

2022-01-19 Thread Pavel Labath via lldb-dev

On 19/01/2022 00:38, Greg Clayton wrote:

Platforms can contain connection-specific settings and data. You might want to create two 
different "remote-linux" platforms and connect each one to a different remote 
linux machine. Each target which uses this platform would be able to fetch files, 
resolve symbol files, get OS version/build string/kernel info, and get/set the 
working directory from the remote server it is attached to. Since each platform 
tends to belong to a target and since you might want to create two different targets 
and have each one connected to a different remote machine, I believe it is fine to 
have multiple instances.

I would vote to almost always create a new instance unless it is the host 
platform. Though it should be possible to create two targets and possibly set 
the platform on one target using the platform from another that might already 
be connected.

I am open to suggestions if anyone has any objections.

Greg


I agree that permitting multiple platforms would be a more principled 
position, but it was not clear to me if that was ever planned to be the 
case.


If it was (or if we want it to be), then I think we need to start making 
bigger distinctions between the platform plugins (classes), and the 
actual instantiations of those classes. Currently there is no way to 
refer to "older" instances of the platforms as they all share the same 
name (the name of the plugin). Like, you can enumerate them through 
SBDebugger.GetPlatformAtIndex(), but that's about the only thing you can 
do as all the interfaces (including the SB ones) take a platform _name_ 
as an argument. This gets particularly confusing as in some 
circumstances we end up choosing the newer one (e.g. if it's the 
"current" platform) and sometimes the older.


If we want to do that, then this is what I'd propose:
a) Each platform plugin and each platform instance gets a name. We 
enforce the uniqueness of these names (within their category).
b) "platform list" outputs two block -- the list of available plugins 
and the list of plugin instances

c) a new "platform create" command to create a platform
  - e.g. "platform create my-arm-test-machine --plugin remote-linux"
d) "platform select" selects the platform with the given /instance/ name
  - for convenience and compatibility if the name does not refer to any 
existing platform instance, but it *does* refer to a platform plugin, it 
would create a platform instance with the same name as the class. (So 
the first "platform select remote-linux" would create a new instance 
(also called remote-linux) and all subsequent selects would switch to 
that one -- a change to existing behavior)

e) SBPlatform gets a static factory function taking two string arguments
f) existing SBPlatform constructor (taking one string) creates a new 
platform instance with a name selected by us (remote-linux, 
remote-linux-2, etc.), but its use is discouraged/deprecated.
g) all other existing APIs (command line and SB) remain unchanged but 
any "platform name" argument is taken to mean the platform instance 
name, and it has the "platform select" semantics (select if it exists, 
create if it doesn't)


I think this would strike a good balance between a consistent interface 
and preserving existing semantics. The open questions are:
- is it worth it? While nice in theory, personally I have never actually 
needed to connect to more than one machine at the same time.
- what to do about platform-specific settings. The functionality has 
existed for a long time, but there was only one plugin 
(PlatformDarwinKernel) using it. I've now added a bunch of settings to 
the qemu-user platform on the assumption that there will only be one 
instance of the class. These are global, but they would really make more 
sense on a per-instance basis. We could either leave it be (I don't need 
multiple instances now), or come up with a way to have per-platform 
settings, similar like we do for targets. We could also do something 
with the "platform settings" command, which currently only sets the 
working directory.


Let me know what you think,
Pavel


Re: [lldb-dev] Source-level stepping with emulated instructions

2022-01-19 Thread Kjell Winblad via lldb-dev
Thank you, Pavel and Jim, for the very helpful answers.

> Note, the “no debug info" part is not a strict requirement,
> since lldb also has controls for “shared libraries we never
> stop in when stepping” and “name regular expressions
> for functions we don’t stop in”.  At present, those controls
> are just for user-specification.  But if you find you need to
> do something more programmatic, you can easily add
> another “don’t stop in me” check specific to your
> architecture.

I have done some quick experiments with
"target.process.thread.step-avoid-regexp" and it seems like I can use
it to get the behavior we want so adding a “don’t stop in me” check
specific to our architecture seems like a good solution for us.

Best regards,
Kjell

On Tue, 18 Jan 2022 at 19:42, Jim Ingham via lldb-dev
 wrote:
>
> I think that description was a bit too much inside baseball….
>
> Every time lldb stops outside a stepping range while stepping, it invokes a 
> set of “should stop here” agents to determine what to do next.  If any of 
> those agents think we should NOT stop here, they are expected to produce a 
> set of instructions (i.e. a ThreadPlan) that drive the thread back to the 
> original code.
>
> The “we stopped in a function with no debug information, step back out” 
> behavior of lldb is implemented this way.  So if you can make your emulated 
> instruction regions look like function calls w/o debug info, you would get 
> this behavior for free.  But the mechanism is pretty flexible, and you just 
> need to leave yourself enough information to (a) know you are in one of your 
> regions and (b) how to drive the debugger to get back to the code that 
> invoked this emulated instruction, in order to get lldb’s “should stop here” 
> machinery to do what you want.
>
> Jim
>
>
> On Jan 18, 2022, at 10:28 AM, Jim Ingham via lldb-dev 
>  wrote:
>
>
>
> On Jan 16, 2022, at 11:23 PM, Pavel Labath  wrote:
>
> Hi Kjell,
>
> if you say these instructions are similar to function calls, then it sounds 
> to me like the best option would be to get lldb to treat them like function 
> calls. I think (Jim can correct me if I'm wrong) this consists of two things:
> - make sure lldb recognizes that these instructions can alter control flow 
> (Disassembler::GetIndexOfNextBranchInstruction). You may have done this 
> already.
> - make sure lldb can unwind out of these "functions" when it winds up inside 
> them. This will ensure the user does not stop in these functions when he does 
> a "step over". This means providing it the correct unwind info so it knows 
> where the functions will return. (As the functions know how to return to the 
> original instructions, this information has to be somewhere, and is hopefully 
> accessible to the debugger.) Probably the cleanest way to do that would be to 
> create a new Module object, which would contain the implementations of all 
> these functions, and all of their debug info. Then you could provide the 
> unwind info through the usual channels (e.g. .debug_frame), and it has the 
> advantage that you can also include any other information about these 
> functions (names, line numbers, whatever...)
>
>
> Pavel is right, if these blobs look like function calls with no debug 
> information, then lldb won’t stop in them by default. Note, the “no debug 
> info" part is not a strict requirement, since lldb also has controls for 
> “shared libraries we never stop in when stepping” and “name regular 
> expressions for functions we don’t stop in”. At present, those controls are 
> just for user-specification. But if you find you need to do something more 
> programmatic, you can easily add another “don’t stop in me” check specific to 
> your architecture.
>
> All this will work pretty transparently if the unwinder is able to tell us 
> how to get out of the function and back to its caller. But even if that’s 
> not the case, the “should stop here” mechanism in lldb works at a lower 
> level by having the agent saying we should NOT stop here return a ThreadPlan 
> telling us how to get to the caller frame. For a function call, you get the 
> step out plan for free. But that’s not a requirement, your emulated 
> instruction region doesn’t strictly need to be a function call, provided you 
> know how to produce a thread plan that will step out of it.
>
> Jim
>
>
>
> pl
>
> On 15/01/2022 07:49, Kjell Winblad via lldb-dev wrote:
>
> Hi!
> I'm implementing LLDB support for a new processor architecture that
> the company I'm working for has created. The processor architecture
> has a few emulated instructions. An emulated instruction works by
> jumping to a specific address that contains the star

Re: [lldb-dev] Multiple platforms with the same name

2022-01-18 Thread Greg Clayton via lldb-dev
Platforms can contain connection-specific settings and data. You might want to 
create two different "remote-linux" platforms and connect each one to a 
different remote linux machine. Each target which uses this platform would 
be able to fetch files, resolve symbol files, get OS version/build 
string/kernel info, and get/set the working directory from the remote server it 
is attached to. Since each platform tends to belong to a target and since you 
might want to create two different targets and have each one connected to a 
different remote machine, I believe it is fine to have multiple instances.

I would vote to almost always create a new instance unless it is the host 
platform. Though it should be possible to create two targets and possibly set 
the platform on one target using the platform from another that might already 
be connected. 

I am open to suggestions if anyone has any objections.

Greg

> On Jan 17, 2022, at 8:18 AM, Pavel Labath  wrote:
> 
> Hello all,
> 
> currently our code treats the platform name more-or-less as a unique identifier 
> (e.g. Platform::Find returns at most one platform instance --the first one it 
> finds).
> 
> This is why I was surprised that the "platform select" CLI command always 
> creates a new instance of the given platform, even if the platform of a given 
> name already exists. This is because Platform::Create does not search the 
> existing platform list before creating a new one. This might sound reasonable 
> at first, but for example the Platform::Create overload which takes an 
> ArchSpec first tries to look for a compatible platform among the existing 
> ones before creating a new one.
> 
> For this reason, I am tempted to call this a bug and fix the name-taking 
> Create overload. This change passes the test suite, except for a single test, 
> which now gets confused because some information gets leaked from one test to 
> another. (although our coverage of the Platform class in the tests is fairly 
> weak)
> 
> However, this test got me thinking. It happens to use the SB way of 
> manipulating platforms, and "creates" a new instance as 
> lldb.SBPlatform("remote-linux"). For this kind of a command, it would be 
> reasonable/expected to create a new instance, were it not for the fact that 
> this platform would be very tricky to access from the command line, and even 
> through some APIs -- SBDebugger::CreateTarget takes a platform _name_.
> 
> So, which one is it? Should we always have at most one instance of each 
> platform, or are multiple instances ok?

> cheers,
> pl
> 
> PS: In case you're wondering about how I ran into this, I was trying to 
> create a pre-configured platform instance in (e.g.) an lldbinit file, without 
> making it the default. That way it would get automatically selected when the 
> user opens an executable of the appropriate type. This actually works, 
> *except* for the case when the user selects the platform manually. That's 
> because in that case, we would create an empty/unpopulated platform, and it 
> would be the one being selected because it was the /current/ platform.



Re: [lldb-dev] Source-level stepping with emulated instructions

2022-01-18 Thread Jim Ingham via lldb-dev


> On Jan 16, 2022, at 11:23 PM, Pavel Labath  wrote:
> 
> Hi Kjell,
> 
> if you say these instructions are similar to function calls, then it sounds 
> to me like the best option would be to get lldb to treat them like function 
> calls. I think (Jim can correct me if I'm wrong) this consists of two things:
> - make sure lldb recognizes that these instructions can alter control flow 
> (Disassembler::GetIndexOfNextBranchInstruction). You may have done this 
> already.
> - make sure lldb can unwind out of these "functions" when it winds up inside 
> them. This will ensure the user does not stop in these functions when he does 
> a "step over". This means providing it the correct unwind info so it knows 
> where the functions will return. (As the functions know how to return to the 
> original instructions, this information has to be somewhere, and is hopefully 
> accessible to the debugger.) Probably the cleanest way to do that would be to 
> create a new Module object, which would contain the implementations of all 
> these functions, and all of their debug info. Then you could provide the 
> unwind info through the usual channels (e.g. .debug_frame), and it has the 
> advantage that you can also include any other information about these 
> functions (names, line numbers, whatever...)

Pavel is right, if these blobs look like function calls with no debug 
information, then lldb won’t stop in them by default.  Note, the “no debug 
info" part is not a strict requirement, since lldb also has controls for 
“shared libraries we never stop in when stepping” and “name regular expressions 
for functions we don’t stop in”.  At present, those controls are just for 
user-specification.  But if you find you need to do something more 
programmatic, you can easily add another “don’t stop in me” check specific to 
your architecture.  

All this will work pretty transparently if the unwinder is able to tell us how 
to get out of the function and back to its caller.  But even if that’s not the 
case, the “should stop here” mechanism in lldb works at a lower level by 
having the agent saying we should NOT stop here return a ThreadPlan telling us 
how to get to the caller frame.  For a function call, you get the step out plan 
for free.  But that’s not a requirement, your emulated instruction region 
doesn’t strictly need to be a function call, provided you know how to produce a 
thread plan that will step out of it.
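The “don’t stop in me” check Jim describes can be sketched in miniature. This
is not lldb’s actual ShouldStopHere API -- the types AddressRange, StepAction,
and ShouldStopAt below are illustrative assumptions -- but it shows the shape
of an architecture-specific check: if the PC landed inside a known
emulated-instruction region, keep driving the thread out instead of stopping:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch only; these names are not lldb's real interfaces.
struct AddressRange {
  uint64_t base, size;
  bool Contains(uint64_t pc) const { return pc >= base && pc < base + size; }
};

enum class StepAction { StopHere, StepOut };

// Decide what stepping should do at this PC: stop for the user, or produce
// a "step out" plan that returns to the code that invoked the emulated
// instruction.
StepAction ShouldStopAt(uint64_t pc,
                        const std::vector<AddressRange> &emulated_regions) {
  for (const auto &r : emulated_regions)
    if (r.Contains(pc))
      return StepAction::StepOut;
  return StepAction::StopHere;
}
```

In a real implementation the StepOut branch would return a ThreadPlan that
unwinds back to the caller frame, as described above.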
 
Jim
  

> 
> pl
> 
> On 15/01/2022 07:49, Kjell Winblad via lldb-dev wrote:
>> Hi!
>> I'm implementing LLDB support for a new processor architecture that
>> the company I'm working for has created. The processor architecture
>> has a few emulated instructions. An emulated instruction works by
>> jumping to a specific address that contains the start of a block of
>> instructions that emulates the emulated instructions. The emulated
>> instructions execute with interrupts turned off to be treated as
>> atomic by the programmer. So an emulated instruction is similar to a
>> function call. However, the address that the instruction jumps to is
>> implicit and not specified by the programmer.
>> I'm facing a problem with the emulated instructions when implementing
>> source-level stepping (the LLDB next and step commands) for C code in
>> LLDB. LLDB uses hardware stepping to step through the address range
>> that makes up a source-level statement. This algorithm works fine
>> until the PC jumps to the start of the block that implements an
>> emulated instruction. Then LLDB stops because the PC exited the
>> address range for the source-level statement. This behavior is not
>> what we want. Instead, LLDB should ideally step through the emulation
>> instructions and continue until the current source-level statement has
>> been completed.
>> My questions are:
>> 1. Is there currently any LLDB plugin functionality or special DWARF
>> debug information to handle the kind of emulated instructions that I
>> have described? All the code for the emulated instructions is within
>> the same address range that does not contain any other code.
>> 2. If the answer to question 1 is no, do you have suggestions for
>> extending LLVM to support this kind of emulated instructions?
>> Best regards,
>> Kjell Winblad
> 



[lldb-dev] Multiple platforms with the same name

2022-01-17 Thread Pavel Labath via lldb-dev

Hello all,

currently our code treats the platform name more-or-less as a unique 
identifier (e.g. Platform::Find returns at most one platform instance 
-- the first one it finds).


This is why I was surprised that the "platform select" CLI command 
always creates a new instance of the given platform, even if the 
platform of a given name already exists. This is because 
Platform::Create does not search the existing platform list before 
creating a new one. This might sound reasonable at first, but for 
example the Platform::Create overload which takes an ArchSpec first 
tries to look for a compatible platform among the existing ones before 
creating a new one.


For this reason, I am tempted to call this a bug and fix the name-taking 
Create overload. This change passes the test suite, except for a single 
test, which now gets confused because some information gets leaked from 
one test to another. (although our coverage of the Platform class in the 
tests is fairly weak)


However, this test got me thinking. It happens to use the SB way of 
manipulating platforms, and "creates" a new instance as 
lldb.SBPlatform("remote-linux"). For this kind of a command, it would be 
reasonable/expected to create a new instance, were it not for the fact 
that this platform would be very tricky to access from the command line, 
and even through some APIs -- SBDebugger::CreateTarget takes a platform 
_name_.


So, which one is it? Should we always have at most one instance of each 
platform, or are multiple instances ok?


cheers,
pl

PS: In case you're wondering about how I ran into this, I was trying to 
create a pre-configured platform instance in (e.g.) an lldbinit file, 
without making it the default. That way it would get automatically 
selected when the user opens an executable of the appropriate type. This 
actually works, *except* for the case when the user selects the platform 
manually. That's because in that case, we would create an 
empty/unpopulated platform, and it would be the one being selected 
because it was the /current/ platform.
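
The two Create semantics under discussion can be illustrated with a toy Python model (this is not LLDB's actual implementation; the class names and the "configured" flag are invented for illustration):

```python
# Toy model contrasting "always create a new platform instance" (the current
# behavior of the name-taking Platform::Create) with "reuse an existing
# instance with the same name" (the proposed fix).

class Platform:
    def __init__(self, name):
        self.name = name
        self.configured = False   # stands in for any per-instance state

class PlatformList:
    def __init__(self):
        self._platforms = []

    def find(self, name):
        # Mirrors Platform::Find: returns at most one instance, the first match.
        for p in self._platforms:
            if p.name == name:
                return p
        return None

    def create_always_new(self, name):
        # Current behavior: no lookup before creating.
        p = Platform(name)
        self._platforms.append(p)
        return p

    def create_reuse(self, name):
        # Proposed fix: consult the existing list first.
        p = self.find(name)
        if p is None:
            p = Platform(name)
            self._platforms.append(p)
        return p

plist = PlatformList()
first = plist.create_reuse("remote-linux")
first.configured = True            # e.g. pre-configured in an lldbinit file
again = plist.create_reuse("remote-linux")
assert again is first and again.configured   # configuration survives

fresh = plist.create_always_new("remote-linux")
assert fresh is not first and not fresh.configured  # the surprising behavior
```

With create_always_new, a manually selected platform is a fresh, unpopulated instance, which is exactly the problem described in the PS above.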

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Source-level stepping with emulated instructions

2022-01-16 Thread Pavel Labath via lldb-dev

Hi Kjell,

if you say these instructions are similar to function calls, then it 
sounds to me like the best option would be to get lldb to treat them 
like function calls. I think (Jim can correct me if I'm wrong) this 
consists of two things:
- make sure lldb recognizes that these instructions can alter control 
flow (Disassembler::GetIndexOfNextBranchInstruction). You may have done 
this already.
- make sure lldb can unwind out of these "functions" when it winds up 
inside them. This will ensure the user does not stop in these functions 
when doing a "step over". This means providing lldb the correct unwind 
info so it knows where the functions will return. (As the functions know 
how to return to the original instructions, this information has to be 
somewhere, and is hopefully accessible to the debugger.) Probably the 
cleanest way to do that would be to create a new Module object, which 
would contain the implementations of all these functions, and all of 
their debug info. Then you could provide the unwind info through the 
usual channels (e.g. .debug_frame), and it has the advantage that you 
can also include any other information about these functions (names, 
line numbers, whatever...)
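
The stepping behavior this would produce can be sketched with a toy model (not LLDB's actual stepping machinery; the address ranges are hypothetical): treat the emulation block like a callee, so range-stepping does not stop while the PC is inside it.

```python
# Toy model: naive range-stepping stops as soon as the PC leaves the
# statement's address range; treating the emulation block as a "function"
# keeps stepping through it until control returns to user code.

STMT_RANGE = range(0x400, 0x410)    # hypothetical: current source statement
EMU_RANGE  = range(0x1000, 0x1100)  # hypothetical: emulated-instruction block

def should_keep_stepping(pc):
    return pc in STMT_RANGE or pc in EMU_RANGE

def run_step(trace):
    """Return the pc at which the stepper reports a stop, or None."""
    for pc in trace:
        if not should_keep_stepping(pc):
            return pc
    return None

# A statement whose second instruction is emulated: the PC jumps into
# EMU_RANGE, returns to the statement, then falls through to the next one.
trace = [0x400, 0x404, 0x1000, 0x1004, 0x10fc, 0x408, 0x40c, 0x410]
assert run_step(trace) == 0x410   # stops at the next statement, as desired
```

In real LLDB the "is the PC in a known function?" question would be answered by the unwind/debug info of the proposed Module, rather than a hard-coded range.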


pl

On 15/01/2022 07:49, Kjell Winblad via lldb-dev wrote:

Hi!

I'm implementing LLDB support for a new processor architecture that
the company I'm working for has created. The processor architecture
has a few emulated instructions. An emulated instruction works by
jumping to a specific address that contains the start of a block of
instructions that emulates it. The emulated
instructions execute with interrupts turned off so that they can be
treated as atomic by the programmer. So an emulated instruction is similar to a
function call. However, the address that the instruction jumps to is
implicit and not specified by the programmer.

I'm facing a problem with the emulated instructions when implementing
source-level stepping (the LLDB next and step commands) for C code in
LLDB. LLDB uses hardware stepping to step through the address range
that makes up a source-level statement. This algorithm works fine
until the PC jumps to the start of the block that implements an
emulated instruction. Then LLDB stops because the PC exited the
address range for the source-level statement. This behavior is not
what we want. Instead, LLDB should ideally step through the emulation
instructions and continue until the current source-level statement has
been completed.

My questions are:

1. Is there currently any LLDB plugin functionality or special DWARF
debug information to handle the kind of emulated instructions that I
have described? All the code for the emulated instructions is within
the same address range that does not contain any other code.
2. If the answer to question 1 is no, do you have suggestions for
extending LLVM to support this kind of emulated instructions?

Best regards,
Kjell Winblad
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2022-01-14 Thread Tom Stellard via lldb-dev

On 12/17/21 13:15, Tom Stellard wrote:

Hi,

Here is a proposal for a new automated workflow for managing parts of the 
release
process.  I've been experimenting with this over the past few releases and
now that we have migrated to GitHub issues, it would be possible for us to
implement this in the main repo.

The workflow is pretty straightforward, but it does use pull requests.  My
idea is to enable pull requests for only this automated workflow and not
for general development (i.e. We would still use Phabricator for code review).
Let me know what you think about this:



Hi,

Thanks for the feedback on this.  I've posted a patch to implement this 
proposal:
https://reviews.llvm.org/D117386.  The only change is that the pull requests 
will
not be in the llvm/llvm-project repo, but will instead be in 
llvmbot/llvm-project.
This will reduce the number of notifications and avoid confusion about whether 
or
not we are using pull requests for this project.

-Tom



# Workflow

* On an existing issue or a newly created issue, a user who wants to backport
one or more commits to the release branch adds a comment:

/cherry-pick  <..>

* This starts a GitHub Action job that attempts to cherry-pick the commit(s)
to the current release branch.

* If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
     * Pushes the result of the cherry-pick to a branch in the
   llvmbot/llvm-project repo called issue<n>, where n is the number of the
   GitHub Issue that launched the Action.

     * Adds this comment on the issue: /branch llvmbot/llvm-project/issue<n>

     * Creates a pull request from llvmbot/llvm-project/issue<n> to
   llvm/llvm-project/release/XX.x

     * Adds a comment on the issue: /pull-request #<n>
   where n is the number of the pull request.

* If the commit(s) can't be cherry-picked cleanly, then the GitHub Action job 
adds
the release:cherry-pick-failed label to the issue and adds a comment:
"Failed to cherry-pick  <..>" along with a link to the failing
Action.

* If a user has manually cherry-picked the fixes, resolved the conflicts, and
pushed the result to a branch on GitHub, they can automatically create a pull
request by adding this comment to an issue: /branch <user>/<repo>/<branch>

* Once a pull request has been created, this launches more GitHub Actions
to run pre-commit tests.

* Once the tests complete successfully and the changes have been approved
by the release manager, the pull request can be merged into the release branch.

* After the pull request is merged, a GitHub Action automatically closes the
associated issue.
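
The comment commands that drive the workflow above could be recognized with a small parser along these lines (a sketch under assumptions: the command names come from the proposal, but the exact tokenization the real GitHub Action uses is not specified here):

```python
# Hypothetical parser for the issue-comment commands described above:
# "/cherry-pick <sha> <sha> ...", "/branch <path>", "/pull-request #<n>".
import re

def parse_command(comment):
    """Return (command, args) for a workflow comment, or None."""
    m = re.match(r"^/(cherry-pick|branch|pull-request)\s*(.*)$", comment.strip())
    if not m:
        return None
    cmd, rest = m.group(1), m.group(2)
    if cmd == "cherry-pick":
        return cmd, rest.split()          # one or more commit hashes
    return cmd, [rest] if rest else []

assert parse_command("/cherry-pick abc123 def456") == ("cherry-pick", ["abc123", "def456"])
assert parse_command("/branch llvmbot/llvm-project/issue123") == \
    ("branch", ["llvmbot/llvm-project/issue123"])
assert parse_command("just a comment") is None
```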

Some Examples:

Cherry-pick success: https://github.com/tstellar/llvm-project/issues/729
Cherry-pick failure: https://github.com/tstellar/llvm-project/issues/730
Manual Branch comment: https://github.com/tstellar/llvm-project/issues/710


# Motivation

Why do this?  The goal is to make the release process more efficient and 
transparent.
With this new workflow, users can get automatic and immediate feedback when a 
commit
they want backported doesn't apply cleanly or introduces some test failures.  
With
the current process, these kinds of issues are communicated by the release 
manager,
and it can be days or even weeks before a problem is discovered and 
communicated back
to the users.

Another advantage of this workflow is it introduces pre-commit CI to the 
release branch,
which is important for the stability of the branch and the releases, but also 
gives
the project an opportunity to experiment with new CI workflows in a way that
does not disrupt development on the main branch.

# Implementation

If this proposal is accepted, I would plan to implement this for the LLVM 14 
release cycle based
on the following proof of concept that I have been testing for the last few 
releases:

https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow.yml
https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow-create-pr.yml
https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-merge-pr.yml

Thanks,
Tom


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Source-level stepping with emulated instructions

2022-01-14 Thread Kjell Winblad via lldb-dev
Hi!

I'm implementing LLDB support for a new processor architecture that
the company I'm working for has created. The processor architecture
has a few emulated instructions. An emulated instruction works by
jumping to a specific address that contains the start of a block of
instructions that emulates it. The emulated
instructions execute with interrupts turned off so that they can be
treated as atomic by the programmer. So an emulated instruction is similar to a
function call. However, the address that the instruction jumps to is
implicit and not specified by the programmer.

I'm facing a problem with the emulated instructions when implementing
source-level stepping (the LLDB next and step commands) for C code in
LLDB. LLDB uses hardware stepping to step through the address range
that makes up a source-level statement. This algorithm works fine
until the PC jumps to the start of the block that implements an
emulated instruction. Then LLDB stops because the PC exited the
address range for the source-level statement. This behavior is not
what we want. Instead, LLDB should ideally step through the emulation
instructions and continue until the current source-level statement has
been completed.

My questions are:

1. Is there currently any LLDB plugin functionality or special DWARF
debug information to handle the kind of emulated instructions that I
have described? All the code for the emulated instructions is within
the same address range that does not contain any other code.
2. If the answer to question 1 is no, do you have suggestions for
extending LLVM to support this kind of emulated instructions?

Best regards,
Kjell Winblad
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [EXTERNAL] LLDB Windows on Arm64 buildbot

2022-01-14 Thread Stella Stamenova via lldb-dev
That's very exciting!

Please let me know if you run into any issues I could help with.

Thanks,
-Stella

From: Omair Javaid 
Sent: Friday, January 14, 2022 2:12 PM
To: mailing list lldb-dev 
Cc: Stella Stamenova 
Subject: [EXTERNAL] LLDB Windows on Arm64 buildbot

Hi,

This is to notify that we are in the process of setting up an LLDB Windows on 
Arm64 buildbot, which will help share the load of maintaining LLDB's Windows 
platform support.

Should you have any queries or suggestions, please feel free to contact us.

Thanks!

--
Omair Javaid
www.linaro.org
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] LLDB Windows on Arm64 buildbot

2022-01-14 Thread Omair Javaid via lldb-dev
Hi,

This is to notify that we are in the process of setting up an LLDB Windows on
Arm64 buildbot, which will help share the load of maintaining LLDB's
Windows platform support.

Should you have any queries or suggestions, please feel free to contact us.

Thanks!

--
Omair Javaid
www.linaro.org
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] problems using EvaluateExpression in lldb, when it creates new object

2022-01-14 Thread fhjiwerfghr fhiewrgfheir via lldb-dev
I'm sorry in advance if this is not the correct mailing list; there doesn't
seem to be an lldb-usage mailing list.

I'm writing a pretty-printer Python script which, to cut to the chase,
pretty-prints members of a class by using EvaluateExpression and creating a
new object inside it. It doesn't seem to work: I'm getting a "" error. Should
my idea work in the first place (and this is a bug), or shouldn't it, and I
need to find a different solution?

I'm attaching a repro case:

clang++ q.cpp -g -o o -std=c++20
lldb o
command script import lldb_script.py
br set --file q.cpp --line 19
r
print c


it prints:
(lldb) print c
(C) $0 = CCC {
   = 
}

it should print something akin to:
(lldb) print c
(C) $0 = CCC {
  b   = B {
a = A {
  id = "qwerty"
}
  }
}
#include 
#include 
#include 

struct A {
std::string_view id() const { return "qwerty"; }
};

struct B {
A a() const { return A(); }
};
struct C {
B b() const { return B(); }
};

int main()
{
C c;
return 0;
}
import lldb.formatters.Logger
import lldb
import logging
import sys
import codecs
import platform
import json

logger = lldb.formatters.Logger.Logger()

logfile = codecs.open('log.txt', 'wb', encoding='utf8')
log = logging.getLogger()
log.setLevel(logging.INFO)
FORMAT = "[%(filename)s:%(lineno)s] %(message)s"
if log.handlers:
log.handlers[0].setFormatter(logging.Formatter(FORMAT))

ch = logging.StreamHandler(logfile)
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter(FORMAT)
ch.setFormatter(formatter)
log.addHandler(ch)

log.error('---')
log.error('starting')

module = sys.modules[__name__]

if sys.version_info[0] == 2:
# python2-based LLDB accepts utf8-encoded ascii strings only.
def to_lldb_str(s): return s.encode(
'utf8', 'backslashreplace') if isinstance(s, unicode) else s
range = xrange
else:
to_lldb_str = str

log = logging.getLogger(__name__)

class V(object):
def __init__(self, v, dict=None):
self.v = v

def EvaluateExpression(self, expr, name=None):
v = self.v.CreateValueFromExpression(name, expr).dynamic
assert v
return V(v)


class Base(V):
regex = False

def get_summary(self):
return ''

def update(self):
pass

def num_children(self):
return 0

def has_children(self):
return False

def get_child_at_index(self, index):
return None

def get_child_index(self, name):
return -1


class cc_A(Base):
type = 'A'
regex = False

def num_children(self):
return 1

def has_children(self):
return True

def get_child_at_index(self, index):
assert index == 0
v = self.EvaluateExpression(f"id()", f'id')
return v.v

class cc_B(Base):
type = 'B'
regex = False

def num_children(self):
return 1

def has_children(self):
return True

def get_child_at_index(self, index):
assert index == 0
v = self.EvaluateExpression(f"a()", f'a')
return v.v

class cc_C(Base):
type = 'C'
regex = False

def get_summary(self):
return 'CCC'

def num_children(self):
return 1

def has_children(self):
return True

def get_child_at_index(self, index):
assert index == 0
return self.EvaluateExpression(f"b()").v

def initialize_category(debugger):
global module, std_category

std_category = debugger.CreateCategory('C++')
std_category.SetEnabled(True)

glob = globals()
todo = []
log.error('initialize_category')
def add(a, b, c, d):
todo.append(lambda: a(b, c, d))
for x, c in glob.items():
if x.startswith('ff_'):
if isinstance(c.type, list):
for t in c.type:
add(attach_summary_to_type, c, t, c.regex)
else:
add(attach_summary_to_type, c, c.type, c.regex)
elif x.startswith('cc_'):
if isinstance(c.type, list):
for t in c.type:
add(attach_synthetic_to_type, c, t, c.regex)
else:
add(attach_synthetic_to_type, c, c.type, c.regex)
for d in todo:
d()

def attach_synthetic_to_type(synth_class, type_name, is_regex=False):
global module, std_category

#log.info('attaching synthetic %s to "%s", is_regex=%s', synth_class.__name__, type_name, is_regex)
synth = lldb.SBTypeSynthetic.CreateWithClassName(
__name__ + '.' + synth_class.__name__)
synth.SetOptions(lldb.eTypeOptionCascade)
std_category.AddTypeSynthetic(
lldb.SBTypeNameSpecifier(type_name, is_regex), synth)

def summary_fn(valobj, dict): return get_synth_summary(synth_class, valobj, dict)
# LLDB accesses summary fn's by name, so we need to create a unique 

Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-13 Thread Jim Ingham via lldb-dev
You are really going to make an lldb_private::CompilerType, since that’s what 
backs the Type & ultimately the SBTypes.  There’s a self-contained example 
where we make a CompilerType to represent the pairs in the synthetic child 
provider for NSDictionaries in the function GetLLDBNSPairType in 
NSDictionary.cpp.  And then you can follow the use of that function to see how 
that gets turned into a Type.

Also, the whole job of the DWARF parser is to make up CompilerTypes out of 
information from external sources, so if you need other examples for how to add 
elements to a CompilerType the DWARF parser is replete with them.

Jim

> On Jan 13, 2022, at 4:03 AM, Michał Górny  wrote:
> 
> On Wed, 2022-01-12 at 11:22 -0800, Jim Ingham wrote:
>> If we can’t always get our hands on the siginfo type, we will have to cons 
>> that type up by hand.  But we would have had to do that if we were 
>> implementing this feature in the expression parser anyway, and we already 
>> hand-make types to hand out in SBValues for a bunch of the synthetic child 
>> providers already, so that’s a well trodden path.
> 
> Could you point me to some example I could base my code on?  ;-)
> 
> -- 
> Best regards,
> Michał Górny
> 

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-13 Thread Michał Górny via lldb-dev
On Wed, 2022-01-12 at 11:22 -0800, Jim Ingham wrote:
> If we can’t always get our hands on the siginfo type, we will have to cons 
> that type up by hand.  But we would have had to do that if we were 
> implementing this feature in the expression parser anyway, and we already 
> hand-make types to hand out in SBValues for a bunch of the synthetic child 
> providers already, so that’s a well trodden path.

Could you point me to some example I could base my code on?  ;-)

-- 
Best regards,
Michał Górny

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-13 Thread Michał Górny via lldb-dev
On Wed, 2022-01-12 at 11:22 -0800, Jim Ingham wrote:
> 
> > On Jan 12, 2022, at 4:28 AM, Pavel Labath  wrote:
> > 
> > I kinda like the cleanliness (of the design, not the implementation) of a 
> > $siginfo variable, but you're right that implementing it would be tricky (I 
> > guess we'd have to write the struct into process memory somewhere and 
> > then read it back when the expression completes).
> > 
> > I don't expect that users will frequently want to modify the siginfo 
> > structure. I think the typical use case would be to inspect the struct 
> > fields (maybe in a script -- we have one user wanting to do that) to 
> > understand more about the nature of the stop/crash.
> > 
> > With that in mind, I don't have a problem with a separate command, but I 
> > don't think that the "platform" subtree is a good fit for this. I mean, I 
> > am sure the (internal) Platform class will be involved in interpreting the 
> > data, but all of the platform _commands_ have something to do with the 
> > system as a whole (moving files around, listing processes, etc.) and not a 
> > specific process. I think this would belong under the "thread" subtree, 
> > since the signal is tied to a specific thread.
> 
> Platform seemed appropriate to me because this is a platform specific 
> feature; some platforms don’t support siginfo at all…. But I’m fine with 
> thread too.
> 
> > 
> > Due to the scripting use case, I am also interested in being able to 
> > inspect the siginfo struct through the SB API -- the expression approach 
> > would (kinda) make that possible, while a brand new command doesn't 
> > (without extra work). So, I started thinking whether this be exposed there. 
> > We already kinda expose the si_signo field via GetStopReasonDataAtIndex(0) 
> > (and it even happens to be the first siginfo field), but I don't think we 
> > would want to expose all fields in that manner.
> > 
> 
> Why not something like:
> 
> SBValue
> SBThread::GetSiginfo();
> 
> That returns an SBValue with the siginfo type and the data filled in from the 
> gdb-remote packet.  If the platform didn’t support this you’d just get an 
> SBValue with the error set saying “not supported” or whatever.
> 
> If you have all the types of the members to hand it’s easy to cons up an 
> SBValue from the data you got from the stub.  An SBValue is exactly what 
> you’d get back from the expression parser anyway, so from the client’s 
> perspective this would be just as good.  And printing the SBValue and doing 
> logic on its members are all well supported.  
> 
> If we can’t always get our hands on the siginfo type, we will have to cons 
> that type up by hand.  But we would have had to do that if we were 
> implementing this feature in the expression parser anyway, and we already 
> hand-make types to hand out in SBValues for a bunch of the synthetic child 
> providers already, so that’s a well trodden path.
> 
> You could even make a ValueObjectSiginfo to back the SBValue you hand out 
> which implements “SetValueFromCString” through the gdb-remote protocol 
> interface, so writing back to the siginfo through this interface would be 
> natural.
> 

Well, it all makes sense to me.  It should also make the implementation
somewhat easier, as I can focus on getting a siginfo_t parser with unit
tests first, and then work on the additional commands.

-- 
Best regards,
Michał Górny

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-13 Thread Michał Górny via lldb-dev
On Wed, 2022-01-12 at 13:28 +0100, Pavel Labath wrote:
> 
> This wouldn't solve the problem of writing to the siginfo struct, but I 
> am not sure if this is a use case Michał is actually trying to solve 
> right now (?) If it is then, maybe this could be done through a separate 
> command, as we currently lack the ability to resume a process/thread 
> with a specific signal ("process signal" does something slightly 
> different). It could either be brand new command, or integrated into the 
> existing process/thread continue commands. (thread continue --signal 
> SIGFOO => "continue with SIGFOO"; thread continue --siginfo $47 => 
> continue with siginfo in $47 ???)

Yeah, writing is not very important to me right now.  I think it's
rather uncommon for people to override siginfo.

-- 
Best regards,
Michał Górny

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] lldb-dev mailing list moving to LLVM Discourse

2022-01-12 Thread Tanya Lattner via lldb-dev
The lldb-dev mailing list will be moved to LLVM Discourse under the “LLDB” 
category (under Subprojects). All archives will be migrated. This list will 
no longer be in use starting February 1, 2022. Please see this blog post for 
all details: https://blog.llvm.org/posts/2022-01-07-moving-to-discourse/ 


If you would like to continue to get notifications regarding LLDB, you must do 
the following:

1) Sign up for an account on LLVM Discourse (you may use your GitHub account):
https://llvm.discourse.group/ 

Note: If you are attempting to sign up after the mailing list archives have 
been migrated to Discourse (Feb 1), you may find that an account has been 
created for the email you used on the LLVM mailing list. If this is the case, 
click “Forgot password” to get access to this account.

2) Sign up for notifications to the "LLDB" category.

Click on the "LLDB" category:

Click on the bell icon to set notifications. You can also modify these in your 
Account->Preferences->Notifications.

Thanks,
Tanya Lattner
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-12 Thread Jim Ingham via lldb-dev


> On Jan 12, 2022, at 4:28 AM, Pavel Labath  wrote:
> 
> I kinda like the cleanliness (of the design, not the implementation) of a 
> $siginfo variable, but you're right that implementing it would be tricky (I 
> guess we'd have to write the struct into process memory somewhere and 
> then read it back when the expression completes).
> 
> I don't expect that users will frequently want to modify the siginfo 
> structure. I think the typical use case would be to inspect the struct fields 
> (maybe in a script -- we have one user wanting to do that) to understand more 
> about the nature of the stop/crash.
> 
> With that in mind, I don't have a problem with a separate command, but I 
> don't think that the "platform" subtree is a good fit for this. I mean, I am 
> sure the (internal) Platform class will be involved in interpreting the data, 
> but all of the platform _commands_ have something to do with the system as a 
> whole (moving files around, listing processes, etc.) and not a specific 
> process. I think this would belong under the "thread" subtree, since the 
> signal is tied to a specific thread.

Platform seemed appropriate to me because this is a platform specific feature; 
some platforms don’t support siginfo at all…. But I’m fine with thread too.

> 
> Due to the scripting use case, I am also interested in being able to inspect 
> the siginfo struct through the SB API -- the expression approach would 
> (kinda) make that possible, while a brand new command doesn't (without extra 
> work). So, I started thinking whether this be exposed there. We already kinda 
> expose the si_signo field via GetStopReasonDataAtIndex(0) (and it even 
> happens to be the first siginfo field), but I don't think we would want to 
> expose all fields in that manner.
> 

Why not something like:

SBValue
SBThread::GetSiginfo();

That returns an SBValue with the siginfo type and the data filled in from the 
gdb-remote packet.  If the platform didn’t support this you’d just get an 
SBValue with the error set saying “not supported” or whatever.

If you have all the types of the members to hand it’s easy to cons up an 
SBValue from the data you got from the stub.  An SBValue is exactly what you’d 
get back from the expression parser anyway, so from the client’s perspective 
this would be just as good.  And printing the SBValue and doing logic on its 
members are all well supported.  

If we can’t always get our hands on the siginfo type, we will have to cons that 
type up by hand.  But we would have had to do that if we were implementing this 
feature in the expression parser anyway, and we already hand-make types to hand 
out in SBValues for a bunch of the synthetic child providers already, so that’s 
a well trodden path.

You could even make a ValueObjectSiginfo to back the SBValue you hand out which 
implements “SetValueFromCString” through the gdb-remote protocol interface, so 
writing back to the siginfo through this interface would be natural.
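
As a rough illustration of the data such an SBValue would be backed by, here is a hedged sketch of decoding the leading fields of a raw siginfo_t blob as a stub might return it over the gdb-remote protocol. Assumption: on most Linux targets the struct begins with three ints (si_signo, si_errno, si_code); some architectures order these differently, and the trailing union is signal-dependent, so it is left raw here.

```python
# Sketch: decode the leading fields of a raw Linux siginfo_t blob.
import struct

def decode_siginfo_prefix(data, byte_order="<"):
    """Decode the three leading int fields of a raw siginfo_t blob."""
    signo, errno, code = struct.unpack_from(byte_order + "iii", data)
    return {"si_signo": signo, "si_errno": errno, "si_code": code}

# Fabricated example blob: SIGSEGV (11), errno 0, si_code 1, plus raw union bytes.
blob = struct.pack("<iii", 11, 0, 1) + b"\x00" * 116
assert decode_siginfo_prefix(blob) == {"si_signo": 11, "si_errno": 0, "si_code": 1}
```

A ValueObjectSiginfo as proposed above would instead hand these bytes to a hand-built CompilerType, so the user sees named fields rather than a dict.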

Jim


> This then led me to SBThread::GetStopReasonExtendedInfoAsJSON. What is this 
> meant to contain? Could we put the signal info there? If yes, then the 
> natural command-line way of retrieving this would be the "thread info" 
> command, and we would not need to add any new commands.
> 
> This wouldn't solve the problem of writing to the siginfo struct, but I am 
> not sure if this is a use case Michał is actually trying to solve right now 
> (?) If it is then, maybe this could be done through a separate command, as we 
> currently lack the ability to resume a process/thread with a specific signal 
> ("process signal" does something slightly different). It could either be 
> brand new command, or integrated into the existing process/thread continue 
> commands. (thread continue --signal SIGFOO => "continue with SIGFOO"; thread 
> continue --siginfo $47 => continue with siginfo in $47 ???)
> 
> pl
> 
> On 12/01/2022 01:07, Jim Ingham via lldb-dev wrote:
>> I would not do this with the expression parser.
>> First off, the expression parser doesn’t know how to do anything but JIT code 
>> that will run directly in the target.  So if:
>> (lldb) expr $siginfo.some_field = 10
>> doesn’t resolve to some $siginfo structure in real memory with a real type 
>> such that clang can calculate the offset of the field “some_field” and write 
>> to it to make the change, then this wouldn’t be a natural fit in the current 
>> expression parser.  I’m guessing this is not the case, since you fetch this 
>> field through ptrace calls in the stub.
>> And the expression parser is enough of a beast already that we don’t want to 
>> add complexity to it without good reason.
>> We also don’t have any other instances of lldb injected $v

Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-12 Thread Pavel Labath via lldb-dev
I kinda like the cleanliness (of the design, not the implementation) of 
a $siginfo variable, but you're right that implementing it would be 
tricky (I guess we'd have to write the struct into process memory 
somewhere and then read it back when the expression completes).


I don't expect that users will frequently want to modify the siginfo 
structure. I think the typical use case would be to inspect the struct 
fields (maybe in a script -- we have one user wanting to do that) to 
understand more about the nature of the stop/crash.


With that in mind, I don't have a problem with a separate command, but I 
don't think that the "platform" subtree is a good fit for this. I mean, 
I am sure the (internal) Platform class will be involved in interpreting 
the data, but all of the platform _commands_ have something to do with 
the system as a whole (moving files around, listing processes, etc.) and 
not a specific process. I think this would belong under the "thread" 
subtree, since the signal is tied to a specific thread.


Due to the scripting use case, I am also interested in being able to 
inspect the siginfo struct through the SB API -- the expression approach 
would (kinda) make that possible, while a brand new command doesn't 
(without extra work). So, I started thinking whether this be exposed 
there. We already kinda expose the si_signo field via 
GetStopReasonDataAtIndex(0) (and it even happens to be the first siginfo 
field), but I don't think we would want to expose all fields in that manner.


This then led me to SBThread::GetStopReasonExtendedInfoAsJSON. What is 
this meant to contain? Could we put the signal info there? If yes, then 
the natural command-line way of retrieving this would be the "thread 
info" command, and we would not need to add any new commands.


This wouldn't solve the problem of writing to the siginfo struct, but I 
am not sure if this is a use case Michał is actually trying to solve 
right now (?) If it is then, maybe this could be done through a separate 
command, as we currently lack the ability to resume a process/thread 
with a specific signal ("process signal" does something slightly 
different). It could either be brand new command, or integrated into the 
existing process/thread continue commands. (thread continue --signal 
SIGFOO => "continue with SIGFOO"; thread continue --siginfo $47 => 
continue with siginfo in $47 ???)


pl

On 12/01/2022 01:07, Jim Ingham via lldb-dev wrote:

I would not do this with the expression parser.

First off, the expression parser doesn’t know how to do anything but JIT code 
that will run directly in the target.  So if:

(lldb) expr $siginfo.some_field = 10

doesn’t resolve to some $siginfo structure in real memory with a real type such 
that clang can calculate the offset of the field “some_field” and write to it 
to make the change, then this wouldn’t be a natural fit in the current 
expression parser.  I’m guessing this is not the case, since you fetch this 
field through ptrace calls in the stub.

And the expression parser is enough of a beast already that we don’t want to 
add complexity to it without good reason.

We also don’t have any other instances of lldb-injected $variables that we use 
for various purposes.  I’m not in favor of introducing them as they end up 
being pretty undiscoverable….

Why not something like:

(lldb) platform siginfo read [--field field_name]

Without the field name it would print the full siginfo, or you can list fields 
one by one with the --field argument.

And then make the write a raw command like:

(lldb) platform siginfo write --field name expression

The platform is a natural place for this, it is the agent that knows about all 
the details of the system your target is running on, so it would know what 
access you have to siginfo for the target system.

Having the write command use expressions to produce the new value for the 
field would get you most of the value of introducing a virtual variable into 
the expression parser, since:

(lldb) pl si w -f some_field 

is the same as you would get with the proposed $siginfo:

(lldb) expr $siginfo.some_field = 
  
You could also implement the write command as a raw command like:


(lldb) platform siginfo write --field some_field 

Which has the up side that people wouldn’t need to quote their expressions, but 
the down side that you could only change one field at a time.

This would also mean “apropos siginfo” would turn up the commands, as would a 
casual scan through the command tree.  So the feature would be pretty 
discoverable.

The only things this would make inconvenient are if you wanted to pass the 
value of a siginfo field to some function call or do something like:

$siginfo.some_field += 5

These don’t seem very common operations, and if you needed to, you could always 
do this with scripting, since the result from “platform siginfo read --field 
name” would be the value, so you could write a little script to grab the value 
and insert it into the desired expression and run that.

Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-11 Thread Jim Ingham via lldb-dev
This sentence makes no sense, it was a remnant of a previous draft, which 
included the option to do:

(lldb) platform write --field name --value expression --field other_name --value 
other_expression

But that would require people to quote their expressions to get them past the 
command parser, which seems more annoying than having to set fields one by one 
would be.

Jim


> On Jan 11, 2022, at 4:07 PM, Jim Ingham  wrote:
> 
> Which has the up side that people wouldn’t need to quote their expressions, 
> but the down side that you could only change one field at a time.

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-11 Thread Jim Ingham via lldb-dev
I would not do this with the expression parser.

First off, the expression parser doesn’t know how to do anything except JIT code that 
will run directly in the target.  So if:

(lldb) expr $siginfo.some_field = 10

doesn’t resolve to some $siginfo structure in real memory with a real type such 
that clang can calculate the offset of the field “some_field” and write to it 
to make the change, then this wouldn’t be a natural fit in the current 
expression parser.  I’m guessing this is not the case, since you fetch this 
field through ptrace calls in the stub.  

And the expression parser is enough of a beast already that we don’t want to 
add complexity to it without good reason.

We also don’t have any other instances of lldb injected $variables that we use 
for various purposes.  I’m not in favor of introducing them as they end up 
being pretty undiscoverable….

Why not something like:

(lldb) platform siginfo read [--field field_name]

Without the field name it would print the full siginfo, or you can list fields 
one by one with the --field argument.

And then make the write a raw command like:

(lldb) platform siginfo write --field name expression

The platform is a natural place for this, it is the agent that knows about all 
the details of the system your target is running on, so it would know what 
access you have to siginfo for the target system.

Having the write command use expressions to produce the new value for the 
field would get you most of the value of introducing a virtual variable into 
the expression parser, since:

(lldb) pl si w -f some_field 

is the same as you would get with the proposed $siginfo:

(lldb) expr $siginfo.some_field = 
 
You could also implement the write command as a raw command like:

(lldb) platform siginfo write --field some_field 

Which has the up side that people wouldn’t need to quote their expressions, but 
the down side that you could only change one field at a time.

This would also mean “apropos siginfo” would turn up the commands, as would a 
casual scan through the command tree.  So the feature would be pretty 
discoverable.

The only things this would make inconvenient are if you wanted to pass the 
value of a siginfo field to some function call or do something like:

$siginfo.some_field += 5

These don’t seem very common operations, and if you needed to, you could always 
do this with scripting, since the result from “platform siginfo read --field 
name” would be the value, so you could write a little script to grab the value 
and insert it into the desired expression and run that.
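Such a little script would boil down to driving two commands through the script interpreter (e.g. via SBCommandInterpreter.HandleCommand). As a sketch that only builds the command strings, since both subcommand spellings come from this thread's proposal and do not exist in LLDB today:

```python
def make_commands(field, new_value):
    """Build the command strings for the proposed 'platform siginfo'
    read/write subcommands. Both spellings are hypothetical: they come
    from this thread's proposal and do not exist in LLDB today."""
    read_cmd = "platform siginfo read --field %s" % field
    write_cmd = "platform siginfo write --field %s %s" % (field, new_value)
    return read_cmd, write_cmd

read_cmd, write_cmd = make_commands("si_signo", "11")
print(read_cmd)   # platform siginfo read --field si_signo
print(write_cmd)  # platform siginfo write --field si_signo 11
```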

One of the advantages of having a command tree that has a few top-level nodes 
and organizes the commands under them is that adding new commands doesn’t 
clutter up the top level of the command tree, and so you should feel free to 
add new commands where they fit into the hierarchy rather than trying to slide 
them into some other command, like using the expression parser.

Jim

 

> On Jan 11, 2022, at 7:48 AM, Ted Woodward via lldb-dev 
>  wrote:
> 
> 
> You should use Hg for this instead of Hc. Hc is used for step/continue, while 
> Hg is used for everything else.
> 
> 
>> -Original Message-
>> From: lldb-dev  On Behalf Of Michal Górny
>> via lldb-dev
>> Sent: Tuesday, January 11, 2022 6:38 AM
>> To: lldb-dev@lists.llvm.org
>> Subject: [lldb-dev] RFC: siginfo reading/writing support
>> 
>> Hello,
>> 
>> TL;DR: I'd like to implement at least partial support for reading/writing 
>> siginfo
>> via LLDB.  I can't think of a better approach than copying GDB's idea of a
>> "magical" $_siginfo variable that works through the expression evaluator.  
>> I'd
>> like to know your opinion/ideas.
>> 
>> 
>> POSIX defines a siginfo_t structure that is used to pass additional signal
>> information -- such as more detailed signal code, faulting memory address in
>> case of SIGSEGV or PID of the child process in case of SIGCHLD.  LLDB already
>> uses ptrace(2) to obtain this information and uses it internally, but it 
>> doesn't
>> expose it to the user.
>> 
>> The GDB Remote Serial protocol provides the ability to read/write siginfo via
>> qXfer:siginfo:... packets [1].  GDB exposes this information to the user via 
>> a
>> special $_siginfo variable [2].
>> 
>> A few things to note:
>> 
>> 1. Some targets (e.g. Linux, NetBSD) support overwriting siginfo, some (e.g.
>> FreeBSD) only reading.
>> 
>> 2. Siginfo is generally associated with a single thread, so the packets 
>> should
>> be combined with respective thread selection (Hg or Hc?).
>> 
>> 3. The exact type of siginfo_t differs per platform (POSIX specifies a 
>> minimal
>> subset).
>> 
>> 
>> My rough idea right now is to follow GDB

Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-11 Thread Michał Górny via lldb-dev
On Tue, 2022-01-11 at 15:48 +, Ted Woodward wrote:
> You should use Hg for this instead of Hc. Hc is used for step/continue, while 
> Hg is used for everything else.
> 

Thanks for the explanation.

-- 
Best regards,
Michał Górny

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: siginfo reading/writing support

2022-01-11 Thread Ted Woodward via lldb-dev

You should use Hg for this instead of Hc. Hc is used for step/continue, while 
Hg is used for everything else.


> -Original Message-
> From: lldb-dev  On Behalf Of Michal Górny
> via lldb-dev
> Sent: Tuesday, January 11, 2022 6:38 AM
> To: lldb-dev@lists.llvm.org
> Subject: [lldb-dev] RFC: siginfo reading/writing support
> 
> Hello,
> 
> TL;DR: I'd like to implement at least partial support for reading/writing 
> siginfo
> via LLDB.  I can't think of a better approach than copying GDB's idea of a
> "magical" $_siginfo variable that works through the expression evaluator.  I'd
> like to know your opinion/ideas.
> 
> 
> POSIX defines a siginfo_t structure that is used to pass additional signal
> information -- such as more detailed signal code, faulting memory address in
> case of SIGSEGV or PID of the child process in case of SIGCHLD.  LLDB already
> uses ptrace(2) to obtain this information and uses it internally, but it doesn't
> expose it to the user.
> 
> The GDB Remote Serial protocol provides the ability to read/write siginfo via
> qXfer:siginfo:... packets [1].  GDB exposes this information to the user via a
> special $_siginfo variable [2].
> 
> A few things to note:
> 
> 1. Some targets (e.g. Linux, NetBSD) support overwriting siginfo, some (e.g.
> FreeBSD) only reading.
> 
> 2. Siginfo is generally associated with a single thread, so the packets should
> be combined with respective thread selection (Hg or Hc?).
> 
> 3. The exact type of siginfo_t differs per platform (POSIX specifies a minimal
> subset).
> 
> 
> My rough idea right now is to follow GDB here.  While using "$_siginfo"
> may seem hacky, it has the nice advantage that it can easily support all
> different siginfo_t structures used by various platforms.
> 
> The plan would be to:
> 
> 1. Implement the qXfer:siginfo:... packets in lldb-server, and add tests to
> them.
> 
> 2. Implement support for "$_siginfo" in the client (I suppose this means
> hacking on expression evaluator).
> 
> 3. (Optionally) implement hardcoded siginfo_t definitions for common
> platforms to make things work without debug info.
> 
> WDYT?
> 
> 
> [1]
> https://www.sourceware.org/gdb/onlinedocs/gdb/General-Query-
> Packets.html#qXfer-siginfo-read
> [2] https://sourceware.org/gdb/current/onlinedocs/gdb.html#Signals
> 
> 
> --
> Best regards,
> Michał Górny
> 
> 
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] RFC: siginfo reading/writing support

2022-01-11 Thread Michał Górny via lldb-dev

Hello,

TL;DR: I'd like to implement at least partial support for
reading/writing siginfo via LLDB.  I can't think of a better approach
than copying GDB's idea of a "magical" $_siginfo variable that works
through the expression evaluator.  I'd like to know your opinion/ideas.


POSIX defines a siginfo_t structure that is used to pass additional
signal information -- such as more detailed signal code, faulting memory
address in case of SIGSEGV or PID of the child process in case of
SIGCHLD.  LLDB already uses ptrace(2) to obtain this information and uses
it internally, but it doesn't expose it to the user.

The GDB Remote Serial protocol provides the ability to read/write
siginfo via qXfer:siginfo:... packets [1].  GDB exposes this information
to the user via a special $_siginfo variable [2].
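The packet formats involved are simple enough to sketch. Per the GDB Remote Serial Protocol, the read request is `qXfer:siginfo:read::OFFSET,LENGTH` (empty annex, hex fields), and the reply is the payload prefixed with `l` (last chunk) or `m` (more data follows). A minimal sketch of the client side, ignoring RSP binary escaping for brevity:

```python
def qxfer_siginfo_read(offset, length):
    # Request format per the GDB Remote Serial Protocol:
    # qXfer:siginfo:read::OFFSET,LENGTH (annex is empty; fields in hex).
    return "qXfer:siginfo:read::%x,%x" % (offset, length)

def parse_qxfer_reply(reply):
    # 'l' marks the final chunk, 'm' means more data remains; the rest
    # of the packet is the raw payload (RSP escaping omitted here).
    if not reply or reply[0] not in "lm":
        raise ValueError("malformed qXfer reply")
    return reply[0] == "l", reply[1:].encode("latin-1")

print(qxfer_siginfo_read(0, 0x80))  # qXfer:siginfo:read::0,80
```

An lldb-server implementation would answer these from the siginfo it already fetches via ptrace, rather than from a synthetic string as here.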

A few things to note:

1. Some targets (e.g. Linux, NetBSD) support overwriting siginfo, some
(e.g. FreeBSD) only reading.

2. Siginfo is generally associated with a single thread, so the packets
should be combined with respective thread selection (Hg or Hc?).

3. The exact type of siginfo_t differs per platform (POSIX specifies
a minimal subset).


My rough idea right now is to follow GDB here.  While using "$_siginfo"
may seem hacky, it has the nice advantage that it can easily support all
different siginfo_t structures used by various platforms.

The plan would be to:

1. Implement the qXfer:siginfo:... packets in lldb-server, and add tests
to them.

2. Implement support for "$_siginfo" in the client (I suppose this means
hacking on expression evaluator).

3. (Optionally) implement hardcoded siginfo_t definitions for common
platforms to make things work without debug info.
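The "hardcoded definitions" in point 3 would amount to describing siginfo_t layouts per platform. A rough sketch of the idea in Python's ctypes, covering only the three leading members and assuming the Linux layout for most architectures (the real siginfo_t continues with a large union, and some targets, e.g. Linux/MIPS, order si_code and si_errno differently):

```python
import ctypes
import struct

class SigInfoCommon(ctypes.Structure):
    # The three leading siginfo_t members POSIX requires, laid out as on
    # Linux for most architectures. Sketch only: the real struct
    # continues with a large union, and e.g. Linux/MIPS swaps the
    # si_errno/si_code order.
    _fields_ = [
        ("si_signo", ctypes.c_int),
        ("si_errno", ctypes.c_int),
        ("si_code", ctypes.c_int),
    ]

# Pretend these 12 bytes came back from a qXfer:siginfo:read transfer:
# SIGSEGV (11), errno 0, code SEGV_MAPERR (1) -- Linux values.
raw = struct.pack("iii", 11, 0, 1)
info = SigInfoCommon.from_buffer_copy(raw)
print(info.si_signo, info.si_code)  # 11 1
```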

WDYT?


[1]
https://www.sourceware.org/gdb/onlinedocs/gdb/General-Query-Packets.html#qXfer-siginfo-read
[2] https://sourceware.org/gdb/current/onlinedocs/gdb.html#Signals


-- 
Best regards,
Michał Górny


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] Release 13.0.1-rc1 has been tagged

2022-01-08 Thread Tom Stellard via lldb-dev

On 1/1/22 12:54, Sean McBride wrote:

On 30 Nov 2021, at 1:07, Tom Stellard via cfe-dev wrote:


Testers can begin testing and uploading binaries.


It's been over a month since 13.0.1-rc1 and, as has been the case for many 
previous releases, there are no macOS binaries.  Any chance we'll see some?



This has been uploaded now.

-Tom


Thanks,

Sean



___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] Subscribing to GitHub issue labels

2022-01-05 Thread Fāng-ruì Sòng via lldb-dev
On Wed, Jan 5, 2022 at 8:56 PM Mehdi AMINI via cfe-dev
 wrote:
>
>
>
> On Wed, Jan 5, 2022 at 2:28 PM Tom Stellard via llvm-dev 
>  wrote:
>>
>> Hi,
>>
>> We have a system now for subscribing to GitHub issue labels.  If you
>> want to subscribe to a label, request membership in the
>> issue-subscribers-$LABEL_NAME team from https://github.com/orgs/llvm/teams.
>
>
> That's awesome! Thanks :)
> I really missed this after the transition.
>
> I still haven't figured how to request membership though? Scrubbing through 
> GitHub documentation I only found how an admin can add members but not how to 
> request as an aspiring member?
>
>>
>>
>> If the team does not exist yet, file an issue and assign it to me and I will
>> create the team.
>>
>> I would also like to document these steps somewhere.  Where is the best 
>> place to
>> do this?  The Developer Policy?
>
>
> Did we have any doc about BugZilla?
>
>>
>>
>> - Tom

My process for now:

* Ensure the component is available on
https://github.com/llvm/llvm-project/issues/labels
* Visit https://github.com/orgs/llvm/teams/ . If the relevant
issue-subscribers-* already exists, finish
* Otherwise, search
https://github.com/llvm/llvm-project/issues?q=is%3Aissue+is%3Aopen+%22Create+team%22+
to see whether a feature request already exists.
* Otherwise, click "New issue" and create a request like an existing
one (e.g. https://github.com/llvm/llvm-project/issues/53028 thanks to
keith for making the first request!)
* After Tom has created the team, visit
https://github.com/orgs/llvm/teams/issue-subscribers-lld-wasm/ (this
URL is available on https://github.com/orgs/llvm/teams)
* Click the "Members" tab
(https://github.com/orgs/llvm/teams/issue-subscribers-lld-wasm/members)
* Click the "Request to join" button
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Subscribing to GitHub issue labels

2022-01-05 Thread Tom Stellard via lldb-dev

Hi,

We have a system now for subscribing to GitHub issue labels.  If you
want to subscribe to a label, request membership in the
issue-subscribers-$LABEL_NAME team from https://github.com/orgs/llvm/teams.

If the team does not exist yet, file an issue and assign it to me and I will
create the team.

I would also like to document these steps somewhere.  Where is the best place to
do this?  The Developer Policy?

- Tom

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: How to handle non-address bits in the output of "memory read"

2022-01-04 Thread Stephen Hines via lldb-dev
Hi David,

Sending this message again as I'm now back from vacation (and I was finally
able to subscribe to lldb-dev - non-subscribers are prevented from sending
email to it).

On Fri, Dec 10, 2021 at 1:56 AM David Spickett 
wrote:

> (Peter and Stephen on CC since you've previously asked about this sort of
> thing)
>
> This relates to https://reviews.llvm.org/D103626 and other recent
> patches about non-address bits.
>
> On AArch64 we've got a few extensions that use "non address bits".
> These are bits beyond the (in most cases) 48 bit virtual address size.
> Currently we have pointer authentication (armv8.3), memory tagging
> (armv8.5) and top byte ignore (a feature of armv8.0-a).
>
> This means we need to know about these bits when doing some
> operations. One such time is when passing addresses to memory read.
> Consider two pointers to the same location where the first one has a
> greater memory tag (bits 56-60) than the second. This is what happens
> if we don't remove the non-address bits:
> (lldb) memory read mte_buf_alt_tag mte_buf+16
> error: end address (0x900f7ff8010) must be greater than the start
> address (0xa00f7ff8000).
>
> A pure number comparison is going to think that end < begin address.
> If we use the ABI plugin's FixDataAddress we can remove those bits and
> read normally.
>
> With one caveat. The output will not include those non address bits
> unless we make special effort to do so, here's an example:
> (lldb) p ptr1
> (char *) $4 = 0x3400f140 "\x80\xf1\xff\xff\xff\xff"
> (lldb) p ptr2
> (char *) $5 = 0x5600f140 "\x80\xf1\xff\xff\xff\xff"
> (lldb) memory read ptr1 ptr2+16
> 0xf140: 80 f1 ff ff ff ff 00 00 38 70 bc f7 ff ff 00 00
> 8p..
>
> My current opinion is that in this case the output should not include
> the non address bits:
> * The actual memory being read is not at the virtual address the raw
> pointer value gives.
> * Many, if not all, non address bits cannot be incremented as the
> memory address we're showing is incremented. (not in a way that makes
> sense if you think about how the core interprets them)
>
>
I agree that the printed addresses should not include any of the ignored
top byte, because lldb is displaying what's at the actual virtual address
now, and not how we got there (i.e. the pointer).
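The fix described in the quoted proposal amounts to masking the pointer down to the architecture's virtual-address bits, which is what the ABI plugin's FixDataAddress does conceptually. A minimal sketch, assuming the common 48-bit AArch64 virtual address size (real configurations can use other widths, and the example pointer values are invented):

```python
# Strip AArch64 non-address bits (MTE tags, PAC bits, top-byte-ignore
# data) so address comparisons behave. Assumes 48-bit virtual addresses.
ADDRESS_BITS = 48
ADDRESS_MASK = (1 << ADDRESS_BITS) - 1

def fix_address(ptr):
    return ptr & ADDRESS_MASK

start = 0x5600FFFFF7FF8140  # tag 0x56 in the top byte (invented value)
end = 0x3400FFFFF7FF8150    # tag 0x34, 16 bytes further on
# A raw comparison would claim end < start; after stripping it is sane:
print(fix_address(end) > fix_address(start))  # True
```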


> For example once you get into the next memory granule, the memory tag
> attached to it in hardware may be different. (and FWIW I have a series
> to show the actual memory tags https://reviews.llvm.org/D107140)
> You could perhaps argue that if the program itself used that pointer,
> it would use those non address bits as well so show the user *how* it
> would access the memory. However I don't think that justifies
> complicating the implementation and output.
>
> So what do people think of that direction? I've thought about this for
> too long before asking for feedback, so I'm definitely missing some of
> the wood for the trees.
>
> Input/bug reports/complaints from anyone who (unlike me) has debugged
> a large program that uses these non-address features is most welcome!
>
> Thanks,
> David Spickett.
>

We have a customer who is encountering issues with this in LLDB today, so I
asked them to comment on this thread (but I'm not sure if they will). The
current behavior prevents them from using LLDB with their core dumps.

Thanks,
Steve
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-24 Thread Renato Golin via lldb-dev
Ah, awesome, thanks!

On Fri, 24 Dec 2021, 03:11 Tom Stellard,  wrote:

> On 12/23/21 09:53, Renato Golin wrote:
> > On Fri, 17 Dec 2021 at 21:15, Tom Stellard via llvm-dev
> > <llvm-...@lists.llvm.org> wrote:
> >
> > * On an existing issue or a newly created issue, a user who wants to
> backport
> > one or more commits to the release branch adds a comment:
> >
> > /cherry-pick  <..>
> >
> >
> > Hi Tom,
> >
> > Would this be *any* user or users with certain permissions in the repo
> (like code owners, release managers)?
> >
>
> Any user can do this.
>
> > Ignoring malicious action, *any* user creating a cherry-pick at any
> time, may create confusion if two users are trying to pick changes that
> need multiple (non-sequential) commits each.
> >
> > An alternative would be to build a branch off the release branch (ex.
> "release-x.y.z-$username") and pick the commits on that branch, run the
> pre-commit tests, and then merge to the release branch if it's all green.
> >
>
> This is actually how it works.  The cherry-picked commits get
> pushed to a branch called issue and the pull request is created
> off of that branch.
>
> -Tom
>
> > Because the merge is atomic, and the tests passed on the alternative
> branch, the probability of the release branch breaking is lower.
> >
> > Of course, interaction between the users' branches can still break, and
> well, further tests that are not present in the pre-commit tests, can also.
> >
> > But with atomic merges of cherry-picks in a linear sequence will also
> make it easier to bisect in case anything goes wrong with the release
> candidate.
> >
> > If only a subset of users can merge, then they'd do one at a time and
> this problem wouldn't be a big issue and we'd avoid a complicated
> infrastructure setup.
> >
> > Does that make sense?
> >
> > cheers,
> > --renato
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-23 Thread Tom Stellard via lldb-dev

On 12/23/21 09:53, Renato Golin wrote:

On Fri, 17 Dec 2021 at 21:15, Tom Stellard via llvm-dev 
<llvm-...@lists.llvm.org> wrote:

* On an existing issue or a newly created issue, a user who wants to 
backport
one or more commits to the release branch adds a comment:

/cherry-pick  <..>


Hi Tom,

Would this be *any* user or users with certain permissions in the repo (like 
code owners, release managers)?



Any user can do this.


Ignoring malicious action, *any* user creating a cherry-pick at any time, may 
create confusion if two users are trying to pick changes that need multiple 
(non-sequential) commits each.

An alternative would be to build a branch off the release branch (ex. 
"release-x.y.z-$username") and pick the commits on that branch, run the 
pre-commit tests, and then merge to the release branch if it's all green.



This is actually how it works.  The cherry-picked commits get
pushed to a branch called issue and the pull request is created
off of that branch.

-Tom


Because the merge is atomic, and the tests passed on the alternative branch, 
the probability of the release branch breaking is lower.

Of course, interaction between the users' branches can still break, and well, 
further tests that are not present in the pre-commit tests, can also.

But with atomic merges of cherry-picks in a linear sequence will also make it 
easier to bisect in case anything goes wrong with the release candidate.

If only a subset of users can merge, then they'd do one at a time and this 
problem wouldn't be a big issue and we'd avoid a complicated infrastructure 
setup.

Does that make sense?

cheers,
--renato


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-23 Thread Renato Golin via lldb-dev
On Fri, 17 Dec 2021 at 21:15, Tom Stellard via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> * On an existing issue or a newly created issue, a user who wants to
> backport
> one or more commits to the release branch adds a comment:
>
> /cherry-pick  <..>
>

Hi Tom,

Would this be *any* user or users with certain permissions in the repo
(like code owners, release managers)?

Ignoring malicious action, *any* user creating a cherry-pick at any time,
may create confusion if two users are trying to pick changes that need
multiple (non-sequential) commits each.

An alternative would be to build a branch off the release branch (ex.
"release-x.y.z-$username") and pick the commits on that branch, run the
pre-commit tests, and then merge to the release branch if it's all green.

Because the merge is atomic, and the tests passed on the alternative
branch, the probability of the release branch breaking is lower.

Of course, interaction between the users' branches can still break, and
well, further tests that are not present in the pre-commit tests, can also.

But with atomic merges of cherry-picks in a linear sequence will also make
it easier to bisect in case anything goes wrong with the release candidate.

If only a subset of users can merge, then they'd do one at a time and this
problem wouldn't be a big issue and we'd avoid a complicated infrastructure
setup.

Does that make sense?

cheers,
--renato
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Are any LLVM components affected by the recent log4j issues?

2021-12-22 Thread Kristof Beyls via lldb-dev
As one of the points of contact at CERT for LLVM, I've received messages
from CERT asking if LLVM is affected by any of the recent log4j
vulnerabilities: *CVE-2021-45105*, *CVE-2021-4104*, *CVE-2021-45046* and
*CVE-2021-44228*. It seems CERT is reaching out to every single vendor
registered with them about these vulnerabilities.

As far as I know no LLVM sub-project uses Java, so LLVM should not be
vulnerable to any of the log4j issues.
Before I go ahead and record in the CERT database that LLVM is not
affected, I thought I'd just double check if anyone is aware of any use of
Java in LLVM and/or any potential way LLVM could be affected by the recent
log4j issues?

Thanks,

Kristof
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-20 Thread Tom Stellard via lldb-dev

On 12/20/21 18:21, Mehdi AMINI wrote:



On Mon, Dec 20, 2021 at 3:24 PM Tom Stellard <tstel...@redhat.com> wrote:

On 12/20/21 09:16, Tom Stellard wrote:
 > On 12/18/21 15:04, David Blaikie wrote:
 >>
 >>
 >> On Fri, Dec 17, 2021 at 6:38 PM Tom Stellard <tstel...@redhat.com> wrote:
 >>
 >>     On 12/17/21 16:47, David Blaikie wrote:
 >>  > Sounds pretty good to me - wouldn't mind knowing more about/a 
good summary of the effects of this on project/repo/etc notifications that Mehdi's 
mentioning. (be good to have a write up of the expected impact/options to then discuss - 
from the thread so far I understand some general/high level concerns, but it's not clear 
to me exactly how it plays out)
 >>  >
 >>
 >>     The impact is really going to depend on the person and what 
notification preferences they
 >>     have/want.  If you are already watching the repo with the default 
settings, then you probably
 >>     won't notice much of a difference given the current volume of 
notifications.
 >>
 >>
 >> I think I'm on the default settings - which does currently mean a notification for every issue update, which is a 
lot. Given that llvm-b...@email.llvm.org has been re-enabled, sending mail only on issue creation, I & others might opt 
back in to that behavior by disabling the baseline "notify on everything" to "notify only on issues I'm 
mentioned in".
 >>
 >> I guess currently the only email that github is generating is one email 
per issue update. We don't have any pull requests, so there aren't any emails for 
that, yeah?
 >>
 >> So this new strategy might add a few more back-and-forth on each cherrypick 
issue (for those using llvm-bugs & disabling general issue notifications, this will 
not be relevant to them - there won't be more issues created, just more comments on 
existing issues). But there will be some more emails generated related to the pull 
requests themselves, I guess? So each cherrypick goes from 2 emails to llvm-bugs (the 
issue creation and closure) to, how many? 4 (2 for llvm-bugs and I guess at least 2 for 
the pull request - one to make the request and one to close it - maybe a couple more 
status ones along the way?)
 >>
 >
 > I think the number of net new comments on issues will be very minimal or 
none at all.  The
 > automated comments that are created by this process are replacing 
comments that I'm already making
 > manually.
 >
 > So 2+ for pull requests is probably a good estimate.  I still need to 
figure out how many notifications
 > get generated for Actions with the default settings.
 >

I did some research on the notifications and here is what I came up with:

  From what I can tell, notifications for actions are only sent to the
user that initiated the event that led to the actions, so there would
be no global notifications sent for the actions used by this workflow.

There have been 131 bugs marked as release blockers in the llvm-13 cycle,
this includes the 13.0.0 and 13.0.1 release.  In the best case scenario,
this proposal would generate 2 additional notifications per issue
(1 for creating a pull request and 1 for merging it), and 0 net new
issue comments (the automated comments just replace manual comments).

If you assume that no manual comments would be replaced by the automation,
then in the typical use case there would be a maximum of  4 notifications
generated from issues (/cherry-pick comment, cherry-pick failed comment,
/branch comment, /pull-request comment). In addition to the 2 pull
request notifications.

Based on this, my estimate is that this proposal will produce between
(2 * 131) = 262 and (6 * 131) = 786 net new notifications every 6 months.
Or between 1.46 and 4.367 net new notifications per day.
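Those per-day figures check out, assuming the 6-month cycle is counted as 180 days:

```python
# Reproducing the estimate: 131 release-blocker issues per release
# cycle, 2 notifications each in the best case and 6 in the worst,
# spread over a 6-month cycle taken here as 180 days.
issues = 131
best, worst = 2 * issues, 6 * issues
print(best, worst)  # 262 786
print(round(best / 180, 2), round(worst / 180, 3))  # 1.46 4.367
```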

For comparison, on Fri Dec 17, I received 115 email notifications from
the llvm/llvm-project repo.

The pull request emails should be easy for people to filter out of their
inboxes with a rule.  Pull request emails would have llvm/llvm-project in
the To: field and have '(PR #123456)' at the end of the Subject: field
(where 123456 is pull request number).


Actually it isn't enough: there isn't a way to filter on regexes in gmail for 
example. Until GitHub allows the use of some different alias / target / 
cc-email or similar mechanisms, it'll be hard to filter GitHub emails 
accurately / reliably.



Matching on '(PR #' might be enough if people wanted to try it.


There are also the confusing aspects of starting to use pull-requests in the 
monorepo, but only for some branches, which seem undesirable to me.



Yeah, this is one of the downsides of using pull-requests in the 

Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-20 Thread Philip Reames via lldb-dev


On 12/20/21 3:24 PM, Tom Stellard via llvm-dev wrote:

On 12/20/21 09:16, Tom Stellard wrote:

On 12/18/21 15:04, David Blaikie wrote:



On Fri, Dec 17, 2021 at 6:38 PM Tom Stellard wrote:


    On 12/17/21 16:47, David Blaikie wrote:
 > Sounds pretty good to me - wouldn't mind knowing more about/a 
good summary of the effects of this on project/repo/etc 
notifications that Mehdi's mentioning. (be good to have a write up 
of the expected impact/options to then discuss - from the thread so 
far I understand some general/high level concerns, but it's not 
clear to me exactly how it plays out)

 >

    The impact is really going to depend on the person and what 
notification preferences they
    have/want.  If you are already watching the repo with the 
default settings, then you probably
    won't notice much of a difference given the current volume of 
notifications.



I think I'm on the default settings - which does currently mean a 
notification for every issue update, which is a lot. Given that 
llvm-b...@email.llvm.org  has been 
re-enabled, sending mail only on issue creation, I & others might 
opt back in to that behavior by disabling the baseline "notify on 
everything" to "notify only on issues I'm mentioned in".


I guess currently the only email that github is generating is one 
email per issue update. We don't have any pull requests, so there 
aren't any emails for that, yeah?


So this new strategy might add a few more back-and-forth on each 
cherrypick issue (for those using llvm-bugs & disabling general 
issue notifications, this will not be relevant to them - there won't 
be more issues created, just more comments on existing issues). But 
there will be some more emails generated related to the pull 
requests themselves, I guess? So each cherrypick goes from 2 emails 
to llvm-bugs (the issue creation and closure) to, how many? 4 (2 for 
llvm-bugs and I guess at least 2 for the pull request - one to make 
the request and one to close it - maybe a couple more status ones 
along the way?)




I think the number of net new comments on issues will be very minimal 
or none at all.  The
automated comments that are created by this process are replacing 
comments that I'm already making manually.

So 2+ for pull requests is probably a good estimate.  I still need to
figure out how many notifications get generated for Actions with the
default settings.



I did some research on the notifications and here is what I came up with:

From what I can tell, notifications for actions are only sent to the
user that initiated the event that led to the actions, so there would
be no global notifications sent for the actions used by this workflow.

There have been 131 bugs marked as release blockers in the llvm-13 cycle,
this includes the 13.0.0 and 13.0.1 release.  In the best case scenario,
this proposal would generate 2 additional notifications per issue
(1 for creating a pull request and 1 for merging it), and 0 net new
issue comments (the automated comments just replace manual comments).

If you assume that no manual comments would be replaced by the automation,
then in the typical use case there would be a maximum of 4 notifications
generated from issues (/cherry-pick comment, cherry-pick failed comment,
/branch comment, /pull-request comment), in addition to the 2 pull
request notifications.

Based on this, my estimate is that this proposal will produce between
(2 * 131) = 262 and (6 * 131) = 786 net new notifications every 6 months,
or between 1.46 and 4.37 net new notifications per day.
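[Editor's note: the per-day figures above can be checked with a short script. The ~180-day divisor for a 6-month cycle is an assumption; the email does not state which day count it used.]

```python
# Sanity-check of the notification estimates quoted above.
release_blockers = 131             # bugs marked release blockers in llvm-13
best_case = 2 * release_blockers   # 1 PR-created + 1 PR-merged notification
worst_case = 6 * release_blockers  # 4 issue comments + 2 PR notifications
days = 180                         # assumed length of a 6-month cycle

print(best_case, worst_case)  # 262 786
print(round(best_case / days, 2), round(worst_case / days, 2))  # 1.46 4.37
```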

For comparison, on Fri Dec 17, I received 115 email notifications from
the llvm/llvm-project repo.

The pull request emails should be easy for people to filter out of their
inboxes with a rule.  Pull request emails would have llvm/llvm-project in
the To: field and have '(PR #123456)' at the end of the Subject: field
(where 123456 is the pull request number).
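[Editor's note: as a sketch, a client-side filter implementing the rule described above might look like this. The header values are illustrative; the exact addresses GitHub puts in its notification mails are not specified in the thread.]

```python
import re

# Match the "(PR #123456)" suffix that pull-request subjects would carry.
PR_SUFFIX = re.compile(r"\(PR #\d+\)$")

def is_pull_request_mail(headers):
    """Heuristic per the rule above: llvm/llvm-project in To:, PR suffix in Subject:."""
    return ("llvm/llvm-project" in headers.get("To", "")
            and bool(PR_SUFFIX.search(headers.get("Subject", "").rstrip())))

print(is_pull_request_mail({
    "To": "llvm/llvm-project <noreply@github.com>",      # illustrative value
    "Subject": "[llvm-project] Fix crash on ARM (PR #52345)",
}))  # True
```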

For people who filter out the pull request notifications, they would have
between 0 and 2.9 net new notifications per day.


This seems both fairly minimal, and well justified to me.

Philip



- Tom



--Tom

    If people want to give their notification preferences, I can try
    to look at how this change will impact specific configurations.


@Mehdi AMINI  - are there particular 
scenarios you have in mind that'd be good to work through?



    -Tom


 > On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev
 > <llvm-...@lists.llvm.org> wrote:

 >
 >     Hi,
 >
 >     Here is a proposal for a new automated workflow for 
managing parts of the release
 >     process.  I've been experimenting with this over the past 
few releases and
 >     now that we have migrated to GitHub issues, it would be 
possible for us to

 >     implement this in the main 

Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-20 Thread Tom Stellard via lldb-dev

On 12/20/21 09:16, Tom Stellard wrote:

On 12/18/21 15:04, David Blaikie wrote:



On Fri, Dec 17, 2021 at 6:38 PM Tom Stellard <tstel...@redhat.com> wrote:

    On 12/17/21 16:47, David Blaikie wrote:
 > Sounds pretty good to me - wouldn't mind knowing more about/a good 
summary of the effects of this on project/repo/etc notifications that Mehdi's 
mentioning. (be good to have a write up of the expected impact/options to then 
discuss - from the thread so far I understand some general/high level concerns, 
but it's not clear to me exactly how it plays out)
 >

    The impact is really going to depend on the person and what notification 
preferences they
    have/want.  If you are already watching the repo with the default settings, 
then you probably
    won't notice much of a difference given the current volume of notifications.


I think I'm on the default settings - which does currently mean a notification for every issue update, which 
is a lot. Given that llvm-b...@email.llvm.org  has been re-enabled, 
sending mail only on issue creation, I & others might opt back in to that behavior by disabling the 
baseline "notify on everything" to "notify only on issues I'm mentioned in".

I guess currently the only email that github is generating is one email per 
issue update. We don't have any pull requests, so there aren't any emails for 
that, yeah?

So this new strategy might add a few more back-and-forth on each cherrypick issue 
(for those using llvm-bugs & disabling general issue notifications, this will 
not be relevant to them - there won't be more issues created, just more comments on 
existing issues). But there will be some more emails generated related to the pull 
requests themselves, I guess? So each cherrypick goes from 2 emails to llvm-bugs 
(the issue creation and closure) to, how many? 4 (2 for llvm-bugs and I guess at 
least 2 for the pull request - one to make the request and one to close it - maybe 
a couple more status ones along the way?)



I think the number of net new comments on issues will be very minimal or none 
at all.  The
automated comments that are created by this process are replacing comments that 
I'm already making
manually.

So 2+ for pull requests is probably a good estimate.  I still need to figure 
out how many notifications
get generated for Actions with the default settings.



I did some research on the notifications and here is what I came up with:

From what I can tell, notifications for actions are only sent to the
user that initiated the event that led to the actions, so there would
be no global notifications sent for the actions used by this workflow.

There have been 131 bugs marked as release blockers in the llvm-13 cycle,
this includes the 13.0.0 and 13.0.1 release.  In the best case scenario,
this proposal would generate 2 additional notifications per issue
(1 for creating a pull request and 1 for merging it), and 0 net new
issue comments (the automated comments just replace manual comments).

If you assume that no manual comments would be replaced by the automation,
then in the typical use case there would be a maximum of 4 notifications
generated from issues (/cherry-pick comment, cherry-pick failed comment,
/branch comment, /pull-request comment), in addition to the 2 pull
request notifications.

Based on this, my estimate is that this proposal will produce between
(2 * 131) = 262 and (6 * 131) = 786 net new notifications every 6 months,
or between 1.46 and 4.37 net new notifications per day.

For comparison, on Fri Dec 17, I received 115 email notifications from
the llvm/llvm-project repo.

The pull request emails should be easy for people to filter out of their
inboxes with a rule.  Pull request emails would have llvm/llvm-project in
the To: field and have '(PR #123456)' at the end of the Subject: field
(where 123456 is the pull request number).

For people who filter out the pull request notifications, they would have
between 0 and 2.9 net new notifications per day.

- Tom



--Tom


    If people want to give their notification preferences, I can try to look at 
how
    this change will impact specific configurations.


@Mehdi AMINI  - are there particular scenarios you 
have in mind that'd be good to work through?


    -Tom


 > On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev <llvm-...@lists.llvm.org> wrote:
 >
 >     Hi,
 >
 >     Here is a proposal for a new automated workflow for managing parts 
of the release
 >     process.  I've been experimenting with this over the past few 
releases and
 >     now that we have migrated to GitHub issues, it would be possible for 
us to
 >     implement this in the main repo.
 >
 >     The workflow is pretty straightforward, but it does use pull 
requests.  My
 >     idea is to enable pull requests for 

Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-20 Thread Tom Stellard via lldb-dev

On 12/18/21 15:04, David Blaikie wrote:



On Fri, Dec 17, 2021 at 6:38 PM Tom Stellard <tstel...@redhat.com> wrote:

On 12/17/21 16:47, David Blaikie wrote:
 > Sounds pretty good to me - wouldn't mind knowing more about/a good 
summary of the effects of this on project/repo/etc notifications that Mehdi's 
mentioning. (be good to have a write up of the expected impact/options to then 
discuss - from the thread so far I understand some general/high level concerns, 
but it's not clear to me exactly how it plays out)
 >

The impact is really going to depend on the person and what notification 
preferences they
have/want.  If you are already watching the repo with the default settings, 
then you probably
won't notice much of a difference given the current volume of notifications.


I think I'm on the default settings - which does currently mean a notification for every issue update, which 
is a lot. Given that llvm-b...@email.llvm.org  has been re-enabled, 
sending mail only on issue creation, I & others might opt back in to that behavior by disabling the 
baseline "notify on everything" to "notify only on issues I'm mentioned in".

I guess currently the only email that github is generating is one email per 
issue update. We don't have any pull requests, so there aren't any emails for 
that, yeah?

So this new strategy might add a few more back-and-forth on each cherrypick issue 
(for those using llvm-bugs & disabling general issue notifications, this will 
not be relevant to them - there won't be more issues created, just more comments on 
existing issues). But there will be some more emails generated related to the pull 
requests themselves, I guess? So each cherrypick goes from 2 emails to llvm-bugs 
(the issue creation and closure) to, how many? 4 (2 for llvm-bugs and I guess at 
least 2 for the pull request - one to make the request and one to close it - maybe 
a couple more status ones along the way?)



I think the number of net new comments on issues will be very minimal or none 
at all.  The
automated comments that are created by this process are replacing comments that 
I'm already making
manually.

So 2+ for pull requests is probably a good estimate.  I still need to figure 
out how many notifications
get generated for Actions with the default settings.

--Tom
  


If people want to give their notification preferences, I can try to look at 
how
this change will impact specific configurations.


@Mehdi AMINI  - are there particular scenarios you 
have in mind that'd be good to work through?


-Tom


 > On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev <llvm-...@lists.llvm.org> wrote:
 >
 >     Hi,
 >
 >     Here is a proposal for a new automated workflow for managing parts 
of the release
 >     process.  I've been experimenting with this over the past few 
releases and
 >     now that we have migrated to GitHub issues, it would be possible for 
us to
 >     implement this in the main repo.
 >
 >     The workflow is pretty straight forward, but it does use pull 
requests.  My
 >     idea is to enable pull requests for only this automated workflow and 
not
 >     for general development (i.e. We would still use Phabricator for 
code review).
 >     Let me know what you think about this:
 >
 >
 >     # Workflow
 >
 >     * On an existing issue or a newly created issue, a user who wants to
 >     backport one or more commits to the release branch adds a comment:
 >
 >     /cherry-pick <commit> <...>
 >
 >     * This starts a GitHub Action job that attempts to cherry-pick the
 >     commit(s) to the current release branch.
 >
 >     * If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
 >           * Pushes the result of the cherry-pick to a branch in the
 >             llvmbot/llvm-project repo called issue<n>, where <n> is the
 >             number of the GitHub Issue that launched the Action.
 >
 >           * Adds this comment on the issue: /branch
 >             llvmbot/llvm-project/issue<n>
 >
 >           * Creates a pull request from llvmbot/llvm-project/issue<n> to
 >             llvm/llvm-project/release/XX.x
 >
 >           * Adds a comment on the issue: /pull-request #<n>
 >             where <n> is the number of the pull request.
 >
 >     * If the commit(s) can't be cherry-picked cleanly, then the GitHub
 >     Action job adds the release:cherry-pick-failed label to the issue
 >     and adds a comment: "Failed to cherry-pick <commit> <...>" along
 >     with a link to the failing Action.
 >
 >     * If a user has manually cherry-picked the fixes, resolved the
 >     conflicts, and pushed the result to a branch on github, they can
 >     automatically create 

Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-18 Thread David Blaikie via lldb-dev
On Fri, Dec 17, 2021 at 6:38 PM Tom Stellard  wrote:

> On 12/17/21 16:47, David Blaikie wrote:
> > Sounds pretty good to me - wouldn't mind knowing more about/a good
> summary of the effects of this on project/repo/etc notifications that
> Mehdi's mentioning. (be good to have a write up of the expected
> impact/options to then discuss - from the thread so far I understand some
> general/high level concerns, but it's not clear to me exactly how it plays
> out)
> >
>
> The impact is really going to depend on the person and what notification
> preferences they
> have/want.  If you are already watching the repo with the default
> settings, then you probably
> won't notice much of a difference given the current volume of
> notifications.
>

I think I'm on the default settings - which does currently mean a
notification for every issue update, which is a lot. Given that
llvm-b...@email.llvm.org has been re-enabled, sending mail only on issue
creation, I & others might opt back in to that behavior by disabling the
baseline "notify on everything" to "notify only on issues I'm mentioned in".

I guess currently the only email that github is generating is one email per
issue update. We don't have any pull requests, so there aren't any emails
for that, yeah?

So this new strategy might add a few more back-and-forth on each
cherrypick issue (for those using llvm-bugs & disabling general issue
notifications, this will not be relevant to them - there won't be more
issues created, just more comments on existing issues). But there will be
some more emails generated related to the pull requests themselves, I
guess? So each cherrypick goes from 2 emails to llvm-bugs (the issue
creation and closure) to, how many? 4 (2 for llvm-bugs and I guess at least
2 for the pull request - one to make the request and one to close it -
maybe a couple more status ones along the way?)


> If people want to give their notification preferences, I can try to look
> at how
> this change will impact specific configurations.
>

@Mehdi AMINI  - are there particular scenarios you
have in mind that'd be good to work through?


>
> -Tom
>
>
> > On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
> >
> > Hi,
> >
> > Here is a proposal for a new automated workflow for managing parts
> of the release
> > process.  I've been experimenting with this over the past few
> releases and
> > now that we have migrated to GitHub issues, it would be possible for
> us to
> > implement this in the main repo.
> >
> > The workflow is pretty straightforward, but it does use pull
> requests.  My
> > idea is to enable pull requests for only this automated workflow and
> not
> > for general development (i.e. We would still use Phabricator for
> code review).
> > Let me know what you think about this:
> >
> >
> > # Workflow
> >
> > * On an existing issue or a newly created issue, a user who wants to
> > backport one or more commits to the release branch adds a comment:
> >
> > /cherry-pick <commit> <...>
> >
> > * This starts a GitHub Action job that attempts to cherry-pick the
> > commit(s) to the current release branch.
> >
> > * If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
> >   * Pushes the result of the cherry-pick to a branch in the
> > llvmbot/llvm-project repo called issue<n>, where <n> is the number of the
> > GitHub Issue that launched the Action.
> >
> >   * Adds this comment on the issue: /branch llvmbot/llvm-project/issue<n>
> >
> >   * Creates a pull request from llvmbot/llvm-project/issue<n> to
> > llvm/llvm-project/release/XX.x
> >
> >   * Adds a comment on the issue: /pull-request #<n>
> > where <n> is the number of the pull request.
> >
> > * If the commit(s) can't be cherry-picked cleanly, then the GitHub Action
> > job adds the release:cherry-pick-failed label to the issue and adds a
> > comment: "Failed to cherry-pick <commit> <...>" along with a link to the
> > failing Action.
> >
> > * If a user has manually cherry-picked the fixes, resolved the conflicts,
> > and pushed the result to a branch on github, they can automatically
> > create a pull request by adding this comment to an issue:
> > /branch <user>/<repo>/<branch>
> >
> > * Once a pull request has been created, this launches more GitHub
> Actions
> > to run pre-commit tests.
> >
> > * Once the tests complete successfully and the changes have been
> approved
> > by the release manager, the pull request can be merged into the
> release branch.
> >
> > * After the pull request is merged, a GitHub Action automatically
> closes the
> > associated issue.
> >
> > Some Examples:
> >
> > Cherry-pick success:
> https://github.com/tstellar/llvm-project/issues/729
> > Cherry-pick <
> 

Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-17 Thread Tom Stellard via lldb-dev

On 12/17/21 16:47, David Blaikie wrote:

Sounds pretty good to me - wouldn't mind knowing more about/a good summary of 
the effects of this on project/repo/etc notifications that Mehdi's mentioning. 
(be good to have a write up of the expected impact/options to then discuss - 
from the thread so far I understand some general/high level concerns, but it's 
not clear to me exactly how it plays out)



The impact is really going to depend on the person and what notification 
preferences they
have/want.  If you are already watching the repo with the default settings, 
then you probably
won't notice much of a difference given the current volume of notifications.

If people want to give their notification preferences, I can try to look at how
this change will impact specific configurations.

-Tom



On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev <llvm-...@lists.llvm.org> wrote:

Hi,

Here is a proposal for a new automated workflow for managing parts of the 
release
process.  I've been experimenting with this over the past few releases and
now that we have migrated to GitHub issues, it would be possible for us to
implement this in the main repo.

The workflow is pretty straightforward, but it does use pull requests.  My
idea is to enable pull requests for only this automated workflow and not
for general development (i.e. We would still use Phabricator for code 
review).
Let me know what you think about this:


# Workflow

* On an existing issue or a newly created issue, a user who wants to backport
one or more commits to the release branch adds a comment:

/cherry-pick <commit> <...>

* This starts a GitHub Action job that attempts to cherry-pick the commit(s)
to the current release branch.

* If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
      * Pushes the result of the cherry-pick to a branch in the
        llvmbot/llvm-project repo called issue<n>, where <n> is the number of
        the GitHub Issue that launched the Action.

      * Adds this comment on the issue: /branch llvmbot/llvm-project/issue<n>

      * Creates a pull request from llvmbot/llvm-project/issue<n> to
        llvm/llvm-project/release/XX.x

      * Adds a comment on the issue: /pull-request #<n>
        where <n> is the number of the pull request.

* If the commit(s) can't be cherry-picked cleanly, then the GitHub Action
job adds the release:cherry-pick-failed label to the issue and adds a comment:
"Failed to cherry-pick <commit> <...>" along with a link to the failing
Action.

* If a user has manually cherry-picked the fixes, resolved the conflicts, and
pushed the result to a branch on github, they can automatically create a pull
request by adding this comment to an issue: /branch <user>/<repo>/<branch>

* Once a pull request has been created, this launches more GitHub Actions
to run pre-commit tests.

* Once the tests complete successfully and the changes have been approved
by the release manager, the pull request can be merged into the release 
branch.

* After the pull request is merged, a GitHub Action automatically closes the
associated issue.
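[Editor's note: the comment commands in the workflow above lend themselves to simple parsing. The sketch below is a hypothetical helper, not the actual Action code; all names and patterns are illustrative.]

```python
import re

# Hypothetical parser for the issue-comment commands described above:
# "/cherry-pick <commit> <...>" and "/branch <owner>/<repo>/<branch>".
CHERRY_PICK = re.compile(
    r"^/cherry-pick\s+(?P<commits>[0-9a-fA-F]{7,40}(?:\s+[0-9a-fA-F]{7,40})*)$")
BRANCH = re.compile(
    r"^/branch\s+(?P<owner>[\w.-]+)/(?P<repo>[\w.-]+)/(?P<branch>\S+)$")

def parse_command(comment):
    """Return (command, payload) for a recognized comment line, else None."""
    line = comment.strip()
    m = CHERRY_PICK.match(line)
    if m:
        return ("cherry-pick", m.group("commits").split())
    m = BRANCH.match(line)
    if m:
        return ("branch", (m.group("owner"), m.group("repo"), m.group("branch")))
    return None

print(parse_command("/cherry-pick deadbeef0123 cafebabe4567"))
# ('cherry-pick', ['deadbeef0123', 'cafebabe4567'])
print(parse_command("/branch llvmbot/llvm-project/issue729"))
# ('branch', ('llvmbot', 'llvm-project', 'issue729'))
```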

Some Examples:

Cherry-pick success: https://github.com/tstellar/llvm-project/issues/729
Cherry-pick failure: https://github.com/tstellar/llvm-project/issues/730
Manual Branch comment: https://github.com/tstellar/llvm-project/issues/710


# Motivation

Why do this?  The goal is to make the release process more efficient and 
transparent.
With this new workflow, users can get automatic and immediate feedback when 
a commit
they want backported doesn't apply cleanly or introduces some test 
failures.  With
the current process, these kinds of issues are communicated by the release 
manager,
and it can be days or even weeks before a problem is discovered and 
communicated back
to the users.

Another advantage of this workflow is it introduces pre-commit CI to the 
release branch,
which is important for the stability of the branch and the releases, but 
also gives
the project an opportunity to experiment with new CI workflows in a way that
does not disrupt development on the main branch.

# Implementation

If this proposal is accepted, I would plan to implement this for the LLVM 
14 release cycle based
on the following proof of concept that I have been testing for the last few 
releases:


https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow.yml
 



Re: [lldb-dev] [llvm-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-17 Thread David Blaikie via lldb-dev
Sounds pretty good to me - wouldn't mind knowing more about/a good summary
of the effects of this on project/repo/etc notifications that Mehdi's
mentioning. (be good to have a write up of the expected impact/options to
then discuss - from the thread so far I understand some general/high level
concerns, but it's not clear to me exactly how it plays out)

On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> Hi,
>
> Here is a proposal for a new automated workflow for managing parts of the
> release
> process.  I've been experimenting with this over the past few releases and
> now that we have migrated to GitHub issues, it would be possible for us to
> implement this in the main repo.
>
> The workflow is pretty straightforward, but it does use pull requests.  My
> idea is to enable pull requests for only this automated workflow and not
> for general development (i.e. We would still use Phabricator for code
> review).
> Let me know what you think about this:
>
>
> # Workflow
>
> * On an existing issue or a newly created issue, a user who wants to
> backport one or more commits to the release branch adds a comment:
>
> /cherry-pick <commit> <...>
>
> * This starts a GitHub Action job that attempts to cherry-pick the
> commit(s) to the current release branch.
>
> * If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
>  * Pushes the result of the cherry-pick to a branch in the
>    llvmbot/llvm-project repo called issue<n>, where <n> is the number of
>    the GitHub Issue that launched the Action.
>
>  * Adds this comment on the issue: /branch llvmbot/llvm-project/issue<n>
>
>  * Creates a pull request from llvmbot/llvm-project/issue<n> to
>    llvm/llvm-project/release/XX.x
>
>  * Adds a comment on the issue: /pull-request #<n>
>    where <n> is the number of the pull request.
>
> * If the commit(s) can't be cherry-picked cleanly, then the GitHub Action
> job adds the release:cherry-pick-failed label to the issue and adds a
> comment: "Failed to cherry-pick <commit> <...>" along with a link to the
> failing Action.
>
> * If a user has manually cherry-picked the fixes, resolved the conflicts,
> and pushed the result to a branch on github, they can automatically create
> a pull request by adding this comment to an issue: /branch <user>/<repo>/<branch>
>
> * Once a pull request has been created, this launches more GitHub Actions
> to run pre-commit tests.
>
> * Once the tests complete successfully and the changes have been approved
> by the release manager, the pull request can be merged into the release
> branch.
>
> * After the pull request is merged, a GitHub Action automatically closes
> the
> associated issue.
>
> Some Examples:
>
> Cherry-pick success: https://github.com/tstellar/llvm-project/issues/729
> Cherry-pick failure: https://github.com/tstellar/llvm-project/issues/730
> Manual Branch comment: https://github.com/tstellar/llvm-project/issues/710
>
>
> # Motivation
>
> Why do this?  The goal is to make the release process more efficient and
> transparent.
> With this new workflow, users can get automatic and immediate feedback
> when a commit
> they want backported doesn't apply cleanly or introduces some test
> failures.  With
> the current process, these kinds of issues are communicated by the release
> manager,
> and it can be days or even weeks before a problem is discovered and
> communicated back
> to the users.
>
> Another advantage of this workflow is it introduces pre-commit CI to the
> release branch,
> which is important for the stability of the branch and the releases, but
> also gives
> the project an opportunity to experiment with new CI workflows in a way
> that
> does not disrupt development on the main branch.
>
> # Implementation
>
> If this proposal is accepted, I would plan to implement this for the LLVM
> 14 release cycle based
> on the following proof of concept that I have been testing for the last
> few releases:
>
>
> https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow.yml
>
> https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow-create-pr.yml
>
> https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-merge-pr.yml
>
> Thanks,
> Tom
>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Release-testers] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-17 Thread Tom Stellard via lldb-dev

On 12/17/21 13:25, Mehdi AMINI wrote:

Hi,




On Fri, Dec 17, 2021 at 1:15 PM Tom Stellard via Release-testers
<release-test...@lists.llvm.org> wrote:

Hi,

Here is a proposal for a new automated workflow for managing parts of the 
release
process.  I've been experimenting with this over the past few releases and
now that we have migrated to GitHub issues, it would be possible for us to
implement this in the main repo.

The workflow is pretty straightforward, but it does use pull requests.  My
idea is to enable pull requests for only this automated workflow and not
for general development (i.e. We would still use Phabricator for code 
review).
Let me know what you think about this:


# Workflow

* On an existing issue or a newly created issue, a user who wants to backport
one or more commits to the release branch adds a comment:

/cherry-pick <commit> <...>

* This starts a GitHub Action job that attempts to cherry-pick the commit(s)
to the current release branch.

* If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
      * Pushes the result of the cherry-pick to a branch in the
        llvmbot/llvm-project repo called issue<n>, where <n> is the number of
        the GitHub Issue that launched the Action.

      * Adds this comment on the issue: /branch llvmbot/llvm-project/issue<n>

      * Creates a pull request from llvmbot/llvm-project/issue<n> to
        llvm/llvm-project/release/XX.x

      * Adds a comment on the issue: /pull-request #<n>
        where <n> is the number of the pull request.

* If the commit(s) can't be cherry-picked cleanly, then the GitHub Action
job adds the release:cherry-pick-failed label to the issue and adds a comment:
"Failed to cherry-pick <commit> <...>" along with a link to the failing
Action.

* If a user has manually cherry-picked the fixes, resolved the conflicts, and
pushed the result to a branch on github, they can automatically create a pull
request by adding this comment to an issue: /branch <user>/<repo>/<branch>

* Once a pull request has been created, this launches more GitHub Actions
to run pre-commit tests.

* Once the tests complete successfully and the changes have been approved
by the release manager, the pull request can be merged into the release 
branch.

* After the pull request is merged, a GitHub Action automatically closes the
associated issue.

Some Examples:

Cherry-pick success: https://github.com/tstellar/llvm-project/issues/729
Cherry-pick failure: https://github.com/tstellar/llvm-project/issues/730
Manual Branch comment: https://github.com/tstellar/llvm-project/issues/710




Since your workflow can trigger actions from comments in the issues, why do
you need pull-requests at all? Can't you trigger the pre-merge testing action
on the branch from the issue? Then you can "approve" it with a "/merge LGTM"
comment in the issue directly and let the action merge it, for example.



Yes, it would be possible to emulate pull request features with GitHub Actions.
You would also have to implement some kind of reporting mechanism to report results
back to the issue.  I personally don't think it would be worth the effort to
do a lot of extra work to get what pull requests give us for free (someone else
would be welcome to implement this if they wanted).

If we did decide we don't want to use Pull Requests in the main repo for this,
I think the alternatives would be to use Pull Requests in the llvmbot
repo, or just drop this part of the proposal (in which case I would go back
to using Pull Request in my personal account for testing).

-Tom





# Motivation

Why do this?  The goal is to make the release process more efficient and 
transparent.
With this new workflow, users can get automatic and immediate feedback when 
a commit
they want backported doesn't apply cleanly or introduces some test 
failures.  With
the current process, these kinds of issues are communicated by the release 
manager,
and it can be days or even weeks before a problem is discovered and 
communicated back
to the users.

Another advantage of this workflow is it introduces pre-commit CI to the 
release branch,
which is important for the stability of the branch and the releases, but 
also gives
the project an opportunity to experiment with new CI workflows in a way that
does not disrupt development on the main branch.

# Implementation

If this proposal is accepted, I would plan to implement this for the LLVM 
14 release cycle based
on the following proof of concept that I have been testing for the last few 
releases:



[lldb-dev] RFC: New Automated Release Workflow (using Issues and Pull Requests)

2021-12-17 Thread Tom Stellard via lldb-dev

Hi,

Here is a proposal for a new automated workflow for managing parts of the 
release
process.  I've been experimenting with this over the past few releases and
now that we have migrated to GitHub issues, it would be possible for us to
implement this in the main repo.

The workflow is pretty straight forward, but it does use pull requests.  My
idea is to enable pull requests for only this automated workflow and not
for general development (i.e. We would still use Phabricator for code review).
Let me know what you think about this:


# Workflow

* On an existing issue or a newly created issue, a user who wants to backport
one or more commits to the release branch adds a comment:

/cherry-pick <sha> <..>

* This starts a GitHub Action job that attempts to cherry-pick the commit(s)
to the current release branch.

* If the commit(s) can be cherry-picked cleanly, then the GitHub Action:
* Pushes the result of the cherry-pick to a branch in the
  llvmbot/llvm-project repo called issue<n>, where <n> is the number of the
  GitHub Issue that launched the Action.

* Adds this comment on the issue: /branch llvmbot/llvm-project/issue<n>

* Creates a pull request from llvmbot/llvm-project/issue<n> to
  llvm/llvm-project/release/XX.x

* Adds a comment on the issue: /pull-request #<n>
  where <n> is the number of the pull request.

* If the commit(s) can't be cherry-picked cleanly, then the GitHub Action job adds
the release:cherry-pick-failed label to the issue and adds a comment:
"Failed to cherry-pick <sha> <..>" along with a link to the failing
Action.

* If a user has manually cherry-picked the fixes, resolved the conflicts, and
pushed the result to a branch on github, they can automatically create a pull
request by adding this comment to an issue: /branch <user>/<repo>/<branch>

* Once a pull request has been created, this launches more GitHub Actions
to run pre-commit tests.

* Once the tests complete successfully and the changes have been approved
by the release manager, the pull request can be merged into the release branch.

* After the pull request is merged, a GitHub Action automatically closes the
associated issue.

Some Examples:

Cherry-pick success: https://github.com/tstellar/llvm-project/issues/729
Cherry-pick failure: https://github.com/tstellar/llvm-project/issues/730
Manual Branch comment: https://github.com/tstellar/llvm-project/issues/710
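The comment parsing and cherry-pick attempt at the heart of this workflow can be sketched in a few lines of Python. This is an illustrative sketch only, not the actual GitHub Action: `parse_command` and `try_cherry_pick` are invented names, and the real implementation is the workflow YAML linked under Implementation.

```python
import re
import subprocess

# Matches comments of the form "/cherry-pick <sha> <sha> ..."
CHERRY_PICK_RE = re.compile(
    r"^/cherry-pick\s+(?P<shas>[0-9a-f]{7,40}(?:\s+[0-9a-f]{7,40})*)\s*$"
)

def parse_command(comment: str):
    """Return the list of commit SHAs from a /cherry-pick comment, or None."""
    m = CHERRY_PICK_RE.match(comment.strip())
    return m.group("shas").split() if m else None

def try_cherry_pick(shas, branch="release/14.x"):
    """Attempt to cherry-pick the commits onto the release branch.

    Returns True on success; on conflict, aborts the cherry-pick and returns
    False so the Action could label the issue release:cherry-pick-failed.
    """
    subprocess.run(["git", "checkout", branch], check=True)
    result = subprocess.run(["git", "cherry-pick", "-x", *shas])
    if result.returncode != 0:
        subprocess.run(["git", "cherry-pick", "--abort"])
        return False
    return True
```

On success the Action would push the branch to llvmbot/llvm-project and open the pull request; on failure it would comment on the issue with a link to the failing run.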


# Motivation

Why do this?  The goal is to make the release process more efficient and transparent.
With this new workflow, users can get automatic and immediate feedback when a commit
they want backported doesn't apply cleanly or introduces some test failures.  With
the current process, these kinds of issues are communicated by the release manager,
and it can be days or even weeks before a problem is discovered and communicated back
to the users.

Another advantage of this workflow is that it introduces pre-commit CI to the release
branch, which is important for the stability of the branch and the releases, but also
gives the project an opportunity to experiment with new CI workflows in a way that
does not disrupt development on the main branch.

# Implementation

If this proposal is accepted, I would plan to implement this for the LLVM 14 release
cycle based on the following proof of concept that I have been testing for the last
few releases:

https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow.yml
https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-workflow-create-pr.yml
https://github.com/tstellar/llvm-project/blob/release-automation/.github/workflows/release-merge-pr.yml

Thanks,
Tom

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] 13.0.1 Release Update

2021-12-16 Thread Anton Korobeynikov via lldb-dev
And for the record, here is the milestone:
https://github.com/llvm/llvm-project/milestone/2

On Fri, Dec 17, 2021 at 5:58 AM Tom Stellard via cfe-dev
 wrote:
>
> Hi,
>
> I'm back on track with the release after the bugzilla migration, so I'm
> going to accept backport requests until Monday, Dec 20, and then
> I'll try to tag -rc2 on Wed Dec 21.
>
> If you have patches you want backported to the release/13.x branch, please
> file an issue and add it to the "LLVM 13.0.1 release" milestone.
>
> Also, if you emailed me a backport request and haven't seen an issue created for
> it yet, please ping me on the email thread, so I don't forget about it.
>
> -Tom
>
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev



-- 
With best regards, Anton Korobeynikov
Department of Statistical Modelling, Saint Petersburg State University
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] 13.0.1 Release Update

2021-12-16 Thread Tom Stellard via lldb-dev

Hi,

I'm back on track with the release after the bugzilla migration, so I'm
going to accept backport requests until Monday, Dec 20, and then
I'll try to tag -rc2 on Wed Dec 21.

If you have patches you want backported to the release/13.x branch, please
file an issue and add it to the "LLVM 13.0.1 release" milestone.

Also, if you emailed me a backport request and haven't seen an issue created for
it yet, please ping me on the email thread, so I don't forget about it.

-Tom

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-12-14 Thread Ed Maste via lldb-dev
On Tue, 14 Dec 2021 at 10:58, Pavel Labath via lldb-dev
 wrote:
>
> So how would this be represented in lldb? Would there be any threads,
> registers? Just a process with a bunch of modules ?

Using GDB (kgdb) as an example - it lists a thread for every
kernel/userspace thread. For example,
...
  593  Thread 100691 (PID=20798: sleep)
sched_switch (td=0xfe0118579100, flags=)
at /usr/home/emaste/src/freebsd-git/laptop/sys/kern/sched_ule.c:2147
...

and it can fetch per-thread register state:

(kgdb) thread 593
[Switching to thread 593 (Thread 100691)]
#0  sched_switch (td=0xfe0118579100, flags=) at
/usr/home/emaste/src/freebsd-git/laptop/sys/kern/sched_ule.c:2147
2147cpuid = td->td_oncpu = PCPU_GET(cpuid);
(kgdb) info reg
rax
rbx0x882c545e  2284606558
rcx
rdx
rsi
rdi
rbp0xfe01172617d0  0xfe01172617d0
rsp0xfe0117261708  0xfe0117261708


(kgdb) bt
#0  sched_switch (td=0xfe0118579100, flags=) at
/usr/home/emaste/src/freebsd-git/laptop/sys/kern/sched_ule.c:2147
#1  0x80ba4261 in mi_switch (flags=flags@entry=260) at
/usr/home/emaste/src/freebsd-git/laptop/sys/kern/kern_synch.c:542
#2  0x80bf428e in sleepq_switch
(wchan=wchan@entry=0x81c8db21 , pri=pri@entry=108)
at /usr/home/emaste/src/freebsd-git/laptop/sys/kern/subr_sleepqueue.c:608
...
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-12-14 Thread Pavel Labath via lldb-dev

On 10/12/2021 11:12, Michał Górny wrote:

On Mon, 2021-12-06 at 14:28 +0100, Pavel Labath wrote:

The live kernel debugging sounds... scary. Can you explain how would
this actually work? Like, what would be the supported operations? I
presume you won't be able to actually "stop" the kernel, but what will
you actually be able to do?



Yes, it is scary.  No, the system doesn't stop -- it's just a racy way
to read and write kernel memory.  I don't think it's used often but I've
been told that sometimes it can be very helpful in debugging annoying
non-crash bugs, especially if they're hard to reproduce.



Interesting.

So how would this be represented in lldb? Would there be any threads, 
registers? Just a process with a bunch of modules ?


pl
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-12-10 Thread Michał Górny via lldb-dev
On Mon, 2021-12-06 at 14:28 +0100, Pavel Labath wrote:
> The live kernel debugging sounds... scary. Can you explain how would 
> this actually work? Like, what would be the supported operations? I 
> presume you won't be able to actually "stop" the kernel, but what will 
> you actually be able to do?
> 

Yes, it is scary.  No, the system doesn't stop -- it's just a racy way
to read and write kernel memory.  I don't think it's used often but I've
been told that sometimes it can be very helpful in debugging annoying
non-crash bugs, especially if they're hard to reproduce.

-- 
Best regards,
Michał Górny

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] RFC: How to handle non-address bits in the output of "memory read"

2021-12-10 Thread David Spickett via lldb-dev
(Peter and Stephen on CC since you've previously asked about this sort of thing)

This relates to https://reviews.llvm.org/D103626 and other recent
patches about non-address bits.

On AArch64 we've got a few extensions that use "non address bits".
These are bits beyond the (in most cases) 48 bit virtual address size.
Currently we have pointer authentication (armv8.3), memory tagging
(armv8.5) and top byte ignore (a feature of armv8.0-a).

This means we need to know about these bits when doing some
operations. One such time is when passing addresses to memory read.
Consider two pointers to the same location where the first one has a
greater memory tag (bits 56-60) than the second. This is what happens
if we don't remove the non-address bits:
(lldb) memory read mte_buf_alt_tag mte_buf+16
error: end address (0x900f7ff8010) must be greater than the start
address (0xa00f7ff8000).

A pure number comparison is going to think that end < begin address.
If we use the ABI plugin's FixDataAddress we can remove those bits and
read normally.
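That masking can be sketched as follows (a minimal sketch, assuming a 48-bit virtual address space; the real FixDataAddress lives in the AArch64 ABI plugin and its mask is ABI-dependent):

```python
VA_BITS = 48  # assumed virtual address size; the real value is ABI-dependent

def fix_data_address(addr: int, va_bits: int = VA_BITS) -> int:
    """Drop non-address bits (MTE tag, PAC bits, top byte) from a pointer."""
    return addr & ((1 << va_bits) - 1)

# Two pointers into the same buffer, carrying different MTE tags (bits 56-59).
base = 0xFFFF_F7FF_8000
ptr_tag_a = base | (0xA << 56)           # start of the read, tag 0xA
ptr_tag_9 = (base + 0x10) | (0x9 << 56)  # end of the read, 16 bytes later, tag 0x9

# A raw comparison is wrong: the tag bits make the end look smaller than the start.
raw_end_smaller = ptr_tag_9 < ptr_tag_a
```

After masking, the comparison comes out right: the fixed end address is 16 bytes above the fixed start address, which is exactly the failure mode the `memory read` error above demonstrates.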

With one caveat. The output will not include those non address bits
unless we make special effort to do so, here's an example:
(lldb) p ptr1
(char *) $4 = 0x3400f140 "\x80\xf1\xff\xff\xff\xff"
(lldb) p ptr2
(char *) $5 = 0x5600f140 "\x80\xf1\xff\xff\xff\xff"
(lldb) memory read ptr1 ptr2+16
0xf140: 80 f1 ff ff ff ff 00 00 38 70 bc f7 ff ff 00 00  8p..

My current opinion is that in this case the output should not include
the non address bits:
* The actual memory being read is not at the virtual address the raw
pointer value gives.
* Many, if not all, non address bits cannot be incremented as the
memory address we're showing is incremented. (not in a way that makes
sense if you think about how the core interprets them)

For example once you get into the next memory granule, the memory tag
attached to it in hardware may be different. (and FWIW I have a series
to show the actual memory tags https://reviews.llvm.org/D107140)
You could perhaps argue that if the program itself used that pointer,
it would use those non address bits as well so show the user *how* it
would access the memory. However I don't think that justifies
complicating the implementation and output.

So what do people think of that direction? I've thought about this for
too long before asking for feedback, so I'm definitely missing some of
the wood for the trees.

Input/bug reports/complaints from anyone who (unlike me) has debugged
a large program that uses these non-address features is most welcome!

Thanks,
David Spickett.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-12-06 Thread Pavel Labath via lldb-dev

On 30/11/2021 14:49, Michał Górny via lldb-dev wrote:

Hi,

I'm working on a FreeBSD-sponsored project aiming at improving LLDB's
support for debugging FreeBSD kernel to achieve feature parity with
KGDB.  As a part of that, I'd like to improve LLDB's ability of working
with kernel coredumps ("vmcores"), plus add the ability to read kernel
memory via special character device /dev/mem.


The FreeBSD kernel supports two coredump formats that are of interest to
us:

1. The (older) "full memory" coredumps that use an ELF container.

2. The (newer) minidumps that dump only the active memory and use
a custom format.

At this point, LLDB recognizes the ELF files but doesn't handle them
correctly, and outright rejects the FreeBSD minidump format.  In both
cases some additional logic is required.  This is because kernel
coredumps contain physical contents of memory, and for user convenience
the debugger needs to be able to read memory maps from the physical
memory and use them to translate virtual addresses to physical
addresses.
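That translation step amounts to a range lookup over map entries recovered from the dump. A schematic sketch (the field and function names here are invented for illustration, not libfbsdvmcore's actual API):

```python
from dataclasses import dataclass

@dataclass
class MapEntry:
    vaddr: int  # virtual start address of the run
    paddr: int  # corresponding physical start address
    size: int   # length of the run in bytes

def virt_to_phys(maps, vaddr):
    """Translate a kernel virtual address to a physical address, or return
    None if the address is not covered by any run present in the dump."""
    for m in maps:
        if m.vaddr <= vaddr < m.vaddr + m.size:
            return m.paddr + (vaddr - m.vaddr)
    return None

# One illustrative run: 1 GiB of kernel VA mapped starting at physical 0.
maps = [MapEntry(vaddr=0xFFFF_0000_0000_0000, paddr=0x0, size=0x4000_0000)]
```

A memory-read request against the core file would first go through such a lookup and then seek to the resulting physical offset in the dump.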

Unless I'm mistaken, the rationale for using this format is that
coredumps are -- after all -- usually created when something goes wrong
with the kernel.  In that case, we want the process for dumping core to
be as simple as possible, and coredumps need to be small enough to fit
in swap space (that's where they're being usually written).
The complexity of memory translation should then naturally fall into
userspace processes used to debug them.

FreeBSD (following Solaris and other BSDs) provides a helper libkvm
library that can be used by userspace programs to access both coredumps
and running kernel memory.  Additionally, we have split the routines
related to coredumps and made them portable to other operating systems
via libfbsdvmcore [1].  We have also included a program that can convert
minidump into a debugger-compatible ELF core file.


We'd like to discuss the possible approaches to integrating this
additional functionality to LLDB.  At this point, our goal is to make it
possible for LLDB to correctly read memory from coredumps and live
system.


Plan A: new FreeBSDKernel plugin

I think the preferable approach is to write a new plugin that would
enable out-of-the-box support for the new functions in LLDB.  The plugin
would be based on using both libraries.  When available, libfbsdvmcore
will be used as the primary provider for vmcore support on all operating
systems.  Additionally, libkvm will be usable on FreeBSD as a fallback
provider for coredump support, and as the provider of live memory
support.


The two main challenges with this approach are:

1) "Full memory" vmcores are currently recognized by LLDB's elf-core
plugin.  I haven't investigated LLDB's plugin architecture in detail yet
but I think the cleanest solution here would be to teach elf-core to
distinguish and reject FreeBSD vmcores, in order to have the new plugin
handle them.

2) How to integrate "live kernel" support into the current user
interface?  I don't think we should make major UI modifications to
support this specific case but I'd also like to avoid gross hacks.
My initial thought is to allow specifying "/dev/mem" as core path, that
would match how libkvm handles it.

Nevertheless, I think this is the cleanest approach and I think we
should go with it if possible.


Plan B: GDB Remote Protocol-based wrapper
=
If we cannot integrate FreeBSD vmcore support into LLDB directly,
I think the next best approach is to create a minimal GDB Remote
Protocol server for it.  The rough idea is that the server implements
the minimal subset of the protocol necessary for LLDB to connect,
and implements memory read operations via the aforementioned libraries.

The advantage of this solution is that it is still relatively clean
and can be implemented outside LLDB.  It still provides quite good
performance but probably requires more work than the alternatives
and does not provide out-of-box support in LLDB.
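To make the scope concrete: the "minimal subset" is essentially packet framing plus the memory-read (`m`) packet of the GDB Remote Serial Protocol. A sketch of that core (illustrative only; `read_mem` stands in for the libfbsdvmcore/libkvm read path, and a real server also needs transport, acks, and a few query packets):

```python
def checksum(payload: str) -> str:
    """RSP checksum: sum of payload bytes, modulo 256, as two hex digits."""
    return format(sum(payload.encode()) % 256, "02x")

def frame(payload: str) -> str:
    """Wrap a payload in RSP framing: $<payload>#<checksum>."""
    return f"${payload}#{checksum(payload)}"

def handle_packet(payload: str, read_mem) -> str:
    """Serve 'm<addr>,<length>' memory reads; answer everything else with
    the empty reply, which means 'packet not supported' in RSP."""
    if payload.startswith("m"):
        addr_s, len_s = payload[1:].split(",")
        data = read_mem(int(addr_s, 16), int(len_s, 16))
        if data is None:
            return frame("E01")  # read failed
        return frame(data.hex())
    return frame("")
```

LLDB connecting to such a server could then read from a vmcore or /dev/mem without any in-tree plugin, at the cost of running a separate process.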


Plan C: converting vmcores
==
Our final option, one that's practically implemented already is to
require the user to explicitly convert vmcore into an ELF core
understood by LLDB.  This is the simplest solution but it has a few
drawbacks:

1. it is limited to minidumps right now

2. it requires storing a converted coredump which means that at least
temporarily it doubles the disk space use

3. there is no possibility of cleanly supporting live kernel memory
operations and therefore reaching KGDB feature parity

We could create a wrapper to avoid having users convert coredumps
explicitly but well, we think other options are better.


WDYT?


[1] https://github.com/Moritz-Systems/libfbsdvmcore



Having a new plugin for opening these kinds of core f

Re: [lldb-dev] No script in lldb of build

2021-12-06 Thread David Spickett via lldb-dev
Can you link to/provide the build commands you used? It will help in
the case this is not a simple issue.

> there is no embedded script interpreter in this mode.

Probably because it didn't find Python (and/or LUA but I don't have
experience with that). To find out why, try passing
"-DLLDB_ENABLE_PYTHON=ON" to the initial cmake command
(LLDB_ENABLE_LUA if you want LUA). It defaults to auto which means if
it doesn't find Python it'll silently continue, with "ON" it'll print
an error and stop.

There are others on the list who use MacOS who can hopefully help from there.

On Sun, 5 Dec 2021 at 20:02, Pi Pony via lldb-dev
 wrote:
>
> Hello,
>
> I build lldb for macOS and tried to get into script but I get this error 
> message: there is no embedded script interpreter in this mode.
>
> I appreciate any help you can provide
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] No script in lldb of build

2021-12-05 Thread Pi Pony via lldb-dev
Hello,

I build lldb for macOS and tried to get into script but I get this error
message: there is no embedded script interpreter in this mode.

I appreciate any help you can provide
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Can't build on Mac after https://reviews.llvm.org/D113650

2021-12-02 Thread Greg Clayton via lldb-dev
Some recent python changes have stopped my cmake configure from working on 
fresh checkout on macOS. Details in the comments of the diff in 
https://reviews.llvm.org/D113650 as to how I am configuring cmake. If anyone 
has any example of how to properly set the python stuff to work around the 
issue, it would be most appreciated. Else I would like to revert this diff and 
all associated fixes that came in afterwards so that I can get some work done.


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] Release 13.0.1-rc1 has been tagged

2021-12-02 Thread Tom Stellard via lldb-dev

Re-sending and dropping the Fedora list, which I accidentally cc'd.

On 12/2/21 08:35, Tom Stellard wrote:

On 12/2/21 06:45, Nemanja Ivanovic wrote:

Hi Tom,

would it be OK to directly send you git hashes for patches we would like back 
ported until the bugzilla transition completes?



Yes, that's fine.

-Tom


On Tue, Nov 30, 2021 at 1:08 AM Tom Stellard via cfe-dev <cfe-...@lists.llvm.org> wrote:

    Hi,

    I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.

    There is still time to submit fixes for the final 13.0.1.  I'll give more
    details about timelines and how to do this once the bugzilla migration is
    complete.  Currently, bugzilla is read-only, so we can't submit any fixes
    there.

    -Tom

    ___
    cfe-dev mailing list
    cfe-...@lists.llvm.org 
    https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev 






___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] Release 13.0.1-rc1 has been tagged

2021-12-02 Thread Tom Stellard via lldb-dev

On 12/2/21 06:45, Nemanja Ivanovic wrote:

Hi Tom,

would it be OK to directly send you git hashes for patches we would like back 
ported until the bugzilla transition completes?



Yes, that's fine.

-Tom


On Tue, Nov 30, 2021 at 1:08 AM Tom Stellard via cfe-dev <cfe-...@lists.llvm.org> wrote:

Hi,

I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.

There is still time to submit fixes for the final 13.0.1.  I'll give more
details about timelines and how to do this once the bugzilla migration is
complete.  Currently, bugzilla is read-only, so we can't submit any fixes
there.

-Tom

___
cfe-dev mailing list
cfe-...@lists.llvm.org 
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev 




___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-12-02 Thread David Spickett via lldb-dev
> 1. The (older) "full memory" coredumps that use an ELF container.
>
> 2. The (newer) minidumps that dump only the active memory and use
a custom format.

Maybe a silly question, is the "minidumps" here the same sort of
minidump as lldb already supports
(https://chromium.googlesource.com/breakpad/breakpad/+/master/docs/getting_started_with_breakpad.md#the-minidump-file-format)?
Or mini meaning small and/or sparse relative to the ELF container core
files.

I see that the minidump tests use yaml2obj to make their files, but if
you end up only needing 1 file and it would need changes to yaml2obj
probably not worth pursuing.

On Thu, 2 Dec 2021 at 13:38, Michał Górny  wrote:
>
> On Thu, 2021-12-02 at 11:50 +, David Spickett wrote:
> > > Right now, the idea is that when the kernel crashes, the developer can
> > > take the vmcore file use LLDB to look the kernel state up.
> >
> > Thanks for the explanation. (FWIW your first email is clear now that I
> > read it properly but this still helped me :))
> >
> > > 2) How to integrate "live kernel" support into the current user
> > > interface?  I don't think we should make major UI modifications to
> > > support this specific case but I'd also like to avoid gross hacks.
> >
> > Do you think it will always be one or the other, corefile or live
> > memory? I assume you wouldn't want to fall back to live memory because
> > that memory might not have been in use at the time of the core dump.
>
> Yes, it's always one or the other.  When you're debugging crashed
> kernel, you want to see the state of the crashed kernel and not
> the kernel that's running right now.
>
> Reading the memory of running kernel seems less useful but I've been
> told that it sometimes helps debugging non-crash kernel bugs.
>
> > But I'm thinking about debuggers where they use the ELF file as a
> > quicker way to read memory. Not sure if lldb does this already but you
> > could steal some ideas from there if so.
> >
> > Using /dev/mem as the path seems fine unless you do need some
> > combination of that and a corefile. Is /dev/mem format identical to
> > the corefile format? (probably not an issue anyway because the plugin
> > is what will decide how to use it)
>
> No, the formats are distinct (well, /dev/mem doesn't really have
> a container format, to be precise) but libkvm distinguishes this case
> and handles it specially.
>
> > Your plans B and C seem like they are enablement of the initial use
> > case but have limited scope for improvements. The gdb-remote wrapper
> > for example would work fine but would you hit issues where the current
> > FreeBSD plugin is making userspace assumptions? For example the
> > AArch64 Linux plugin assumes that addresses will be in certain ranges,
> > so if you connected it to an in kernel stub you'd probably get some
> > surprises.
> >
> > So I agree a new plugin would make the most sense. Only reason I'd be
> > against it is if it added significant maintenance or build issues but
> > I'm not aware of any. (beyond checking for some libraries and plenty
> > of bits of llvm do that) And it'll be able to give the best
> > experience.
>
> Well, my initial attempt turned out quite trivial, primarily because
> the external library does most of the work:
>
> https://reviews.llvm.org/D114911
>
> Right now it just supports reading memory and printing variables.
> I still need to extend it to recognize kernel threads through the memory
> dump, and then add support for grabbing registers out of that to get
> backtraces.
>
> > Do you have a plan to test this if it is an in tree plugin? Will the
> > corefiles take up a lot of space or would you be able to craft minimal
> > files just for testing?
>
> I have some ideas but I don't have small core files right now.  I need
> to write more code to determine what exactly is necessary, and then
> decide to pursue either:
>
> a. trying to build a minimal FreeBSD kernel and run it in a VM with
> minimal amount of RAM to get a small minicore
>
> b. trying to strip unnecessary data from real minicore
>
> c. trying to construct a minicore file directly
>
> But as I said, I don't have enough data to decide which route would
> involve the least amount of work.
>
> --
> Best regards,
> Michał Górny
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] Release 13.0.1-rc1 has been tagged

2021-12-02 Thread Dimitry Andric via lldb-dev
On 30 Nov 2021, at 07:07, Tom Stellard via llvm-dev  
wrote:
> 
> I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.

For 13.0.1 rc1, I have built and tested on both FreeBSD 12 and 13. No
additional patches were needed.

For the 32-builds I used -no-flang, as flang is currently not 32-bit
clean, and I do not expect it will ever be.


Main results on amd64-freebsd12:

  Skipped: 3 (13.0.0: 3)
  Unsupported:  6353 (13.0.0:  6353)
  Passed : 91833 (13.0.0: 91836)
  Expectedly Failed  :   320 (13.0.0:   320)
  Timed Out  : 1 (13.0.0: 0)
  Failed :   301 (13.0.0:   294)
  Unexpectedly Passed: 2 (13.0.0: 2)

Test suite results on amd64-freebsd12:

  Passed: 2419 (13.0.0: 2419)
  Failed:3 (13.0.0:3)


Main results on amd64-freebsd13:

  Skipped: 3 (13.0.0: 3)
  Unsupported:  6352 (13.0.0:  6352)
  Passed : 91797 (13.0.0: 91841)
  Passed With Retry  : 0 (13.0.0: 1)
  Expectedly Failed  :   320 (13.0.0:   320)
  Timed Out  : 2 (13.0.0: 1)
  Failed :   337 (13.0.0:   324)
  Unexpectedly Passed: 2 (13.0.0: 2)

Test suite results on amd64-freebsd13:

  Passed: 2419 (13.0.0: 2419)
  Failed:3 (13.0.0:3)


Main results on i386-freebsd12:

  Skipped: 3 (13.0.0: 3)
  Unsupported:  4738 (13.0.0:  4738)
  Passed : 87561 (13.0.0: 87556)
  Expectedly Failed  :   295 (13.0.0:   295)
  Failed :   198 (13.0.0:   198)
  Unexpectedly Passed: 1 (13.0.0: 1)

Main results on i386-freebsd13:

  Skipped: 3 (13.0.0: 3)
  Unsupported:  4738 (13.0.0:  4738)
  Passed : 87558 (13.0.0: 87554)
  Passed With Retry  : 1 (13.0.0: 0)
  Expectedly Failed  :   295 (13.0.0:   295)
  Failed :   200 (13.0.0:   200)
  Unexpectedly Passed: 1 (13.0.0: 1)


Uploaded:
SHA256 (clang+llvm-13.0.1-rc1-amd64-unknown-freebsd12.tar.xz) = 
11b38cf5de77b72881b8eaed9571af32993ed2aea6c93b6826f0e1dc9ab65f76
SHA256 (clang+llvm-13.0.1-rc1-amd64-unknown-freebsd13.tar.xz) = 
5320a41da51ba451f080c10d2a16d195265e4e5756642863daeb0be9734c3124
SHA256 (clang+llvm-13.0.1-rc1-i386-unknown-freebsd12.tar.xz) = 
175ef64e9922783ecb2e0b3b67f39f9a64de22f1f3ecf3548a8d14912a93
SHA256 (clang+llvm-13.0.1-rc1-i386-unknown-freebsd13.tar.xz) = 
84d49f5bce83fed921c002e4e4b57315623875c4b7f27ff376bc1eab31850551

-Dimitry



signature.asc
Description: Message signed with OpenPGP
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-12-02 Thread David Spickett via lldb-dev
> Right now, the idea is that when the kernel crashes, the developer can
> take the vmcore file use LLDB to look the kernel state up.

Thanks for the explanation. (FWIW your first email is clear now that I
read it properly but this still helped me :))

> 2) How to integrate "live kernel" support into the current user
> interface?  I don't think we should make major UI modifications to
> support this specific case but I'd also like to avoid gross hacks.

Do you think it will always be one or the other, corefile or live
memory? I assume you wouldn't want to fall back to live memory because
that memory might not have been in use at the time of the core dump.
But I'm thinking about debuggers where they use the ELF file as a
quicker way to read memory. Not sure if lldb does this already but you
could steal some ideas from there if so.

Using /dev/mem as the path seems fine unless you do need some
combination of that and a corefile. Is /dev/mem format identical to
the corefile format? (probably not an issue anyway because the plugin
is what will decide how to use it)

Your plans B and C seem like they are enablement of the initial use
case but have limited scope for improvements. The gdb-remote wrapper
for example would work fine but would you hit issues where the current
FreeBSD plugin is making userspace assumptions? For example the
AArch64 Linux plugin assumes that addresses will be in certain ranges,
so if you connected it to an in kernel stub you'd probably get some
surprises.

So I agree a new plugin would make the most sense. Only reason I'd be
against it is if it added significant maintenance or build issues but
I'm not aware of any. (beyond checking for some libraries and plenty
of bits of llvm do that) And it'll be able to give the best
experience.

Do you have a plan to test this if it is an in tree plugin? Will the
corefiles take up a lot of space or would you be able to craft minimal
files just for testing?

On Thu, 2 Dec 2021 at 10:03, Michał Górny  wrote:
>
> On Thu, 2021-12-02 at 09:40 +, David Spickett wrote:
> > Can you give an example workflow of how these core files are used by a
> > developer? For some background.
>
> Right now, the idea is that when the kernel crashes, the developer can
> take the vmcore file use LLDB to look the kernel state up.  Initially,
> this means reading the "raw" memory, i.e. looking up basic symbol values
> but eventually (like kGDB) we'd like to add basic support for looking up
> kernel thread states.
>
> > Most of my experience is in userspace, the corefile is "offline" debug
> > and then you have "live" debug of the running process. Is that the
> > same here or do we have a mix since you can access some of the live
> > memory after the core has been dumped?
>
> It's roughly the same, i.e. you either use a crash dump (i.e. saved
> kernel state) or you use /dev/mem to read memory from the running
> kernel.
>
> > I'm wondering if a FreeBSD Kernel plugin would support these corefiles
> > and/or live debug, or if they are just two halves of the same
> > solution. Basically, would you end up with a FreeBSDKernelCoreDump and
> > a FreeBSDKernelLive plugin?
>
> I think one plugin is the correct approach here.  Firstly, because
> the interface for reading memory is abstracted out to a single library
> and the API is the same for both cases.  Secondly, because the actual
> interpreting logic would also be shared.
>
> --
> Best regards,
> Michał Górny
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-12-02 Thread Michał Górny via lldb-dev
On Thu, 2021-12-02 at 09:40 +, David Spickett wrote:
> Can you give an example workflow of how these core files are used by a
> developer? For some background.

Right now, the idea is that when the kernel crashes, the developer can
take the vmcore file use LLDB to look the kernel state up.  Initially,
this means reading the "raw" memory, i.e. looking up basic symbol values
but eventually (like kGDB) we'd like to add basic support for looking up
kernel thread states.

> Most of my experience is in userspace, the corefile is "offline" debug
> and then you have "live" debug of the running process. Is that the
> same here or do we have a mix since you can access some of the live
> memory after the core has been dumped?

It's roughly the same, i.e. you either use a crash dump (i.e. saved
kernel state) or you use /dev/mem to read memory from the running
kernel.

> I'm wondering if a FreeBSD Kernel plugin would support these corefiles
> and/or live debug, or if they are just two halves of the same
> solution. Basically, would you end up with a FreeBSDKernelCoreDump and
> a FreeBSDKernelLive plugin?

I think one plugin is the correct approach here.  Firstly, because
the interface for reading memory is abstracted out to a single library
and the API is the same for both cases.  Secondly, because the actual
interpreting logic would also be shared.

-- 
Best regards,
Michał Górny

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] Release 13.0.1-rc1 has been tagged

2021-12-02 Thread Diana Picus via lldb-dev
Hi,

Uploaded armv7 & aarch64 Ubuntu binaries and also aarch64 Windows:
baa53279469e387f333cd90d9e8a30973c8a5d884dd893862435967d40c1a0e9
 clang+llvm-13.0.1-rc1-aarch64-linux-gnu.tar.xz
1a523df1cd1a8f64158f602a4e2ce1ad845d322d9dda11225b3d96b2c957203e
 clang+llvm-13.0.1-rc1-armv7a-linux-gnueabihf.tar.xz
5c363a8e145e4b9985403aa89f496c550ff68423febd350d2a70c0f993b00b29
 LLVM-13.0.1-rc1-woa64.zip

Same test results as 13.0.0.

Cheers,
Diana

On Tue, 30 Nov 2021 at 07:07, Tom Stellard via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> Hi,
>
> I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.
>
> There is still time to submit fixes for the final 13.0.1.  I'll give more
> details about timelines and how to do this once the bugzilla migration is
> complete.  Currently, bugzilla is read-only, so we can't submit any fixes
> there.
>
> -Tom
>
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] Release 13.0.1-rc1 has been tagged

2021-11-30 Thread Tom Stellard via lldb-dev

On 11/30/21 12:35, Brooks Davis wrote:

On Mon, Nov 29, 2021 at 10:07:52PM -0800, Tom Stellard via cfe-dev wrote:

Hi,

I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.


There don't seem to be llvm-project tarballs.  I see:

https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1-rc1/llvm-13.0.1rc1.src.tar.xz

but not the expected:

https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1-rc1/llvm-project-13.0.1rc1.src.tar.xz



Thanks for catching that, I've uploaded this and the other missing files.

-Tom


Thanks,
Brooks



___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] Release 13.0.1-rc1 has been tagged

2021-11-30 Thread Brooks Davis via lldb-dev
On Mon, Nov 29, 2021 at 10:07:52PM -0800, Tom Stellard via cfe-dev wrote:
> Hi,
> 
> I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.

There don't seem to be llvm-project tarballs.  I see:

https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1-rc1/llvm-13.0.1rc1.src.tar.xz

but not the expected:

https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1-rc1/llvm-project-13.0.1rc1.src.tar.xz

Thanks,
Brooks


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Adding support for FreeBSD kernel coredumps (and live memory lookup)

2021-11-30 Thread Michał Górny via lldb-dev
Hi,

I'm working on a FreeBSD-sponsored project aiming at improving LLDB's
support for debugging the FreeBSD kernel to achieve feature parity with
KGDB.  As a part of that, I'd like to improve LLDB's ability to work
with kernel coredumps ("vmcores"), plus add the ability to read kernel
memory via special character device /dev/mem.


The FreeBSD kernel supports two coredump formats that are of interest to
us:

1. The (older) "full memory" coredumps that use an ELF container.

2. The (newer) minidumps that dump only the active memory and use
a custom format.

At this point, LLDB recognizes the ELF files but doesn't handle them
correctly, and outright rejects the FreeBSD minidump format.  In both
cases some additional logic is required.  This is because kernel
coredumps contain physical contents of memory, and for user convenience
the debugger needs to be able to read memory maps from the physical
memory and use them to translate virtual addresses to physical
addresses.

Unless I'm mistaken, the rationale for using this format is that
coredumps are -- after all -- usually created when something goes wrong
with the kernel.  In that case, we want the process for dumping core to
be as simple as possible, and coredumps need to be small enough to fit
in swap space (that's where they're usually written).
The complexity of memory translation should then naturally fall into
userspace processes used to debug them.

FreeBSD (following Solaris and other BSDs) provides a helper libkvm
library that can be used by userspace programs to access both coredumps
and running kernel memory.  Additionally, we have split the routines
related to coredumps and made them portable to other operating systems
via libfbsdvmcore [1].  We have also included a program that can convert
minidump into a debugger-compatible ELF core file.


We'd like to discuss the possible approaches to integrating this
additional functionality to LLDB.  At this point, our goal is to make it
possible for LLDB to correctly read memory from coredumps and live
system.


Plan A: new FreeBSDKernel plugin
================================

I think the preferable approach is to write a new plugin that would
enable out-of-the-box support for the new functions in LLDB.  The plugin
would be based on using both libraries.  When available, libfbsdvmcore
will be used as the primary provider for vmcore support on all operating
systems.  Additionally, libkvm will be usable on FreeBSD as a fallback
provider for coredump support, and as the provider of live memory
support.

The two main challenges with this approach are:

1) "Full memory" vmcores are currently recognized by LLDB's elf-core
plugin.  I haven't investigated LLDB's plugin architecture in detail yet
but I think the cleanest solution here would be to teach elf-core to
distinguish and reject FreeBSD vmcores, in order to have the new plugin
handle them.

2) How to integrate "live kernel" support into the current user
interface?  I don't think we should make major UI modifications to
support this specific case but I'd also like to avoid gross hacks.
My initial thought is to allow specifying "/dev/mem" as core path, that
would match how libkvm handles it.

Nevertheless, I think this is the cleanest approach and I think we
should go with it if possible.


Plan B: GDB Remote Protocol-based wrapper
=========================================
If we cannot integrate FreeBSD vmcore support into LLDB directly,
I think the next best approach is to create a minimal GDB Remote
Protocol server for it.  The rough idea is that the server implements
the minimal subset of the protocol necessary for LLDB to connect,
and implements memory read operations via the aforementioned libraries.

The advantage of this solution is that it is still relatively clean
and can be implemented outside LLDB.  It still provides quite good
performance but probably requires more work than the alternatives
and does not provide out-of-box support in LLDB.


Plan C: converting vmcores
==========================
Our final option, one that's practically implemented already, is to
require the user to explicitly convert vmcore into an ELF core
understood by LLDB.  This is the simplest solution but it has a few
drawbacks:

1. it is limited to minidumps right now

2. it requires storing a converted coredump which means that at least
temporarily it doubles the disk space use

3. there is no possibility of cleanly supporting live kernel memory
operations, and therefore no way of reaching KGDB feature parity

We could create a wrapper to avoid having users convert coredumps
explicitly, but we think the other options are better.


WDYT?


[1] https://github.com/Moritz-Systems/libfbsdvmcore

-- 
Best regards,
Michał Górny


___
lldb-dev mailing list
lldb-dev@lists.llvm.org

Re: [lldb-dev] Release 13.0.1-rc1 has been tagged

2021-11-30 Thread Tom Stellard via lldb-dev

+ Release-testers

On 11/29/21 22:07, Tom Stellard wrote:

Hi,

I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.

There is still time to submit fixes for the final 13.0.1.  I'll give more
details about timelines and how to do this once the bugzilla migration is
complete.  Currently, bugzilla is read-only, so we can't submit any fixes
there.

-Tom


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] Release 13.0.1-rc1 has been tagged

2021-11-30 Thread Hans Wennborg via lldb-dev
On Tue, Nov 30, 2021 at 7:08 AM Tom Stellard via cfe-dev
 wrote:
>
> Hi,
>
> I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.
>
> There is still time to submit fixes for the final 13.0.1.  I'll give more
> details about timelines and how to do this once the bugzilla migration is
> complete.  Currently, bugzilla is read-only, so we can't submit any fixes
> there.

Windows is ready:

$ sha256sum LLVM-13.0.1-rc1-win*.exe
21d0829aa4c85b81a8c37fc6fa57152e5cb01eda4b2ebaab75700fdbdc469c71
LLVM-13.0.1-rc1-win32.exe
7b5b0a5c714c67bafc6f346c36ffc15d42d956603bfa30fa6e7c47aa89be2505
LLVM-13.0.1-rc1-win64.exe

Thanks,
Hans


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Release 13.0.1-rc1 has been tagged

2021-11-29 Thread Tom Stellard via lldb-dev

Hi,

I've tagged 13.0.1-rc1.  Testers can begin testing and uploading binaries.

There is still time to submit fixes for the final 13.0.1.  I'll give more
details about timelines and how to do this once the bugzilla migration is
complete.  Currently, bugzilla is read-only, so we can't submit any fixes
there.

-Tom

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] lldb integration with (user mode) qemu

2021-11-24 Thread Pavel Labath via lldb-dev
For anyone following along, I have now posted the first patch for this 
feature here: <https://reviews.llvm.org/D114509>.


pl

On 08/11/2021 11:03, David Spickett wrote:

I actually did consider this, but it was not clear to me how this would tie in 
to the rest of lldb.
The "run qemu and connect to it" part could be reused, of course, but what else?


That part seems like a good start. I'm sure a lot of other things
would break/not work like you said but if I was shipping a modified
lldb anyway maybe I'd put the effort in to make it work nicely.

Again not something this work needs to consider. Just me relating the
idea to something I have more experience with and has some parallels
with the qemu-user idea.

On Fri, 5 Nov 2021 at 14:08, Pavel Labath via lldb-dev
 wrote:


On 04/11/2021 22:46, Jessica Clarke via lldb-dev wrote:

On Fri, Oct 29, 2021 at 05:55:02AM +, David Spickett via lldb-dev wrote:

I don't think it does. Or at least I'm not sure how do you propose to solve them (who is 
"you" in the paragraph above?).


I tend to use "you" meaning "you or I" in hypotheticals. Same thing as
"if I had" but for whatever reason I phrase it like that to include
the other person, and it does have its ambiguities.

What I was proposing is, if I was correct (which I wasn't) then having
the user "platform select qemu-user" would solve things. (which it
doesn't)


What currently happens is that when you open a non-native (say, linux) 
executable, the appropriate remote platform gets selected automatically.


...because of this. I see where the blocker is now. I thought remote
platforms had to be selected before they could claim.


If we do have a prompt, then this may not be so critical, though I expect that 
most users would still prefer it we automatically selected qemu.


Seems reasonable to put qemu-user above remote-linux. Only claiming if
qemu-user has been configured sufficiently. I guess architecture would
be the minimum setting, given we can't find the qemu binary without
it.

Is this similar in any way to how the different OS remote platforms
work? For example there is a remote-linux and a remote-netbsd, is
there enough information in the program file itself to pick just one
or is there an implicit default there too?
(I see that platform CreateInstance gets an ArchSpec but having
trouble finding where that comes from)


Please make sure you don't forget that bsd-user also exists (and after
living in a fork for many years for various boring reasons is in the
middle of being upstreamed), so don't tie it entirely to remote-linux.



I am. In fact, one of the reasons I haven't started putting up patches
yet is because I'm trying to figure out the best way to handle this. :)

My understanding (let me know if I'm wrong) is that user-mode qemu
can emulate a different architecture, but not a different OS. So, the
idea is that the "qemu" platform would forward all operations that don't
need special handling to the "host" platform. That would mean you get
freebsd behavior when running on freebsd, etc.

pl
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 52591] New: LLDB failed to lookup method names in NativePDB plugin

2021-11-23 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=52591

Bug ID: 52591
   Summary: LLDB failed to lookup method names in NativePDB plugin
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: enhancement
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: zequa...@google.com
CC: jdevliegh...@apple.com, llvm-b...@lists.llvm.org

The following lldb commands fail to lookup/set breakpoint on given method
names:
`image lookup -n A::foo()`
`b A::foo()`

The problem is that lldb passes the function base name (e.g. "foo()") to
the plugins' FindFunctions, and the NativePDB plugin can only look up
functions by their full names.

Is there a way to only pass functions' full names to FindFunctions?

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 52585] New: Crash when generating backtrace

2021-11-22 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=52585

Bug ID: 52585
   Summary: Crash when generating backtrace
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: enhancement
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: achro...@gmail.com
CC: jdevliegh...@apple.com, llvm-b...@lists.llvm.org

PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash
backtrace.
Stack dump without symbol names (ensure you have llvm-symbolizer in your PATH
or set the environment var `LLVM_SYMBOLIZER_PATH` to point to it):
0  lldb 0x0001042b8027
llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) + 39
1  lldb 0x0001042b76e5 llvm::sys::RunSignalHandlers() +
85
2  lldb 0x0001042b88d6 SignalHandler(int) + 278
3  libsystem_platform.dylib 0x7fff20368d7d _sigtramp + 29
4  libsystem_platform.dylib 0x538103a0f43e36a0 _sigtramp + 6016953833237227840
5  LLDB 0x000107c97279
lldb_private::Module::ResolveSymbolContextForAddress(lldb_private::Address
const&, lldb::SymbolContextItem, lldb_private::SymbolContext&, bool) + 409
6  LLDB 0x000107c5ee7c
lldb_private::Address::CalculateSymbolContext(lldb_private::SymbolContext*,
lldb::SymbolContextItem) const + 204
7  LLDB 0x000107fe9cb0
lldb_private::SwiftLanguageRuntime::GetRuntimeUnwindPlan(std::__1::shared_ptr,
lldb_private::RegisterContext*, bool&) + 432
8  LLDB 0x000107dba582
lldb_private::LanguageRuntime::GetRuntimeUnwindPlan(lldb_private::Thread&,
lldb_private::RegisterContext*, bool&) + 306
9  LLDB 0x000107de5b03
lldb_private::RegisterContextUnwind::InitializeNonZerothFrame() + 563
10 LLDB 0x000107de4bd0
lldb_private::RegisterContextUnwind::RegisterContextUnwind(lldb_private::Thread&,
std::__1::shared_ptr const&,
lldb_private::SymbolContext&, unsigned int, lldb_private::UnwindLLDB&) + 224
11 LLDB 0x000107e49683
lldb_private::UnwindLLDB::GetOneMoreFrame(lldb_private::ABI*) + 243
12 LLDB 0x000107e4911b
lldb_private::UnwindLLDB::AddOneMoreFrame(lldb_private::ABI*) + 331
13 LLDB 0x000107e4946d
lldb_private::UnwindLLDB::UpdateUnwindPlanForFirstFrameIfInvalid(lldb_private::ABI*)
+ 61
14 LLDB 0x000107e48fc0
lldb_private::UnwindLLDB::AddFirstFrame() + 832
15 LLDB 0x000107e49b82
lldb_private::UnwindLLDB::DoGetFrameInfoAtIndex(unsigned int, unsigned long
long&, unsigned long long&, bool&) + 50
16 LLDB 0x000107dfa864
lldb_private::StackFrameList::GetFramesUpTo(unsigned int) + 2308
17 LLDB 0x000107dfba69
lldb_private::StackFrameList::GetFrameAtIndex(unsigned int) + 153
18 LLDB 0x000107e24bee
lldb_private::Thread::SelectMostRelevantFrame() + 62
19 LLDB 0x000107e24ea9 lldb_private::Thread::WillStop()
+ 89
20 LLDB 0x000107e2df25
lldb_private::ThreadList::ShouldStop(lldb_private::Event*) + 1109
21 LLDB 0x000107dd6015
lldb_private::Process::ShouldBroadcastEvent(lldb_private::Event*) + 421
22 LLDB 0x000107dd2648
lldb_private::Process::HandlePrivateEvent(std::__1::shared_ptr&)
+ 280
23 LLDB 0x000107dd6df5
lldb_private::Process::RunPrivateStateThread(bool) + 1477
24 LLDB 0x000107dd6435
lldb_private::Process::PrivateStateThread(void*) + 21
25 LLDB 0x000107d1c617
lldb_private::HostNativeThreadBase::ThreadCreateTrampoline(void*) + 103
26 libsystem_pthread.dylib  0x7fff203238fc _pthread_start + 224
27 libsystem_pthread.dylib  0x7fff2031f443 thread_start + 15

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 52582] New: LLDB confuses multiple global variables with the same name (even when they are in different namespace)

2021-11-22 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=52582

Bug ID: 52582
   Summary: LLDB confuses multiple global variables with the same
name (even when they are in different namespace)
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: enhancement
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: v...@google.com
CC: jdevliegh...@apple.com, llvm-b...@lists.llvm.org

Background
Our binary has multiple global variables with the same basename but in
different namespaces. (Eg., lld::macho::symtab vs lld::elf::symtab,
lld::macho::in vs lld::elf::in, ...)

Repro:
Run the LLD linker under LLDB: (best to use MachO port to demonstrate this
bug):


lldb -- ld64.lld.darwinnew <... rest of args>

Set a breakpoint anywhere - but for best effect, here:
https://github.com/llvm/llvm-project/blob/2782cb8da0b3c180fa7c8627cb255a026f3d25a2/lld/MachO/Driver.cpp#L1141

(ie., right after `symtab` is set)

Try and print `symtab` or `in.got`:

```
(lldb) b Driver.cpp:1139
Breakpoint 1: 3 locations.
(lldb) run
Process 13244 launched:
'/Users/vyng/repo/llvm-project/build_lld/bin/ld64.lld.darwinnew' (x86_64)
Process 13244 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.2
frame #0: 0x000100a0856f
ld64.lld.darwinnew`lld::macho::link(argsArr=ArrayRef<const char *> @
0x7ff7bfefeeb0, canExitEarly=true, stdoutOS=0x0001093b83a8,
stderrOS=0x0001093b8410) at Driver.cpp:1139:12
   1136 
   1137   config = make<Configuration>();
   1138   symtab = make<SymbolTable>();
-> 1139   target = createTargetInfo(args);
   1140   depTracker =
   1141       make<DependencyTracker>(args.getLastArgValue(OPT_dependency_info));
   1142   if (errorCount())
Target 0: (ld64.lld.darwinnew) stopped.
(lldb) print symtab
(lld::elf::SymbolTable *) $0 = nullptr
(lldb) print in.got
(lld::elf::GotSection *) $1 = nullptr
(lldb) print lld::macho::in.got
```

Expected:
Since the code that the breakpoint stopped on referred to the variable as
simply `symtab`, I would have expected LLDB's print to allow me to also use
`symtab`.

What actually happened:
LLDB always picked the globals from lld::elf .
(presumably because it's alphabetically before ::macho)


-
(Tested this with GDB and it didn't have this issue)

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 49018] Incorrect help text for memory write -f and -s options

2021-11-22 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=49018

Venkata Ramanaiah Nalamothu  changed:

   What|Removed |Added

   Assignee|lldb-dev@lists.llvm.org |ramana.venka...@gmail.com
 Status|NEW |CONFIRMED

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Why can't I break on an address resulting in unresolved?

2021-11-17 Thread Pi Pony via lldb-dev
Hello,

Why can't LLDB break on an address? What does it mean when it says
"unresolved"? And how can I fix it?

Thanks in advance

See this for more details: https://bugs.llvm.org/show_bug.cgi?id=22323
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

