Akihiko Odaki <akihiko.od...@daynix.com> writes:

(re-adding qemu-devel which my mail client dropped a few messages ago, sorry)

> On 2023/11/06 19:46, Alex Bennée wrote:
>> Akihiko Odaki <akihiko.od...@daynix.com> writes:
>> 
>>> On 2023/11/06 18:30, Alex Bennée wrote:
>>>> Akihiko Odaki <akihiko.od...@daynix.com> writes:
>>>>
>>>>> On 2023/11/04 4:59, Alex Bennée wrote:
>>>>>> We can only request a list of registers once the vCPU has been
>>>>>> initialised, so the user needs to call the find function either at
>>>>>> vCPU initialisation or during the translation phase. We don't expose
>>>>>> the reg number to the plugin, instead hiding it behind an opaque
>>>>>> handle. This allows for a bit of future proofing should the internals
>>>>>> need to be changed, while also being hashed against the CPUClass so we
>>>>>> can handle different register sets per-vCPU in heterogeneous situations.
>>>>>> Having an internal state within the plugins also allows us to expand
>>>>>> the interface in future (for example providing callbacks on register
>>>>>> change if the translator can track changes).
>>>>>> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1706
>>>>>> Cc: Akihiko Odaki <akihiko.od...@daynix.com>
>>>>>> Based-on: <20231025093128.33116-18-akihiko.od...@daynix.com>
>>>>>> Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
>>>>>> +struct qemu_plugin_register;
>>>>>> +
>>>>>> +/**
>>>>>> + * typedef qemu_plugin_reg_descriptor - register descriptions
>>>>>> + *
>>>>>> + * @name: register name
>>>>>> + * @handle: opaque handle for retrieving value with qemu_plugin_read_register
>>>>>> + * @feature: optional feature descriptor, can be NULL
>>>>>> + */
>>>>>> +typedef struct {
>>>>>> +    char name[32];
>>>>>> +    struct qemu_plugin_register *handle;
>>>>>> +    const char *feature;
>>>>>> +} qemu_plugin_reg_descriptor;
>>>>>> +
>>>>>> +/**
>>>>>> + * qemu_plugin_find_registers() - return register list
>>>>>> + * @vcpu_index: vcpu to query
>>>>>> + * @reg_pattern: register name pattern
>>>>>> + *
>>>>>> + * Returns a GArray of qemu_plugin_reg_descriptor or NULL. The
>>>>>> + * caller frees the array. As the register set of a given vCPU is
>>>>>> + * only available once the vCPU is initialised, if you want to
>>>>>> + * monitor registers from the start you should call this from a
>>>>>> + * qemu_plugin_register_vcpu_init_cb() callback.
>>>>>> + */
>>>>>> +GArray *qemu_plugin_find_registers(unsigned int vcpu_index, const char *reg_pattern);
>>>>>
>>>>> A pattern may be convenient for humans but not for machines. My
>>>>> motivation for introducing the feature is to generate traces consumable
>>>>> by trace-based simulators. Such a plugin needs an exact match of
>>>>> registers.
>>>> That's true - but we don't have any such users in the code base.
>>>> However, for exact matches the details are in qemu_plugin_reg_descriptor
>>>> so you can always filter there if you want.
>>>
>>> I designed the feature to read registers for users outside of the code
>>> base, so the code base has no practical user. I added the feature to
>>> log register values to execlog, but it's only for demonstration and
>>> useless for practical use;
>> I wouldn't say it's useless - and it is important to have in-tree
>> code to exercise the various parts of the API we expose.
>
> I mean it is useless except for demonstration. Having some code for
> demonstration is good but we shouldn't overfit the API to it.
>
>> To be clear is your objection just to the way
>> qemu_plugin_find_registers() works or the whole concept of using a
>> handle instead of the register number? I'm certainly open to better ways
>> of doing the former but as explained in the commit I think the handle
>> based approach is a more hygienic interface that gives us scope to
>> improve it going forward.
>
> Yes, my major concern is the pattern matching.

OK. Another potential consumer I thought about while implementing the
internal API was HMP, which would also benefit from a more human-friendly
wildcard-style search. So I think the resolution of this is to have two
APIs: one returning a list of qemu_plugin_reg_descriptor and one returning
a single descriptor only on an exact match.

I thought exposing features and registers in two calls was a bit clunky
though, so how about:

  struct qemu_plugin_reg_descriptor *
    qemu_plugin_find_register(unsigned int vcpu_index,
                              const char *name,
                              const char *gdb_feature);

which will only return a result on an exact match (although I still think
register names are unique enough that you can get away without gdb_feature).

> I'm fine with the use of a pointer instead of the register number. A
> pointer is indeed more random across runs, so it will prevent the user
> from hardcoding it.
>
>> As we are so close to soft-freeze I suggest I re-spin the series to
>> include 1-12/29 and the windows bits and we can work on a better API for
>> the 9.0 release.
>
> I'm not in haste either and am fine with delaying it to the 9.0 release.

OK, I'll get as much as I can merged now and leave the final API bits for
when the tree opens up. I don't suppose you have any idea why:

  target/riscv: Use GDBFeature for dynamic XML

caused a regression? The XML generated looks identical but the
communication diverges with the riscv-csr response:

  gdbstub_io_command Received: qXfer:features:read:riscv-csr.xm     gdbstub_io_command Received: qXfer:features:read:riscv-csr.xm
  gdbstub_io_binaryreply 0x0000: 6c 3c 3f 78  6d 6c 20 76  65 7     gdbstub_io_binaryreply 0x0000: 6c 3c 3f 78  6d 6c 20 76  65 7
  gdbstub_io_binaryreply 0x0010: 31 2e 30 22  3f 3e 3c 21  44 4     gdbstub_io_binaryreply 0x0010: 31 2e 30 22  3f 3e 3c 21  44 4
  gdbstub_io_binaryreply 0x0020: 66 65 61 74  75 72 65 20  53 5     gdbstub_io_binaryreply 0x0020: 66 65 61 74  75 72 65 20  53 5
  gdbstub_io_binaryreply 0x0030: 67 64 62 2d  74 61 72 67  65 7     gdbstub_io_binaryreply 0x0030: 67 64 62 2d  74 61 72 67  65 7
  gdbstub_io_binaryreply 0x0040: 3c 66 65 61  74 75 72 65  20 6     gdbstub_io_binaryreply 0x0040: 3c 66 65 61  74 75 72 65  20 6
  gdbstub_io_binaryreply 0x0050: 72 67 2e 67  6e 75 2e 67  64 6     gdbstub_io_binaryreply 0x0050: 72 67 2e 67  6e 75 2e 67  64 6
  gdbstub_io_binaryreply 0x0060: 2e 63 73 72  22 3e 3c 72  65 6     gdbstub_io_binaryreply 0x0060: 2e 63 73 72  22 3e 3c 72  65 6
  gdbstub_io_binaryreply 0x0070: 22 66 66 6c  61 67 73 22  20 6     gdbstub_io_binaryreply 0x0070: 22 66 66 6c  61 67 73 22  20 6
  gdbstub_io_binaryreply 0x0080: 3d 22 36 34  22 20 72 65  67 6     gdbstub_io_binaryreply 0x0080: 3d 22 36 34  22 20 72 65  67 6
  gdbstub_io_binaryreply 0x0090: 22 2f 3e 3c  72 65 67 20  6e 6   | gdbstub_io_binaryreply 0x0090: 22 20 74 79  70 65 3d 22  69 6
  gdbstub_io_binaryreply 0x00a0: 6d 22 20 62  69 74 73 69  7a 6   | gdbstub_io_binaryreply 0x00a0: 65 67 20 6e  61 6d 65 3d  22 6
  gdbstub_io_binaryreply 0x00b0: 72 65 67 6e  75 6d 3d 22  36 3   | gdbstub_io_binaryreply 0x00b0: 74 73 69 7a  65 3d 22 36  34 2
  gdbstub_io_binaryreply 0x00c0: 67 20 6e 61  6d 65 3d 22  66 6   | gdbstub_io_binaryreply 0x00c0: 6d 3d 22 36  38 22 20 74  79 7
  gdbstub_io_binaryreply 0x00d0: 74 73 69 7a  65 3d 22 36  34 2   | gdbstub_io_binaryreply 0x00d0: 22 2f 3e 3c  72 65 67 20  6e 6
  gdbstub_io_binaryreply 0x00e0: 6d 3d 22 36  39 22 2f 3e  3c 7   | gdbstub_io_binaryreply 0x00e0: 73 72 22 20  62 69 74 73  69 7
  gdbstub_io_binaryreply 0x00f0: 65 3d 22 63  79 63 6c 65  22 2   | gdbstub_io_binaryreply 0x00f0: 20 72 65 67  6e 75 6d 3d  22 3
  gdbstub_io_binaryreply 0x0100: 65 3d 22 36  34 22 20 72  65 6   | gdbstub_io_binaryreply 0x0100: 65 3d 22 69  6e 74 22 2f  3e 3
  gdbstub_io_binaryreply 0x0110: 31 33 38 22  2f 3e 3c 72  65 6   | gdbstub_io_binaryreply 0x0110: 6d 65 3d 22  63 79 63 6c  65 2
  gdbstub_io_binaryreply 0x0120: 22 74 69 6d  65 22 20 62  69 7   | gdbstub_io_binaryreply 0x0120: 7a 65 3d 22  36 34 22 20  72 6
  gdbstub_io_binaryreply 0x0130: 36 34 22 20  72 65 67 6e  75 6   | gdbstub_io_binaryreply 0x0130: 33 31 33 38  22 20 74 79  70 6
  gdbstub_io_binaryreply 0x0140: 22 2f 3e 3c  72 65 67 20  6e 6   | gdbstub_io_binaryreply 0x0140: 2f 3e 3c 72  65 67 20 6e  61 6
  gdbstub_io_binaryreply 0x0150: 73 74 72 65  74 22 20 62  69 7   | gdbstub_io_binaryreply 0x0150: 65 22 20 62  69 74 73 69  7a 6
  gdbstub_io_binaryreply 0x0160: 36 34 22 20  72 65 67 6e  75 6   | gdbstub_io_binaryreply 0x0160: 72 65 67 6e  75 6d 3d 22  33 3
  gdbstub_io_binaryreply 0x0170: 22 2f 3e 3c  2f 66 65 61  74 7   | gdbstub_io_binaryreply 0x0170: 70 65 3d 22  69 6e 74 22  2f 3
                                                                  > gdbstub_io_binaryreply 0x0180: 61 6d 65 3d  22 69 6e 73  74 7
                                                                  > gdbstub_io_binaryreply 0x0190: 74 73 69 7a  65 3d 22 36  34 2
                                                                  > gdbstub_io_binaryreply 0x01a0: 6d 3d 22 33  31 34 30 22  20 7
                                                                  > gdbstub_io_binaryreply 0x01b0: 6e 74 22 2f  3e 3c 2f 66  65 6
  gdbstub_io_command Received: qTStatus                             gdbstub_io_command Received: qTStatus
  gdbstub_io_reply Sent:                                            gdbstub_io_reply Sent:
  gdbstub_io_command Received: ?                                    gdbstub_io_command Received: ?
  gdbstub_io_reply Sent: T05thread:p2003b4.2003b4;                | gdbstub_io_reply Sent: T05thread:p2011b6.2011b6;

Was this the reason for the misa_max cleanups?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro
