CV-Bowen opened a new pull request, #18036:
URL: https://github.com/apache/nuttx/pull/18036

   ## Summary
   This PR replaces the `metal/atomic.h` include with `nuttx/atomic.h` in the
rpmsg header file to keep it consistent with NuttX's atomic operations
implementation.
   
   The atomic operations in NuttX have been migrated to NuttX's native atomic
implementation; this change updates the include directive to reflect that
migration.
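   For illustration only (this sketch is not part of the patch): `nuttx/atomic.h` exposes the standard C11-style atomic operations that rpmsg code relies on, such as reference counting. The sketch below uses `<stdatomic.h>` directly, since building against `nuttx/atomic.h` requires the NuttX tree; the `refcount` variable is a hypothetical example, not an identifier from the patch.
   
   ```c
   #include <stdatomic.h>
   #include <stdio.h>
   
   int main(void)
   {
     /* Hypothetical buffer reference count, as rpmsg-style code might keep */
     atomic_int refcount = 0;
   
     atomic_fetch_add(&refcount, 1);  /* take a reference */
     atomic_fetch_add(&refcount, 1);  /* take another */
     atomic_fetch_sub(&refcount, 1);  /* drop one */
   
     printf("refcount=%d\n", atomic_load(&refcount));
     return 0;
   }
   ```
   
   Code written against these C11 operations compiles unchanged whether the header resolves to compiler builtins or to NuttX's fallback implementation, which is what makes the include swap safe.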
   
   ## Impact
   Rpmsg only (header include change, no functional change intended).
   
   ## Testing
   Built and ran the `qemu-armv8a:rpserver` and `qemu-armv8a:rpproxy` configurations:
   ```shell
   ❯ qemu-system-aarch64 -cpu cortex-a53 -nographic \
   -machine virt,virtualization=on,gic-version=3 \
   -chardev stdio,id=con,mux=on -serial chardev:con \
   -object memory-backend-file,discard-data=on,id=shmmem-shmem0,mem-path=/dev/shm/my_shmem0,size=4194304,share=yes \
   -device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,addr=0xb \
   -device virtio-serial-device,bus=virtio-mmio-bus.0 \
   -chardev socket,path=/tmp/rpmsg_port_uart_socket,server=on,wait=off,id=foo \
   -device virtconsole,chardev=foo \
   -mon chardev=con,mode=readline -kernel ./nuttx/cmake_out/v8a_server/nuttx \
   -gdb tcp::7775
   [    0.000000] [ 0] [  INFO] [server] pci_register_rptun_ivshmem_driver: Register ivshmem driver, id=0, cpuname=proxy, master=1
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: pci_scan_bus for bus 0
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000600, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:00 [1b36:0008]
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000200, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:08 [1af4:1000]
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0: mask64=fffffffe 32bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1: mask64=fffffff0 4096bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4: mask64=fffffffffffffff0 16384bytes
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000500, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:58 [1af4:1110]
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0: mask64=fffffff0 256bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2: mask64=fffffffffffffff0 4194304bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [server] ivshmem_probe: shmem addr=0x10400000 size=4194304 reg=0x10008000
   [    0.000000] [ 3] [  INFO] [server] rptun_ivshmem_probe: shmem addr=0x10400000 size=4194304
   
   NuttShell (NSH) NuttX-12.10.0
   server> 
   server> 
   server> [    0.000000] [ 0] [  INFO] [proxy] pci_register_rptun_ivshmem_driver: Register ivshmem driver, id=0, cpuname=server, master=0
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: pci_scan_bus for bus 0
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000600, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:00 [1b36:0008]
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000200, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:08 [1af4:1000]
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0: mask64=fffffffe 32bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1: mask64=fffffff0 4096bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4: mask64=fffffffffffffff0 16384bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000500, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:58 [1af4:1110]
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0: mask64=fffffff0 256bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2: mask64=fffffffffffffff0 4194304bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] ivshmem_probe: shmem addr=0x10400000 size=4194304 reg=0x10008000
   [    0.000000] [ 3] [  INFO] [proxy] rptun_ivshmem_probe: shmem addr=0x10400000 size=4194304
   [    0.000000] [ 3] [  INFO] [proxy] rptun_ivshmem_probe: Start the wdog
   
   server> uname -a
   NuttX server 12.10.0 04a9df8e34f Jan 20 2026 17:02:08 arm64 qemu-armv8a
   server> rpmsg ping all 1 1 1 1
   [    0.000000] [ 7] [ EMERG] [server] ping times: 1
   [    0.000000] [ 7] [ EMERG] [server] buffer_len: 1520, send_len: 17
   [    0.000000] [ 7] [ EMERG] [server] avg: 0 s, 15176896 ns
   [    0.000000] [ 7] [ EMERG] [server] min: 0 s, 15176896 ns
   [    0.000000] [ 7] [ EMERG] [server] max: 0 s, 15176896 ns
   [    0.000000] [ 7] [ EMERG] [server] rate: 0.008960 Mbits/sec
   [    0.000000] [ 7] [ EMERG] [server] ping times: 1
   [    0.000000] [ 7] [ EMERG] [server] buffer_len: 2024, send_len: 17
   [    0.000000] [ 7] [ EMERG] [server] avg: 0 s, 8375440 ns
   [    0.000000] [ 7] [ EMERG] [server] min: 0 s, 8375440 ns
   [    0.000000] [ 7] [ EMERG] [server] max: 0 s, 8375440 ns
   [    0.000000] [ 7] [ EMERG] [server] rate: 0.016237 Mbits/sec
   server> rpmsg dump all
   [    0.000000] [ 7] [ EMERG] [server] Dump rpmsg info between cpu (master: yes)server <==> proxy:
   [    0.000000] [ 7] [ EMERG] [server] rpmsg vq RX:
   [    0.000000] [ 7] [ EMERG] [server] rpmsg vq TX:
   [    0.000000] [ 7] [ EMERG] [server]   rpmsg ept list:
   [    0.000000] [ 7] [ EMERG] [server]     ept NS
   [    0.000000] [ 7] [ EMERG] [server]     ept rpmsg-sensor
   [    0.000000] [ 7] [ EMERG] [server]     ept rpmsg-ping
   [    0.000000] [ 7] [ EMERG] [server]     ept rpmsg-syslog
   [    0.000000] [ 7] [ EMERG] [server]   rpmsg buffer list:
   [    0.000000] [ 7] [ EMERG] [server]     RX buffer, total 8, pending 0
   [    0.000000] [ 7] [ EMERG] [server]     TX buffer, total 8, pending 0
   [    0.000000] [ 7] [ EMERG] [server] Remote: proxy2 state: 1
   [    0.000000] [ 7] [ EMERG] [server] ept NS
   [    0.000000] [ 7] [ EMERG] [server] ept rpmsg-sensor
   [    0.000000] [ 7] [ EMERG] [server] ept rpmsg-ping
   [    0.000000] [ 7] [ EMERG] [server] rpmsg_port queue RX: {used: 0, avail: 8}
   [    0.000000] [ 7] [ EMERG] [server] rpmsg buffer list:
   [    0.000000] [ 7] [ EMERG] [server] rpmsg_port queue TX: {used: 0, avail: 8}
   [    0.000000] [ 7] [ EMERG] [server] rpmsg buffer list:
   server> 
   ```
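
   As a sanity check on the ping output above, the reported rate is consistent with `send_len` × 8 bits divided by the average round-trip time. A quick standalone calculation (values copied from the first run; this helper program is illustrative, not part of the rpmsg tools):

   ```c
   #include <stdio.h>

   int main(void)
   {
     /* Values taken from the first "rpmsg ping" run in the log above */
     int send_len_bytes = 17;       /* send_len: 17 */
     double avg_ns = 15176896.0;    /* avg: 0 s, 15176896 ns */

     /* rate (Mbits/sec) = bits sent / elapsed seconds / 1e6 */
     double rate = (send_len_bytes * 8.0) / (avg_ns / 1e9) / 1e6;

     printf("rate: %f Mbits/sec\n", rate);
     return 0;
   }
   ```

   This prints a value within rounding of the logged 0.008960 Mbits/sec, confirming the reported rate reflects the payload (`send_len`) rather than the full `buffer_len`.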


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
