michallenc opened a new pull request, #3040:
URL: https://github.com/apache/nuttx-apps/pull/3040
## Summary
Commit 5330966 replaced the local static buffer with a buffer allocated by
`lib_get_pathbuffer`. Since `buffer` is now a pointer rather than an array,
`sizeof(buffer)` in the follow-up read call no longer yields the buffer size,
so the read size has to be specified explicitly as `PATH_MAX - 1`.
This fixes the `pidof` command (and consequently `pkill`, which uses
`pidof`).
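For illustration, here is a minimal standalone C sketch of the pitfall. It is
not the actual apps code; the `PATH_MAX` fallback and the `malloc` stand-in
for `lib_get_pathbuffer` are assumptions made for the example:
```c
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef PATH_MAX
#  define PATH_MAX 4096
#endif

int main(void)
{
  /* With a local array, sizeof() yields the full buffer size. */

  char local[PATH_MAX];
  printf("sizeof(local)  = %zu\n", sizeof(local));   /* PATH_MAX */

  /* After switching to a heap buffer (here malloc stands in for
   * lib_get_pathbuffer), sizeof() measures only the pointer itself:
   * 4 bytes on 32-bit targets, 8 bytes on 64-bit targets.
   */

  char *buffer = malloc(PATH_MAX);
  if (buffer == NULL)
    {
      return EXIT_FAILURE;
    }

  printf("sizeof(buffer) = %zu\n", sizeof(buffer));  /* 4 or 8 */

  /* A call like read(fd, buffer, sizeof(buffer) - 1) therefore reads
   * at most 3 or 7 bytes of the task name; the fix is to pass the real
   * size, PATH_MAX - 1, explicitly.
   */

  free(buffer);
  return EXIT_SUCCESS;
}
```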
## Impact
Fixes the behavior of `pidof` and `pkill` calls.
## Testing
Tested with the `sim:nsh` configuration.
Before the change:
```
NuttShell (NSH) NuttX-12.8.0
nsh> ps
PID GROUP PRI POLICY TYPE NPX STATE EVENT SIGMASK STACK COMMAND
0 0 0 FIFO Kthread - Ready 0000000000000000 0069616 Idle_Task
1 0 224 FIFO Kthread - Waiting Signal 0000000000000000 0067528 loop_task
2 0 224 FIFO Kthread - Waiting Semaphore 0000000000000000 0067504 hpwork 0x4013d6c0 0x4013d708
3 3 100 FIFO Task - Running 0000000000000000 0067536 nsh_main
nsh> pidof Idle_Task
nsh: pidof: no such task: Idle_Task
nsh> pidof loop_task
nsh: pidof: no such task: loop_task
nsh> pidof hpwork
2
nsh> pidof nsh_main
nsh: pidof: no such task: nsh_main
```
After the change:
```
nsh> ps
PID GROUP PRI POLICY TYPE NPX STATE EVENT SIGMASK STACK COMMAND
0 0 0 FIFO Kthread - Ready 0000000000000000 0069616 Idle_Task
1 0 224 FIFO Kthread - Waiting Signal 0000000000000000 0067528 loop_task
2 0 224 FIFO Kthread - Waiting Semaphore 0000000000000000 0067504 hpwork 0x4013d6c0 0x4013d708
3 3 100 FIFO Task - Running 0000000000000000 0067536 nsh_main
nsh> pidof Idle_Task
0
nsh> pidof loop_task
1
nsh> pidof hpwork
2
nsh> pidof nsh_main
3 0
```
`pidof hpwork` appeared to work only because a pointer is 8 bytes on my
64-bit machine, so `sizeof(buffer) - 1` still allowed a 7 byte read and the
name `hpwork` happened to fit. On 32-bit platforms only 3 bytes would be
read.