Cynerd opened a new pull request, #17278:
URL: https://github.com/apache/nuttx/pull/17278
## Summary
This fixes a few issues that I discovered in NuttX's CAN driver:
* buffer overrun when a custom RTR frame with a non-zero DLC is written
* buffer overrun when the buffer length passed to write is smaller than the
length implied by the frame's DLC
* ready-to-read being signalled even though no frame is available, when the
previous read took more than one frame out of the FIFO
## Impact
This is a bugfix of existing behavior. Applications that don't contain these
kinds of bugs won't encounter it, but an application that does can make NuttX's
CAN driver seriously misbehave, for example by sending a stream of zero-length
frames with ID 0.
## Testing
Tested on SAMv7 custom board.
The testing code for writes:
```c
int fd = open("/dev/can0", O_RDWR);
if (fd < 0) {
  syslog(LOG_ERR, "Failed to open can: %s", strerror(errno));
  return -1;
}

/* Invalid DLC overrun: dlc says one data byte, but the write length
 * is CAN_MSGLEN(2) */
struct can_msg_s frame = {
  .cm_hdr.ch_id = 0,
  .cm_hdr.ch_dlc = 1,
};
write(fd, &frame, CAN_MSGLEN(2));

/* Nonzero-size RTR: dlc is 1, but only CAN_MSGLEN(0) bytes are
 * provided */
frame = (struct can_msg_s){
  .cm_hdr.ch_id = 0,
  .cm_hdr.ch_rtr = 1,
  .cm_hdr.ch_dlc = 1,
};
write(fd, &frame, CAN_MSGLEN(0));
syslog(LOG_INFO, "RTR sent");
close(fd);
```
Output without patches (failing on first frame write):
```
[1762249616.433500] hpwork: dump_assert_info: Current Version: NuttX
12.11.0 0.13.1-15af3a0-dirty Jan 1 1980 00:00:00 arm
[1762249616.433500] hpwork: dump_assert_info: Assertion failed : at file:
chip/sam_mcan.c:3103 task: hpwork process: Kernel 0x423a91
[1762249616.433500] hpwork: up_dump_register: R0: 204050a0 R1: 00000000 R2:
00000000 R3: 00000007
[1762249616.433500] hpwork: up_dump_register: R4: 204014eb R5: 20409658 R6:
204050a0 FP: 00000000
[1762249616.433500] hpwork: up_dump_register: R8: 004ec1f9 SB: 20404a50 SL:
00000080 R11: 00000006
[1762249616.433500] hpwork: up_dump_register: IP: 00000000 SP: 2040a4d0 LR:
00422e93 PC: 00422e93
[1762249616.433500] hpwork: up_dump_register: xPSR: 60000000 BASEPRI:
00000080 CONTROL: 00000006
[1762249616.433500] hpwork: up_dump_register: EXC_RETURN: 00000000
[1762249616.433500] hpwork: dump_stackinfo: User Stack:
[1762249616.433500] hpwork: dump_stackinfo: base: 0x20409770
[1762249616.433500] hpwork: dump_stackinfo: size: 00004032
[1762249616.433500] hpwork: dump_stackinfo: sp: 0x2040a4d0
[1762249616.433500] hpwork: stack_dump: 0x2040a4b0: 004f4861 2040a4d0
00000fc0 204040c8 00000fc0 20409770 004e9e37 00422fa5
[1762249616.433500] hpwork: stack_dump: 0x2040a4d0: 004ec1f9 00000c1f
20409708 004e846a 00423a91 78bea109 5a4f3326 2040a4d0
[1762249616.433500] hpwork: stack_dump: 0x2040a4f0: 20409658 204050a0
00000000 004ec1f9 00000c1f 7474754e 20400058 20401740
[1762249616.433500] hpwork: stack_dump: 0x2040a510: 00000000 004555cd
20400038 00000000 20401494 20401468 00000000 20401494
[1762249616.433500] hpwork: stack_dump: 0x2040a530: 2040a738 00446bc9
2e323100 302e3131 20401400 20401494 2040146c 2e3081eb
[1762249616.433500] hpwork: stack_dump: 0x2040a550: 312e3331 6135312d
30613366 7269642d 4a207974 20206e61 39312031 30203038
[1762249616.433500] hpwork: stack_dump: 0x2040a570: 30303a30 0030303a
20401468 00000000 6d7261c4 20401400 20401468 2040163f
[1762249616.433500] hpwork: stack_dump: 0x2040a590: 20401468 00447d85
20401468 204014eb 20401740 20401468 20401748 00000001
[1762249616.433500] hpwork: stack_dump: 0x2040a5b0: 004f6458 20401758
00000080 004288c1 000042f0 0045565b 20401468 00000002
[1762249616.433500] hpwork: stack_dump: 0x2040a5d0: 00000000 20401468
fffffff0 204014eb 20401494 2040146c 004f6458 20401758
[1762249616.433500] hpwork: stack_dump: 0x2040a5f0: 00000080 00447d6d
00000000 20401468 00000080 20401468 20401494 004481d7
[1762249616.433500] hpwork: stack_dump: 0x2040a610: dfbfeb55 20401740
20401468 20401748 00000000 004557d9 20401468 00000003
[1762249616.433500] hpwork: stack_dump: 0x2040a630: 00000000 20401468
fffffff0 204014a7 20401494 2040146c 004f6458 20401758
[1762249616.433500] hpwork: stack_dump: 0x2040a650: 00000080 00447d6d
00000000 20401468 00000080 20401468 20401494 004481d7
[1762249616.433500] hpwork: stack_dump: 0x2040a670: dfbfe979 20401740
20401468 20401748 00000000 004557d9 20401468 00000001
[1762249616.433500] hpwork: stack_dump: 0x2040a690: 00000000 20401468
fffffff0 20401683 20401494 00447ecd 20401468 00000000
[1762249616.433500] hpwork: stack_dump: 0x2040a6b0: 00000080 00447d6d
00000006 20401468 00000000 20400e28 00000000 00447ef7
[1762249616.433500] hpwork: stack_dump: 0x2040a6d0: 00000000 20400dd8
00000000 00423b69 00424a88 01000000 00000000 20400de0
[1762249616.433500] hpwork: stack_dump: 0x2040a6f0: 20400de8 20400e30
00000000 00000002 20409658 00000000 00000000 00000000
[1762249616.433500] hpwork: stack_dump: 0x2040a710: 00000000 00000000
00000000 00424ac3 00000000 00000000 00000000 00000000
[1762249616.433500] hpwork: stack_dump: 0x2040a730: 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000
[1762249616.433500] hpwork: dump_tasks: PID GROUP PRI POLICY TYPE
NPX STATE EVENT SIGMASK STACKBASE STACKSIZE COMMAND
[1762249616.433500] hpwork: dump_tasks: ---- --- --- -------- -------
--- ------- ---------- ---------------- 0x204040c8 2048 irq
[1762249616.433500] hpwork: dump_task: 0 0 0 FIFO Kthread -
Ready 0000000000000000 0x20408d90 1008 Idle_Task
[1762249616.433500] hpwork: dump_task: 1 0 224 RR Kthread -
Running 0000000000000000 0x20409770 4032 hpwork 0x20400dd8
0x20400e28
[1762249616.433500] hpwork: dump_task: 2 2 100 RR Task -
Waiting Semaphore 0000000000000000 0x2040ab08 4056 startup_main
```
Multiple frames are sent on CAN until invalid data is encountered on the
stack:
```
can0 000 [1] 00 '.'
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
can0 000 [0] ''
```
With the first commit (fixing the overrun), the RTR frame is not sent; without
that fix, this write would itself cause an overrun:
```
[1762249722.433200] startup_main: RTR sent
```
```
can0 000 [1] 00 '.'
```
With the second commit added:
```
can0 000 [1] 00 '.'
can0 000 [1] remote request
```
Now the read. Notice that the read uses a buffer slightly larger than what is
needed for one message. Thus only one message is placed in the buffer, but the
semaphore is still posted and thus incremented too many times. The effect is
that a subsequent read returns `0`.
```c
int fd = open("/dev/can0", O_RDWR);
if (fd < 0) {
  syslog(LOG_ERR, "Failed to open can: %s", strerror(errno));
  return -1;
}

while (true) {
  sleep(1); /* Give us some time to use cansend multiple times */
  struct can_msg_s frame;
  ssize_t n = read(fd, &frame, CAN_MSGLEN(1));
  syslog(LOG_INFO, "Read %zd id%d", n, frame.cm_hdr.ch_id);
}
```
```bash
cansend can0 '001##0' && for i in {1..2}; do cansend can0 '002##0'; done
```
```
[1762251826.461100] Idle_Task: can_receive: ID: 1 DLC: 0
[1762251826.461100] startup_main: can_receive: ID: 2 DLC: 0
[1762251826.461100] startup_main: can_receive: ID: 2 DLC: 0
[1762251826.461200] startup_main: Read 4 id1
[1762251827.461600] startup_main: can_read: buflen: 5
[1762251827.461900] startup_main: Read 4 id2
[1762251828.462300] startup_main: can_read: buflen: 5
[1762251828.462600] startup_main: Read 4 id2
[1762251829.463000] startup_main: can_read: buflen: 5
[1762251829.463300] startup_main: can_read: RX FIFO sem posted but FIFO is
empty.
[1762251829.463400] startup_main: Read 0 id2
[1762251830.463800] startup_main: can_read: buflen: 5
```
With the fix:
```
[1762251980.824100] Idle_Task: can_receive: ID: 1 DLC: 0
[1762251980.824100] startup_main: can_receive: ID: 2 DLC: 0
[1762251980.824100] startup_main: can_receive: ID: 2 DLC: 0
[1762251980.824200] startup_main: Read 4 id1
[1762251981.824600] startup_main: can_read: buflen: 5
[1762251981.824900] startup_main: Read 4 id2
[1762251982.825300] startup_main: can_read: buflen: 5
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]