xiaotailang commented on issue #10236:
URL: https://github.com/apache/nuttx/issues/10236#issuecomment-1683688637
After further debugging, I found that after powering on and successfully downloading the file "1.txt", I can see it the first time I execute "ls /mnt/w25a/". However, when I execute "ls /mnt/w25a/" a second time, "1.txt" is no longer listed. I also tested with "cat /mnt/w25a/" and "cd /mnt/w25a/", and the same thing happens. Tracing the execution, I found a point where the file's node is freed: it seems that after the first "ls" command the node is freed, so subsequent "ls" commands can no longer see "1.txt". I don't fully understand the logic and reason behind this. Do any developers in the community know why this is happening?
The specific phenomenon:

```
spi_send:43
spi_send:43
spi_send:43
spi_send:43
========================end close=============================
receive file name: 1.txt
receive file size 5
sh> ls /mnt/w25/
nxffs_stat: Entry
/mnt/w25:
nxffs_opendir: relpath: ""
nxffs_nextentry: Found a valid fileheader, offset: 10140
nxffs_readdir: Offset 10140: "1.txt"
1.txt
w25_bread: startblock: 00000028 nblocks: 1
w25_read: offset: 00002800 nbytes: 256
w25_byteread: address: 00002800 nbytes: 256
w25_read: return nbytes: 256
nxffs_nextentry: No entry found
sh>
sh>
sh> ls /mnt/w25/
nxffs_stat: Entry
/mnt/w25:
nxffs_opendir: relpath: ""
w25_bread: startblock: 00000027 nblocks: 1
w25_read: offset: 00002700 nbytes: 256
w25_byteread: address: 00002700 nbytes: 256
w25_read: return nbytes: 256
w25_bread: startblock: 00000028 nblocks: 1
w25_read: offset: 00002800 nbytes: 256
w25_byteread: address: 00002800 nbytes: 256
w25_read: return nbytes: 256
nxffs_nextentry: No entry found
```