Re: nsh: i2c: command not found

2024-04-18 Thread Gregory Nutt

'nsh: i2c: command not found' means that the i2c app was not built into the 
image and/or was not registered at build time.  NSH is simply complaining that 
you are trying to run an app that does not exist.

Did you remember to include logic in your board/src file in order to initialize 
and register the i2c application?  If you don't register it, then it will not 
exist.  You can look at other board-level initialization to see how this is 
done.
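
For reference, a minimal board bring-up sketch might look like the following.  This
is illustrative only: it assumes the generic i2c_register() helper, an ESP32-style
esp32_i2cbus_initialize(), and that the i2c tool itself is enabled in the apps
configuration; the exact names depend on your port.

    #include <nuttx/config.h>
    #include <errno.h>
    #include <nuttx/i2c/i2c_master.h>

    int board_i2c_setup(void)
    {
      /* Get the I2C0 master instance from the chip-specific driver */

      FAR struct i2c_master_s *i2c = esp32_i2cbus_initialize(0);
      if (i2c == NULL)
        {
          return -ENODEV;
        }

      /* Register /dev/i2c0 so that the NSH i2c tool can open it */

      return i2c_register(i2c, 0);
    }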

On 4/18/2024 9:30 AM, Gustavo Soares wrote:

Hi everyone!

I've built an application that sends i2c set/get commands to NuttX using the 
system() function in a loop that breaks after a timer runs out. After some i2c 
requests, NuttX prints the message 'nsh: i2c: command not found' and the only
way to solve it is to reboot the ESP32.

This is what I tried so far:
1. Enabling the i2c reset command and sending it every time i2c get/set fails, but
it does not work because it depends on the i2c driver;
2. Enabling the reboot command, but of course it resets the board so the app 
stops;

Some more details:
* The I2C debug tool shows no error (only that one message);
* Killing the program and sending any i2c command from the console works fine;
* Depending on the scenario, the error appears after sending around 3000 i2c
requests.

Is there any other way to reset the i2c driver without rebooting the board?
Forcing a delay at every request significantly reduces the number of i2c
requests, but it does not solve the problem.








Re: Problem with mmcsd

2024-04-14 Thread Gregory Nutt

On 4/14/2024 7:02 AM, Ahmad Noori wrote:

Hi

I have a custom board with an STM32H743VIT6.
I ported NuttX to this board and I am using it.
I have a problem with MMCSD SDIO and I cannot mount the SD card. I get error 19
when mounting the SD card.

I tested my hardware with STM32CubeIDE and it works.

Please help me to solve this problem.
Thank you


In errno.h:

    92 #define ENODEV  19
    93 #define ENODEV_STR  "No such device"

The logic in drivers/mmcsd/mmcsd_sdio.c reports ENODEV when there is no
card detected in the slot (and possibly other reasons). The first step is to
find out which condition generates the ENODEV.
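
For reference, the failure typically surfaces in NSH like this (device node and
mount point names are illustrative):

    nsh> mount -t vfat /dev/mmcsd0 /mnt/sd
    nsh: mount: mount failed: 19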




Re: userspace IO support

2024-03-28 Thread Gregory Nutt



On 3/28/2024 9:21 PM, yfliu2008 wrote:

> I am still trying to understand the existing userland address environment
in NuttX kernel mode. I saw the following regions defined in
"arch/Kconfig": config ARCH_TEXT_VBASE, config ARCH_DATA_VBASE, config
ARCH_HEAP_VBASE, config ARCH_SHM_VBASE, config ARCH_KMAP_VBASE, config
ARCH_STACK_VBASE, config ARCH_PGPOOL_VBASE.  I can understand most of
the above, except the "ARCH_KMAP_VBASE" thing.  It doesn't look related
to UIO though.


I added most of those definitions back in 2014.  They are used in the 
addrenv logic for addrenv creation.  ARCH_KMAP_VBASE was added later and 
is used in arch/risc-v/src/common/riscv_addrenv_pgmap.c


There is inconsistent logic.  For the older SAMA5, for example, all
fixed virtual addresses are listed in header files under
arch/arm/src/sama5/*_memorymap.h


I think that the use of Kconfig variables is fine for major landmarks in
the virtual address space, but a finer grained virtual memory map
probably should go in a header file as offsets from the configurable
VBASE addresses.  Anything not dynamically allocated from the user heap
or via mmap probably should be at a constant offset.  The user heap
allocations are already virtual, but not physically contiguous and
hence, not useful for UIO.
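
As a sketch of that idea (the names and offsets below are hypothetical, not
existing NuttX definitions):

    #include <nuttx/config.h>

    /* In a board or arch memory map header: major landmarks come from Kconfig,
     * finer-grained regions are constant offsets from them.
     */

    #define BOARD_UIO_VBASE   (CONFIG_ARCH_SHM_VBASE + 0x00100000)
    #define BOARD_FB_VBASE    (BOARD_UIO_VBASE + 0x00100000)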




Re: userspace IO support

2024-03-28 Thread Gregory Nutt



On 3/28/2024 8:23 PM, Gregory Nutt wrote:

...

On all platforms that I am aware of, the virtual address of the 
framebuffer is just a configurable constant.  This makes sense if the 
framebuffer is in a fixed virtual address in every process. The kernel 
can manage the virtual address space anyway that it likes.  A fixed 
virtual position for the framebuffer is a reasonable decision.   More 
logic is needed to manage the virtual address space to handle things 
like pthread stacks, heap, shared libraries, framebuffers, etc.  The 
logic and virtual memory layout in Linux is quite complex even though 
all of these are at fixed virtual addresses in every process.


pthread stacks have some particular complexities.  First, their virtual 
addresses must be aligned to a large value.  Old versions of Linux that I 
worked with in the past aligned the pthread virtual addresses to 2 
MByte.  This permits TLS to work by simply ANDing an address (vs 
performing a system call).


Right now, pthread stacks are malloc'ed and, as a result, cannot exploit 
this fast TLS access.
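
A sketch of the fast-path trick (illustrative only: the 2 MByte alignment and a
per-thread info structure placed at the aligned stack base are assumptions for
this example, not the current NuttX implementation):

    #include <stdint.h>

    struct tls_info_s          /* Hypothetical per-thread data at the stack base */
    {
      void *tls_data;
    };

    #define PTHREAD_STACK_ALIGN  (2 * 1024 * 1024)

    /* With every pthread stack aligned to PTHREAD_STACK_ALIGN and the info
     * block placed at the aligned base, TLS data can be found by simply
     * masking the current stack pointer -- no system call required.
     */

    #define TLS_INFO_OF(sp) \
      ((struct tls_info_s *)((uintptr_t)(sp) & ~(uintptr_t)(PTHREAD_STACK_ALIGN - 1)))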


Old versions of Linux supported an array of pthread stacks (hence, 
limiting the number of pthreads to something like 200). Each stack was 
2MByte of virtual address.  All fixed, constant virtual addresses.






Re: userspace IO support

2024-03-28 Thread Gregory Nutt


On 3/28/2024 7:05 PM, yfliu2008 wrote:

Yes, the user land only uses standard file operations like
"mmap()/read()/write()/munmap()/ioctl()" etc. to access the device memory and interrupt
status via the exposed device files like "/dev/uio3" etc.


Sorry it took me so long to catch on.


My original question was about what is needed on the NuttX kernel side to support
implementing the required "mmap()/munmap()" operations. It seems that we need:

A way to mark the user address regions so as to distinguish free and in-use
mapped regions. We also need to separate this space from existing user space
regions like TEXT, DATA, HEAP, SHM, etc., as it is for UIO purposes.

A way to set the user space mappings via "int up_addrenv_uio_map(va, pa,
length, flags)" and its counterpart. These mainly operate on the MMU page table entries
with proper flags.

Add a UIO component in "drivers/uio/uio.c" which exposes the "int
uio_register(path, paddr, length, irq)" method for the BSP to use to instantiate the needed
devices.

My purpose wasn't about video display, but this is what I got from the 
"fb" example in kernel mode now: ...


You might also want to look at the (fake) file mapping. It doesn't do much.


...

Currently the video device driver returns the physical address directly from
"mmap()"; for the kernel build, however, it should be mapped into user space.


The driver level should return the virtual address.  The actual 
framebuffer driver is in drivers/video/fb.c.  It is a character driver 
that is more-or-less an upper half for the lcd driver.  It gets the 
"planeinfo" from the lcd driver getplaneinfo() method which includes the 
virtual address  of the framebuffer (line 1184) and eventually returns 
that address with the offset in the fb_mmap method (line 1035). [The 
offset means something different in UIO]


On all platforms that I am aware of, the virtual address of the 
framebuffer is just a configurable constant.  This makes sense if the 
framebuffer is in a fixed virtual address in every process. The kernel 
can manage the virtual address space anyway that it likes.  A fixed 
virtual position for the framebuffer is a reasonable decision.   More 
logic is needed to manage the virtual address space to handle things like 
pthread stacks, heap, shared libraries, framebuffers, etc.  The logic 
and virtual memory layout in Linux is quite complex even though all of 
these are at fixed virtual addresses in every process.


I would do the same. It is more efficient and guarantees no virtual 
address overlap.



I am unsure if video display is a good use case for UIO but that is what I can test now 
with "rv-virt".


It is not a bad case either.  It does a lot of what you want to do.


Re: userspace IO support

2024-03-28 Thread Gregory Nutt
This could actually be very interesting.  mmap() really does all of the 
work.  But notice that mmap() is not passed any address information 
(other than a "suggested" mapped, user-space address).  It also receives 
the file descriptor of the driver.


The Linux file_operations structure is here: 
https://elixir.bootlin.com/linux/latest/source/include/linux/fs.h#L1983


Notice that there is a mmap method in the file operations.  So, mmap(), 
it seems, calls the mmap method on the driver which performs the user 
space mapping of the device resource, much as you suggested in your 
original email.  The mmap() offset field specifies which device resource 
to map.
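
A minimal user-space sketch of that model, following the Linux UIO convention
described in the howto (the device node and region size are illustrative; per
that document, memory region N of the device is selected by passing an offset
of N pages to mmap()):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *map_uio_region0(size_t size)
    {
      void *regs;
      int fd = open("/dev/uio0", O_RDWR);

      if (fd < 0)
        {
          return NULL;
        }

      /* Offset 0 selects the first (often the only) memory region */

      regs = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);  /* The mapping persists after the descriptor is closed */

      return regs != MAP_FAILED ? regs : NULL;
    }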


That actually sounds very elegant to me and meets all of my concerns:

 * The application cannot map any arbitrary kernel address to user
   space so the implementation is secure.
 * It uses the standard POSIX mmap() to accomplish the mapping.


On 3/28/2024 3:01 PM, Gregory Nutt wrote:


I found this document which describes the use cases for UIO in Linux: 
https://www.kernel.org/doc/html/v4.13/driver-api/uio-howto.html . They 
use mmap() to address device memory: "/dev/uioX is used to access the 
address space of the card. Just use mmap() to access registers or 
RAM locations of your card."


There is a paragraph with a little more info: 
https://www.kernel.org/doc/html/v4.13/driver-api/uio-howto.html#mmap-device-memory


My only concerns here are:  (1) We use POSIX/Linux compatibility in 
interface definitions and (2) We assure security of kernel resources.  
I am not sure how mmap() knows which kernel mappings are secure and 
which are insecure.  But we would need to address that.  If secure, 
standard interfaces are used, then I am supportive (and you can ignore 
many of my other comments which were just me fumbling to understand 
the proposal).



On 3/28/2024 12:25 PM, Gregory Nutt wrote:


A more interesting task would be to port NxLib to run on top of the 
existing NuttX NX Windows System:


  * https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=139629474
  * https://cwiki.apache.org/confluence/display/NUTTX/NX+Graphics+Subsystem

That would not be particularly easier, however.  The NX Window System 
is purely a windowing system and does not provide any significant 
graphics support.  X11, by comparison, is a complete graphics system 
and user libraries are really only needed to simplify the interfaces 
and to extend the graphics capabilities.


You would have to come up with some way to provide the graphics 
capability in Nano-X not included in NX Windows.


In the NX Windows world, graphics was intended to be provided by 
NxWidgets.  That, however, is a work that was never really completed 
to production quality.  There are also several window managers that 
use NX Windows and NxWidgets:  NxWM and Twm4Nx.  The latter is a TWM-inspired 
window manager (but a unique C++ development).


Re: userspace IO support

2024-03-28 Thread Gregory Nutt
I found this document which describes the use cases for UIO in Linux: 
https://www.kernel.org/doc/html/v4.13/driver-api/uio-howto.html . They 
use mmap() to address device memory: "/dev/uioX is used to access the 
address space of the card. Just use mmap() to access registers or RAM 
locations of your card."


There is a paragraph with a little more info: 
https://www.kernel.org/doc/html/v4.13/driver-api/uio-howto.html#mmap-device-memory


My only concerns here are:  (1) We use POSIX/Linux compatibility in 
interface definitions and (2) We assure security of kernel resources.  I 
am not sure how mmap() knows which kernel mappings are secure and which 
are insecure.  But we would need to address that.  If secure, standard 
interfaces are used, then I am supportive (and you can ignore many of my 
other comments which were just me fumbling to understand the proposal).



On 3/28/2024 12:25 PM, Gregory Nutt wrote:


A more interesting task would be to port NxLib to run on top of the 
existing NuttX NX Windows System:


  * https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=139629474
  * https://cwiki.apache.org/confluence/display/NUTTX/NX+Graphics+Subsystem

That would not be particularly easier, however.  The NX Window System 
is purely a windowing system and does not provide any significant 
graphics support.  X11, by comparison, is a complete graphics system 
and user libraries are really only needed to simplify the interfaces 
and to extend the graphics capabilities.


You would have to come up with some way to provide the graphics 
capability in Nano-X not included in NX Windows.


In the NX Windows world, graphics was intended to be provided by 
NxWidgets.  That, however, is a work that was never really completed 
to production quality.  There are also several window managers that 
use NX Windows and NxWidgets:  NxWM and Twm4Nx.  The latter is a TWM-inspired 
window manager (but a unique C++ development).


Re: userspace IO support

2024-03-28 Thread Gregory Nutt
A more interesting task would be to port NxLib to run on top of the 
existing NuttX NX Windows System:


 * https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=139629474
 * https://cwiki.apache.org/confluence/display/NUTTX/NX+Graphics+Subsystem

That would not be particularly easier, however.  The NX Window System is 
purely a windowing system and does not provide any significant graphics 
support.  X11, by comparison, is a complete graphics system and user 
libraries are really only needed to simplify the interfaces and to 
extend the graphics capabilities.


You would have to come up with some way to provide the graphics 
capability in Nano-X not included in NX Windows.


In the NX Windows world, graphics was intended to be provided by 
NxWidgets.  That, however, is a work that was never really completed to 
production quality.  There are also several window managers that use NX 
Windows and NxWidgets:  NxWM and Twm4Nx.  The latter is a TWM-inspired 
window manager (but a unique C++ development).


Re: Nano-x porting

2024-03-28 Thread Gregory Nutt

On 3/28/2024 11:17 AM, Shijo George wrote:

I was planning to port NanoX to NuttX as part of GSoC. Is anyone familiar
with NanoX? Any suggestions and help would be much appreciated.


Regards ,
Shijo george

This will likely be a very difficult job.  Nano-X is a complete X-11 
replacement from the level of the video, mouse/touchpad, and 
keyboard drivers and up.


The first challenge will be getting all of the drivers working. I see 
that there is support for RTEMS in Nano-X (although it has not 
changed for 19 years).  RTEMS is an older RTOS and you should be able to 
search for occurrences of RTEMS in the repository.  Each of those 
probably has a corresponding change needed for NuttX.


RTEMS Drivers: 
https://github.com/ghaerr/microwindows/tree/master/src/drivers


Nano-X is the kernel portion of the solution and is probably not so 
useful without the microwindows NxLib for user space support. See 
http://www.microwindows.org/ (from 2018)  and 
https://github.com/ghaerr/nxlib .  That hasn't been updated in years.  
NxLib runs on top of MS Windows, X Windows, or Nano-X. That former link 
includes references to NxLib and Nano-X documents from 2000.


I see that you have already started discussions with people on the 
Nano-X team: https://github.com/ghaerr/microwindows/issues/85  Those are 
the right people that you need to talk to.  No one on this list can 
really help you much other than with driver and architectural issues.






Re: userspace IO support

2024-03-28 Thread Gregory Nutt

On 3/28/2024 1:03 AM, yfliu2008 wrote:

...
Have "up_addrenv_map(), up_addrenv_unmap()" alike interfaces added to 
encapsulate MMU mapping operations of different chip architectures. These are used by UIO 
component to a.


As an architectural description it is confusing to understand what you 
see in user space and what you see in kernel space.  Some of my 
responses may be incorrect because I am not sure what you are describing 
and I am probably just responding to buzzwords.


For example, non-standard interfaces are fine if they are used only 
internally in the OS.  None of the internal OS interfaces conform to any 
standard.






Re: userspace IO support

2024-03-28 Thread Gregory Nutt

On 3/28/2024 8:21 AM, Gregory Nutt wrote:

On 3/28/2024 1:03 AM, yfliu2008 wrote:
Have a UIO framework added which exposes an "uio_register_device()" 
interface to the BSP so as to instantiate different UIO devices 
with a specific device path, address range and IRQ data.

Then userland apps can use standard file operation interfaces to access the UIO 
devices in a portable manner. We probably can start with simpler devices with 
only one memory range and one IRQ.
You are proposing that the application control the virtual address 
space?  That is a really bad idea.  The OS should (and does) control 
the virtual address space.


One thing to be careful with is the security issue:  No operation by an 
application should permit visibility of the kernel address space to an 
application.  That would be a gaping security hole and must be avoided.





Re: userspace IO support

2024-03-28 Thread Gregory Nutt

On 3/28/2024 1:03 AM, yfliu2008 wrote:

Dear experts,

I am wondering if we can have a Userspace IO-like component added to the NuttX 
kernel. It can help expose devices like video frame buffers or neural network 
accelerators to user land services. Often the kernel has little to do 
with such devices and their management is offloaded to user land.


I think that this is already fully supported.  Video frame buffers are 
mature and have been in use by NX and LVGL for years.


General shared memory device support is also provided by standard POSIX 
shared memory interfaces.
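
For example, something along these lines uses only standard interfaces (the
object name and size are arbitrary here):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *create_shared_region(size_t size)
    {
      void *mem;
      int fd = shm_open("/myregion", O_CREAT | O_RDWR, 0666);

      if (fd < 0)
        {
          return NULL;
        }

      ftruncate(fd, size);  /* Set the size of the shared memory object */
      mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);

      return mem != MAP_FAILED ? mem : NULL;
    }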



I guess that we may need the following things in the NuttX kernel:


Have "CONFIG_ARCH_UIO_VBASE"-like Kconfig items added to give virtual addresses 
for device memory in user space.

Have "up_addrenv_map(), up_addrenv_unmap()"-like interfaces added to 
encapsulate the MMU mapping operations of different chip architectures. These are used by the UIO 
component to a.


I am VERY opposed to adding non-standard, non-portable application 
interfaces.  That is forbidden in a POSIX OS (unless there is no 
option).  Standard shared memory access is already supported as are 
interfaces for all devices that have dedicated memory to be shared.


If we allow non-standard application interfaces into the OS, we are, in the 
long run, destroying the OS.



Have a UIO framework added which exposes an "uio_register_device()" interface 
to the BSP so as to instantiate different UIO devices with a specific device path, 
address range and IRQ data.

Then userland apps can use standard file operation interfaces to access the UIO 
devices in a portable manner. We probably can start with simpler devices with 
only one memory range and one IRQ.
You are proposing that the application control the virtual address 
space?  That is a really bad idea.  The OS should (and does) control the 
virtual address space.

I am unsure if someone has some considerations or if there is a better route in 
NuttX kernel mode for such things. If so, please feel free to teach me. 
Comments are highly welcome.


A kernel-based solution that uses only POSIX interfaces (like shared 
memory interfaces or IOCTLs) would be preferable.




Re: Options to contribute to NuttX without github?

2024-03-21 Thread Gregory Nutt
The more I think about it, the more I believe you don't have to be concerned about 
accessing github for Apache NuttX.  Certainly, I respect your choice to 
keep your personal code out of "The Stack".  But NuttX will most likely 
be in the Stack and any changes you submit to NuttX will be in the 
Stack.  So it may be awkward for you, but I don't see any reason for you 
to avoid Apache NuttX in github while moving your personal code elsewhere.


Am I missing something?  You would probably like to make a clean break 
with github, but I think continuing to use the Apache NuttX repository 
(only) would be the more practical way to accomplish what you want to do.


On 3/21/2024 4:43 PM, Gregory Nutt wrote:
I think we need to continue to use PRs at some point in the process.  
PRs are so tied into our CI that I don't think we could work around it.


Can you use another GIT repository?  If you create a PR on gitlab or 
bitbucket, someone could move the PR to github.  Or perhaps we could 
take commits from a local repository clone?  We haven't done it for a 
while, but patches could also be converted to github PRs with a little 
more effort.


I think there are options, but without PRs on github, I don't see how 
you could independently make changes.


On 3/21/2024 4:08 PM, Sebastien Lorquet wrote:

Hi,

I plan to leave github completely and delete my account, because of 
this: https://huggingface.co/spaces/bigcode/in-the-stack


It is the last of reasons that make me want to leave this place for 
good.


How can we imagine a process so I (and others) can continue to 
contribute to NuttX without using pull requests from github? or any 
public "forge" for that matter.


Sebastien



Re: Options to contribute to NuttX without github?

2024-03-21 Thread Gregory Nutt
I think we need to continue to use PRs at some point in the process.  
PRs are so tied into our CI that I don't think we could work around it.


Can you use another GIT repository?  If you create a PR on gitlab or 
bitbucket, someone could move the PR to github.  Or perhaps we could 
take commits from a local repository clone?  We haven't done it for a 
while, but patches could also be converted to github PRs with a little 
more effort.


I think there are options, but without PRs on github, I don't see how 
you could independently make changes.


On 3/21/2024 4:08 PM, Sebastien Lorquet wrote:

Hi,

I plan to leave github completely and delete my account, because of 
this: https://huggingface.co/spaces/bigcode/in-the-stack


It is the last of reasons that make me want to leave this place for good.

How can we imagine a process so I (and others) can continue to 
contribute to NuttX without using pull requests from github? or any 
public "forge" for that matter.


Sebastien



Re: Rust in the OS?

2024-03-16 Thread Gregory Nutt

On 3/15/2024 11:10 PM, Saurav Pal wrote:

All in all, I am *against* Rust being introduced in the kernel. However, I
am fine with the language being in the very leaf nodes. Like some drivers,
userspace applications, or userspace filesystems (FUSE)... especially those
that only interact with the kernel over a fixed interface, and are not
depended on by anything other than its own subprogram in the kernel.


That was my initial impression as well.  Certainly no one wants to see 
Rust scattered all over the OS like explosive diarrhea. But I have had 
time to think more about it.


I think that a better use of effort would be to re-implement Rust OS 
drivers in C than to implement Rust environmental support for smaller 
leaf functions.  A C re-implementation would be welcomed by all, would 
not add any OS build complexity or design complexity, and would only 
require one toolchain and debugger (for the OS), and no changes to the 
OS build system.  As a bonus, the re-implemented C driver could probably 
be considered an original work and more easily incorporated as a true 
part of the OS*


*Disclaimer: I am not an attorney.




Re: Rust in the OS?

2024-03-13 Thread Gregory Nutt
No one said "full community support" or "unanimity".  That would be 
nice.  There are Apache rules for determining technical direction that 
defines "community support": 
https://www.apache.org/foundation/voting.html under "Code 
Modifications".  This prohibits any small group from commandeering the 
technical direction of the project.  Apache prohibits that and requires 
that any change must be at least acceptable to all voting members.  That 
is the Apache way.


On 3/13/2024 4:40 PM, Alan C. Assis wrote:

I think we will never have "full community support" because it means
something like "unanimity" and as a guy called Nelson Rodrigues once said:
"All unanimity is dumb".

Although (fortunately) we don't have full community support, it seems we
have a direction: only application support for now.

Best Regards,

Alan

On Wed, Mar 13, 2024 at 7:17 PM Gregory Nutt  wrote:


On 3/13/2024 4:11 PM, Alan C. Assis wrote:

I think we are having a CMakefile deja-vu here, don't we? (I hope we don't
lose any developer this time)

Let's make sure that we have full concurrence from the community.  Our
responsibility is to serve the whole community and not just the special
interests of a few.  That is what it means to be an Apache project.

The CMake effort did not have full community support.  Let's make sure
that we do this time.




Re: Rust in the OS?

2024-03-13 Thread Gregory Nutt



On 3/13/2024 4:11 PM, Alan C. Assis wrote:

I think we are having a CMakefile deja-vu here, don't we? (I hope we don't
lose any developer this time)


Let's make sure that we have full concurrence from the community.  Our 
responsibility is to serve the whole community and not just the special 
interests of a few.  That is what it means to be an Apache project.


The CMake effort did not have full community support.  Let's make sure 
that we do this time.




Re: Rust in the OS?

2024-03-13 Thread Gregory Nutt


On 3/13/2024 2:42 PM, Tomek CEDRO wrote:

You want Rust in the core? Go ahead, write RustOS, have fun maintaining it for 5 
years and show us it's better :-)


You are probably right in that.  It would probably have to 
be a different OS if any extensive amount of Rust is used.  POSIX 
defines a C interface to the OS with C function prototypes and C data 
types.  I haven't looked at this carefully, but a significant use of 
Rust might jeopardize POSIX compatibility (or require a mess of C 
conversion wrappers).  It would probably be better to have a pure RustOS 
with a non-POSIX, but POSIX-like OS interface.


I am not a language chauvinist, but I think we should avoid the 
complexity and maintenance issues of a mixed language solution (as 
enumerated by others in this thread).


Rust in the OS?

2024-03-13 Thread Gregory Nutt
There is some discussion in Issue #11907 proposing to use the Rust 
language within the OS (vs Rust applications on a pure C OS).  If anyone 
has any feelings, Pro or Con,  you should participate in this 
discussion.  This kind of decision impacts the entire community and 
should have the input of all of the community that has an opinion.





Re: mm/mm_heap assertion error

2024-03-12 Thread Gregory Nutt

On 3/12/2024 5:12 AM, Nathan Hartman wrote:

Try Alan's suggestion to use stack monitor, and that will help understand
if there is something wrong. (If it shows that old stack size was OK, while
we know corruption was happening, then we will know to look for some out of
bound write.)
Does the stack monitor work in kernel mode?  I.e., the stack monitor runs 
in user space.  Is the kernel heap exposed to the applications?   If it 
is, that could be a security issue, couldn't it?

Re: mm/mm_heap assertion error

2024-03-12 Thread Gregory Nutt

After enlarging the stack size of the "AppBringUp" thread, the remote node can boot NSH 
on RPMSGFS now. I am sorry for not trying this earlier. I was browsing the "rpmsgfs.c" 
blindly and noticed a few auto variables defined on the stack... then I thought it might be worth a try so 
I did it.


That is good news!  I usually try increasing stack sizes as the first thing 
because it is easy to do and stack overrun is by far the most common cause of memory 
corruption.


Is this a configuration option?  If not, it should be.


Now I am still unclear about why a small stack leads to heap corruption. Also, how can we 
read this stack issue from the stackdump logs? Let me know if you have 
any hints.
For a kernel thread, the stack is allocated on the heap.  When you 
overrun the stack, the metadata at the end of the stack allocation may be 
clobbered.  More confusing, the metadata or actual data of the 
preceding (victim) chunk may be corrupted.  Often the symptoms of 
the failure are even more obscure than these.
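
For a kernel thread, the stack size is fixed when the thread is created.  A
sketch of where it is set (illustrative only: the entry point, priority, and
sizes below are placeholders, not the actual rptun or board bring-up code):

    #include <nuttx/config.h>
    #include <nuttx/kthread.h>
    #include <sched.h>

    static int bringup_thread(int argc, FAR char *argv[])
    {
      /* ... board bring-up work ... */

      return 0;
    }

    static int start_bringup(void)
    {
      /* The third argument is the stack size in bytes.  The stack is carved
       * from the kernel heap, so overrunning it clobbers heap metadata.
       */

      return kthread_create("AppBringUp", SCHED_PRIORITY_DEFAULT,
                            CONFIG_DEFAULT_TASK_STACKSIZE * 2,
                            bringup_thread, NULL);
    }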


Re: mm/mm_heap assertion error

2024-03-12 Thread Gregory Nutt

On 3/12/2024 1:10 AM, yfliu2008 wrote:

On the other hand, if we choose not to mount NSH from the RPMSGFS, it can boot 
smoothly, and after boot we can manually mount the RPMSGFS for playing.
That sounds like an initialization sequencing problem.  Perhaps 
something is getting used before it has been initialized?




Re: mm/mm_heap assertion error

2024-03-11 Thread Gregory Nutt
meminfo() can be helpful too.  It detects many heap corruption problems 
(but perhaps not all?).  By sprinkling a few calls to kmm_meminfo() in 
choice locations, you should also be able to isolate the culprit.  
Perhaps after each time the lopri worker runs or after each rpmsg.


On 3/11/2024 1:20 PM, Simon Filgis wrote:

Is there a way to colorize the heap to track down the bandit? Like a CRC pattern
on all the spaces around, and check on every call that the CRC pattern is
still OK?

Gregory Nutt  wrote on Mon., March 11, 2024, 19:27:


If the memory location that is corrupted is consistent, then you can
monitor that location to find the culprit (perhaps using debug output).
If your debugger supports it then setting a watchpoint could also
trigger a break when the corruption occurs.

Maybe you can also try disabling features until you find the feature
logic that is corrupting the heap.  There is no easy way to accomplish
this.

On 3/11/2024 11:27 AM, Nathan Hartman wrote:

What's needed is some way to binary search where the culprit is.

If I understand correctly, it looks like the crash is happening in the
later stages of board bring-up? What is running before that? Can parts
be disabled or skipped to see if the problem goes away?

Another idea is to try running a static analysis tool on the sources
and see if it finds anything suspicious to be looked into more
carefully.


On Mon, Mar 11, 2024 at 10:00 AM Gregory Nutt 

wrote:

The reason that the error is confusing is because the error probably did
not occur at the time of the assertion; it probably occurred much

earlier.

In most crashes due to heap corruption there are two players:  the
culprit and the victim threads.  The culprit thread actually causes the
corruption.  But at the time of the corruption, no error occurs.  The
error will not occur until later.

So sometime later, the victim thread runs, encounters the clobbered heap
and crashes.  In this case, "AppBringup" and "rptun" are potential
victim threads.  The fact that they crash tells you very little about the
culprit.

On 3/10/2024 6:51 PM, yfliu2008 wrote:

Gregory, thank you for the analysis.




The crashes happened during system booting up, mostly at "AppBringup"
or "rptun" threads, as per the assertion logs. The other threads existing
are the "idle" and the "lpwork" threads as per the sched logs. There should
be no other threads as NSH creation is still ongoing. As for
interrupts, the UART and IPI are running in kernel space and MTIMER is
in NuttSBI space. The NSH is loaded from a RPMSGFS volume, thus there
are a lot of RPMSG communications.




Is the KASAN proper for use in Kernel mode?


With MM_KASAN_ALL it reports a read access error:



BCkasan_report: kasan detected a read access error, address at 0x708fe90, size is 8, return address: 0x701aeac

_assert: Assertion failed panic: at file: kasan/kasan.c:117 task: Idle_Task process: Kernel 0x70023c0


The call stack looks like:


#0 _assert (filename=0x7060f78 "kasan/kasan.c", linenum=117, msg=0x7060ff0 "panic", regs=0x7082720 
#2 0x070141d6 in kasan_report (addr=0x708fe90, size=8, is_write=false, return_address=0x701aeac 
#3 0x07014412 in kasan_check_report (addr=0x708fe90, size=8, is_write=false, return_address=0x701aeac 
#4 0x0701468c in __asan_load8_noabort (addr=0x708fe90) at kasan/kasan.c:315
#5 0x0701aeac in riscv_swint (irq=0, context=0x708fe40, arg=0x0) at common/riscv_swint.c:133
#6 0x0701b8fe in riscv_perform_syscall (regs=0x708fe40) at common/supervisor/riscv_perform_syscall.c:45
#7 0x07000570 in sys_call6 ()



With MM_KASAN_DISABLE_READ_CHECKS=y, it reports:


_assert: Assertion failed : at file: mm_heap/mm_malloc.c:245 task: rptun process: Kernel 0x704a030


The call stack is:


#0 _assert (filename=0x7056060 "mm_heap/mm_malloc.c", linenum=245, msg=0x0, regs=0x7082720 
#2 0x07013082 in mm_malloc (heap=0x7089c00, size=128) at mm_heap/mm_malloc.c:245
#3 0x07011694 in kmm_malloc (size=128) at kmm_heap/kmm_malloc.c:51
#4 0x0704efd4 in metal_allocate_memory (size=128) at .../nuttx/include/metal/system/nuttx/alloc.h:27
#5 0x0704fd8a in rproc_virtio_create_vdev (role=1, notifyid=0, rsc=0x80200050, rsc_io=0x7080408 priv=0x708ecd8, notify=0x704e6d2 rst_cb=0x0) at open-amp/lib/remoteproc/remoteproc_virtio.c:356
#6 0x0704e956 in remoteproc_create_virtio (rproc=0x708ecd8, vdev_id=0, role=1, rst_cb=0x0) at open-amp/lib/remoteproc/remoteproc.c:957
#7 0x0704b1ee in rptun_dev_start (rproc=0x708ecd8) at rptun/rptun.c:757
#8 0x07049ff8 in rptun_start_worker (arg=0x708eac0) at rptun/rptun.c:233
#9 0x0704a0ac in rptun_thread (argc=3, argv=0x7092010) at rptun/rptun.c:253
#10 0x0700437e in nxtask_start () at task/task_start.c:107


This looks like it was already corrupted.



I also notic

Re: mm/mm_heap assertion error

2024-03-11 Thread Gregory Nutt
If the memory location that is corrupted is consistent, then you can 
monitor that location to find the culprit (perhaps using debug output).  
If your debugger supports it then setting a watchpoint could also 
trigger a break when the corruption occurs.
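
For example, with GDB attached (the address below is just the one reported by
KASAN later in this thread; substitute whatever location you see corrupted):

    (gdb) watch *(long long *)0x708fe90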


Maybe you can also try disabling features until you find the feature 
logic that is corrupting the heap.  There is no easy way to accomplish this.


On 3/11/2024 11:27 AM, Nathan Hartman wrote:

What's needed is some way to binary search where the culprit is.

If I understand correctly, it looks like the crash is happening in the
later stages of board bring-up? What is running before that? Can parts
be disabled or skipped to see if the problem goes away?

Another idea is to try running a static analysis tool on the sources
and see if it finds anything suspicious to be looked into more
carefully.


On Mon, Mar 11, 2024 at 10:00 AM Gregory Nutt  wrote:

The reason that the error is confusing is because the error probably did
not occur at the time of the assertion; it probably occurred much earlier.

In most crashes due to heap corruption there are two players:  the
culprit and the victim threads.  The culprit thread actually causes the
corruption.  But at the time of the corruption, no error occurs.  The
error will not occur until later.

So sometime later, the victim thread runs, encounters the clobbered heap
and crashes.  In this case, "AppBringup" and "rptun" are potential
victim threads.  The fact that they crash tells you very little about the
culprit.

On 3/10/2024 6:51 PM, yfliu2008 wrote:

Gregory, thank you for the analysis.




The crashes happened during system booting up, mostly at "AppBringup" or "rptun" threads, as per the 
assertion logs. The other threads existing are the "idle" and the "lpwork" threads as per the sched logs. 
There should be no other threads as NSH creation is still ongoing. As for interruptions, the UART and IPI are running 
in kernel space and MTIMER is in NuttSBI space. The NSH is loaded from a RPMSGFS volume, thus there are a lot of RPMSG 
communications.




Is the KASAN proper for use in Kernel mode?


With MM_KASAN_ALL it reports a read access error:



BCkasan_report: kasan detected a read access error, address at 0x708fe90,size 
is 8, return address: 0x701aeac

_assert: Assertion failed panic: at file: kasan/kasan.c:117 task: Idle_Task 
process: Kernel 0x70023c0


The call stack looks like:


#0 _assert (filename=0x7060f78 "kasan/kasan.c", linenum=117, msg=0x7060ff0 "panic", regs=0x7082720 




Re: mm/mm_heap assertion error

2024-03-11 Thread Gregory Nutt
The reason that the error is confusing is because the error probably did 
not occur at the time of the assertion; it probably occurred much earlier.


In most crashes due to heap corruption there are two players:  the 
culprit and the victim threads.  The culprit thread actually causes the 
corruption.  But at the time of the corruption, no error occurs.  The 
error will not occur until later.


So sometime later, the victim thread runs, encounters the clobbered heap 
and crashes.  In this case, "AppBringup" and "rptun" are potential 
victim threads.  The fact that they crash tells you very little about the 
culprit.


On 3/10/2024 6:51 PM, yfliu2008 wrote:

Gregory, thank you for the analysis.




The crashes happened during system booting up, mostly at "AppBringup" or "rptun" threads, as per the 
assertion logs. The other threads existing are the "idle" and the "lpwork" threads as per the sched logs. 
There should be no other threads as NSH creation is still ongoing. As for interruptions, the UART and IPI are running 
in kernel space and MTIMER is in NuttSBI space. The NSH is loaded from a RPMSGFS volume, thus there are a lot of RPMSG 
communications.




Is the KASAN proper for use in Kernel mode?


With MM_KASAN_ALL it reports a read access error:



BCkasan_report: kasan detected a read access error, address at 0x708fe90,size 
is 8, return address: 0x701aeac

_assert: Assertion failed panic: at file: kasan/kasan.c:117 task: Idle_Task 
process: Kernel 0x70023c0


The call stack looks like:


#0 _assert (filename=0x7060f78 "kasan/kasan.c", linenum=117, msg=0x7060ff0 "panic", regs=0x7082720 
        
Original

From: "Gregory Nutt" <spudan...@gmail.com>
Date: 2024/3/11 1:43
To: "dev" <dev@nuttx.apache.org>
Subject: Re: mm/mm_heap assertion error


On 3/10/2024 4:38 AM, yfliu2008 wrote:
 Dear experts,




 When doing a regression check on K230 with a previously working kernel mode 
configuration, I got an assertion error like the one below:



 #0 _assert (filename=0x704c598 "mm_heap/mm_malloc.c", linenum=245, msg=0x0, regs=0x7082730 
 #2 0x070110f0 in mm_malloc (heap=0x7089c00, size=112) at mm_heap/mm_malloc.c:245
 #3 0x0700fd74 in kmm_malloc (size=112) at kmm_heap/kmm_malloc.c:51
 #4 0x07028d4e in elf_loadphdrs (loadinfo=0x7090550) at libelf/libelf_sections.c:207
 #5 0x07028b0c in elf_load (loadinfo=0x7090550) at libelf/libelf_load.c:337
 #6 0x070278aa in elf_loadbinary (binp=0x708f5d0, filename=0x704bca8 "/system/bin/init", exports=0x0, nexports=0) at elf.c:257
 #7 0x070293ea in load_absmodule (bin=0x708f5d0, filename=0x704bca8 "/system/bin/init", exports=0x0, nexports=0) at binfmt_loadmodule.c:115
 #8 0x07029504 in load_module (bin=0x708f5d0, filename=0x704bca8 "/system/bin/init", exports=0x0, nexports=0) at binfmt_loadmodule.c:219
 #9 0x07027674 in exec_internal (filename=0x704bca8 "/system/bin/init", argv=0x70907a0, envp=0x0, exports=0x0, nexports=0, actions=0x0, attr=0x7090788, spawn=true) at binfmt_exec.c:98
 #10 0x0702779c in exec_spawn (filename=0x704bca8 "/system/bin/init", argv=0x70907a0, envp=0x0, exports=0x0, nexports=0, actions=0x0, attr=0x7090788) at binfmt_exec.c:220
 #11 0x0700299e in nx_start_application () at init/nx_bringup.c:375
 #12 0x070029f0 in nx_start_task (argc=1, argv=0x7090010) at init/nx_bringup.c:403
 #13 0x07003f84 in nxtask_start () at task/task_start.c:107



 It looks like the mm/mm_heap data structure consistency was broken. As I am 
unfamiliar with these internals, I am looking forward to any hints about how 
to find the root cause.







 Regards,

 yf

This does indicate heap corruption:

 240   /* Node next must be alloced, otherwise it should be merged.
 241    * Its prenode(the founded node) must be free and preceding should
 242    * match with nodesize.
 243    */
 244
 245   DEBUGASSERT(MM_NODE_IS_ALLOC(next) && MM_PREVNODE_IS_FREE(next) &&
 246               next->preceding == nodesize);

Heap corruption normally occurs when there is a wild write outside of
the allocated memory region. These kinds of wild writes may clobber
some other thread's data and directly or indirectly clobber the heap
metadata. Trying to traverse the damaged heap metadata is probably
the root cause of the problem.

Only a kernel thread or interrupt handler could damage the heap.

The cause of this corruption can be really difficult to find because the
reported error does not occur when the heap is damaged but may not
manifest itself until sometime later.

It is unlikely that anyone will be able to solve this by just talking
about it. It might be worth increasing some kernel thread heap sizes
just to eliminate that common cause.





Re: mm/mm_heap assertion error

2024-03-10 Thread Gregory Nutt

On 3/10/2024 4:38 AM, yfliu2008 wrote:

Dear experts,




When doing a regression check on K230 with a previously working kernel mode 
configuration, I got an assertion error like the one below:



#0 _assert (filename=0x704c598 "mm_heap/mm_malloc.c", linenum=245, msg=0x0,regs=0x7082730 


This does indicate heap corruption:

   240   /* Node next must be alloced, otherwise it should be merged.
   241    * Its prenode(the founded node) must be free and preceding should
   242    * match with nodesize.
   243    */
   244
   245   DEBUGASSERT(MM_NODE_IS_ALLOC(next) && MM_PREVNODE_IS_FREE(next) &&
   246               next->preceding == nodesize);

Heap corruption normally occurs when there is a wild write outside of 
the allocated memory region.  These kinds of wild writes may clobber 
some other thread's data and directly or indirectly clobber the heap 
metadata.  Trying to traverse the damaged heap metadata is probably 
the root cause of the problem.
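
A contrived example of the kind of culprit code to look for (illustrative only;
the sizes are arbitrary):

    #include <nuttx/kmalloc.h>
    #include <string.h>

    static void culprit(void)
    {
      FAR char *buf = kmm_malloc(16);

      /* Wild write: the extra 16 bytes land in the following chunk's metadata.
       * Nothing fails here; a victim thread fails later in mm_malloc()/mm_free().
       */

      memset(buf, 0, 32);
    }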


Only a kernel thread or interrupt handler could damage the heap.

The cause of this corruption can be really difficult to find because the 
reported error does not occur when the heap is damaged but may not 
manifest itself until sometime later.


It is unlikely that anyone will be able to solve this by just talking 
about it.  It might be worth increasing some kernel thread heap sizes 
just to eliminate that common cause.





Re: Deleting files with wildcard

2024-02-27 Thread Gregory Nutt



On 2/24/2024 11:42 AM, Gregory Nutt wrote:

On 2/24/2024 11:39 AM, Jernej Turnsek wrote:
@Tomek No, my SD card filesystem is not mounted read-only, because I can
delete single files. I believe that rm * is just not implemented.
There is no logic in NSH that expands "*" to a list of all files in 
the current directory.  It is passed on to unlink as the literal 
string "*" which is not the name of an element in the current directory.
I added this as a github issue for any future discussions. 
https://github.com/apache/nuttx-apps/issues/2311


Re: Deleting files with wildcard

2024-02-24 Thread Gregory Nutt

On 2/24/2024 11:39 AM, Jernej Turnsek wrote:

@Tomek No, my SD card filesystem is not mounted read-only, because I can
delete single files. I believe that rm * is just not implemented.
There is no logic in NSH that expands "*" to a list of all files in the 
current directory.  It is passed on to unlink as the literal string "*" 
which is not the name of an element in the current directory.


Re: Deleting files with wildcard

2024-02-23 Thread Gregory Nutt

Makes sense since mount points should be "removed" with umount.


rm on a mount point should work.  rm does not necessarily remove it.  It 
maps to unlink which removes the name from the mount point or file.  Logic 
in the VFS will detect the open count.  The entity will exist 
nameless until all references are closed, then it will finally delete 
the mountpoint.


That is why on a POSIX system you can unlink (rm) a file open by another 
task.  The file will still persist (without a name) until that other 
task closes it.
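
A small sketch of that behavior (the path is arbitrary):

    #include <fcntl.h>
    #include <unistd.h>

    static void unlink_while_open(void)
    {
      int fd = open("/mnt/sd/log.txt", O_RDWR);

      unlink("/mnt/sd/log.txt");    /* The name disappears immediately */
      write(fd, "still here", 10);  /* The data remains reachable via fd */
      close(fd);                    /* Only now is the storage finally released */
    }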



About removing "." I think Linux is even more restrictive:

$ mkdir /tmp/test
$ cd /tmp/test
$ echo "T1" > test1
$ echo "T2" > test2
$ echo "T3" > test3

$ rm -r .
rm: refusing to remove '.' or '..' directory: skipping '.'


Same behavior, just a better error message.  Another difference is that 
fat does not have "." or ".." but does have some lame logic to make fat 
behave similarly when they are used.


The logic that implements rm -r * is part of apps/, not the NuttX VFS.  
It is the function unlink_recursive() in nsh_fscmds.c.  It doesn't look like 
it handles the "*" expansion at all.  Maybe 'rm -r' would work.


Does 'ls -r *' work?  The logic is very similar.




Re: Deleting files with wildcard

2024-02-23 Thread Gregory Nutt


On 2/23/2024 6:19 PM, Gregory Nutt wrote:



On 2/23/2024 5:57 PM, Alan C. Assis wrote:

/tmp:
  test1
  test2
  test3
nsh> rm -r .
nsh: rm: unlink failed: 1
nsh> ls
/tmp:
nsh>

It reported error "unlink failed: 1", but removed all the files.


1 = EPERM, Operation not permitted

I suspect that occurred when it tried to rm ".".   Try putting a 
subdirectory under /tmp.  Does vfat even support the "." hardlink?


"." corresponds to the root directory of the mounted file system.  
Trying to delete  "." will result in an EPERM error on VFAT.  See 
fat/fs_fat32dirent.c, function fat_remove().


It seems like there is some issue with unlink.



Re: Deleting files with wildcard

2024-02-23 Thread Gregory Nutt


On 2/23/2024 5:57 PM, Alan C. Assis wrote:

/tmp:
  test1
  test2
  test3
nsh> rm -r .
nsh: rm: unlink failed: 1
nsh> ls
/tmp:
nsh>

It reported error "unlink failed: 1", but removed all the files.


1 = EPERM, Operation not permitted

I suspect that occurred when it tried to rm ".".   Try putting a 
subdirectory under /tmp.  Does vfat even support the "." hardlink?


Re: syncfs() method of fs

2024-02-15 Thread Gregory Nutt


However, I think it is a mistake to put the hooks in for syncfs and 
not implement syncfs for other file systems that need it.   From what 
you say, syncfs() would be a no-op for those other file systems which 
is (usually) wrong.  Perhaps they could just call fsync() from 
sync()?  Doing nothing would be an error.


If that is true, then you might want to open an issue.  Or perhaps 
just add to https://github.com/apache/nuttx/issues/3399


Actually it won't "do nothing."  It should return EBADF if the syncfs() 
method is not provided.  But that is an error too.  That is not the 
correct return value.  EBADF should signify "fd is not a valid file 
descriptor," which is not the case here.  It should return ENOTSUP.




Re: syncfs() method of fs

2024-02-15 Thread Gregory Nutt
I have no idea what is going on with syncfs.  There have been some 
recent changes:


   Documentation/ReleaseNotes/NuttX-12.1.0:*
   [#8092](https://github.com/apache/nuttx/pull/8092) fs: Map syncfs to
   fsync
   Documentation/ReleaseNotes/NuttX-12.3.0:*
   [#10764](https://github.com/apache/nuttx/pull/10764) add syncfs api
   for sync whole fs data

syncfs was very recently added:

   commit 2bce0f404c37826bf2eae59cfe42fe1c80bd0c22
   Author: guohao15 
   Date:   Fri Sep 15 16:22:13 2023 +0800

    fs:add syncfs api for sync whole fs data

    Signed-off-by: guohao15 

I suspect that syncfs is needed for some 3rd party file system, perhaps 
littlefs?  Or maybe it was just needed to port some application.


syncfs() and fsync() would apparently be interchangeable on other 
filesystems based on these comments and changes.


   --- a/include/unistd.h
   +++ b/include/unistd.h
   @@ -258,7 +258,6 @@

 /* Helpers and legacy compatibility definitions */

   -#define syncfs(f)    fsync(f)
 #define fdatasync(f) fsync(f)
 #define getdtablesize(f) ((int)sysconf(_SC_OPEN_MAX))
 #define getpagesize(f) ((int)sysconf(_SC_PAGESIZE))
   @@ -441,6 +440,7 @@ int setregid(gid_t rgid, gid_t egid);
 int getentropy(FAR void *buffer, size_t length);

 void    sync(void);
   +int syncfs(int fd);

 #if CONFIG_FORTIFY_SOURCE > 0
 fortify_function(getcwd) FAR char *getcwd(FAR char *buf,

However, I think it is a mistake to put the hooks in for syncfs and not 
implement syncfs for other file systems that need it.   From what you 
say, syncfs() would be a no-op for those other file systems which is 
(usually) wrong. Perhaps they could just call fsync() from sync()?  
Doing nothing would be an error.


If that is true, then you might want to open an issue.  Or perhaps just 
add to https://github.com/apache/nuttx/issues/3399
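
For reference, the intended usage pattern is roughly the following (paths are
illustrative; per the API above, syncfs() takes any descriptor open on the
file system to be flushed):

    #include <fcntl.h>
    #include <unistd.h>

    static int flush_volume(void)
    {
      int ret;
      int fd = open("/mnt/sd/anyfile", O_RDONLY);

      if (fd < 0)
        {
          return -1;
        }

      /* Flush all cached data for the file system containing fd */

      ret = syncfs(fd);
      close(fd);
      return ret;
    }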


On 2/15/2024 3:16 AM, Saurav Pal wrote:

Hi Greg,

Thanks for the info! Can you tell me the reason why none of the existing
file systems in NuttX implement it?

Regards,
SP

On Wed, Feb 14, 2024 at 9:27 PM Gregory Nutt  wrote:


It implements the syncfs file system interface:
https://linux.die.net/man/2/syncfs

On 2/14/2024 8:16 AM, Saurav Pal wrote:

Hi all,

I was going through the codebase of different file systems, and I haven't
yet understood the syncfs() method of mountpt operations.

There aren't any existing filesystems in the codebase that implement it,
and I couldn't understand what exactly is required from this method (like
what it does, what side effects it's supposed to have and what is the
intended reason for this method to exist).

Can you please help me understand it? Thanks in advance.

Regards,
SP





Re: syncfs() method of fs

2024-02-14 Thread Gregory Nutt
It implements the syncfs file system interface:  
https://linux.die.net/man/2/syncfs


On 2/14/2024 8:16 AM, Saurav Pal wrote:

Hi all,

I was going through the codebase of different file systems, and I haven't
yet understood the syncfs() method of mountpt operations.

There aren't any existing filesystems in the codebase that implement it,
and I couldn't understand what exactly is required from this method (like
what it does, what side effects it's supposed to have and what is the
intended reason for this method to exist).

Can you please help me understand it? Thanks in advance.

Regards,
SP





Re: Addition of New Chip STM32F427AI

2024-02-12 Thread Gregory Nutt
The STM32F427AI is in the same family as STM32F427xx (STM32F427V, 
STM32F427Z, STM32F427I) and STM32F429xx, which are already supported.


The first thing to do is to decide which of the supported parts is 
most like the STM32F427AI.  Most likely it is just a difference in 
memory size or something similar.


The package STM32F427AI has more pins so some other functionality must 
be brought out.


On 2/12/2024 2:06 PM, Alan C. Assis wrote:

Hi Janardhan,

You can start looking at which STM32F4xx chip existent on NuttX is more
similar to STM32F427AI and start your port based on it.

You can do it using the chip datasheet and/or reference manual, writing
down the difference in peripherals, registers, etc.

Normally STM32 devices share many features, but sometimes they have small
incompatibilities.

So, look at the existing chips on NuttX (nuttx/arch/arm/src/stm32/hardware/
and nuttx/arch/arm/include/stm32/).

Best Regards,

Alan

On Mon, Feb 12, 2024 at 8:50 AM Janardhan Silwal <
janardhansil...@outlook.com> wrote:


Hi Community,

I have a board which uses STM32F427AI, and was planning on running a
middleware which runs over nuttx, but only noticed, after the board arrived
from manufacturing that there is no option to select, STM32F427AI from the
menuconfig/qconfig.

I don't have an option to replace the chip to the ones which are currently
supported and thus, want to add  the support for STM32F427AI, but don't
know where to start.

Hoping to get some direction, from where I can start.

Thanks in advance.
janardhan



Re: Detecting FS

2024-02-12 Thread Gregory Nutt
You would need to mount with -t vfat.  That is how the system knows 
that the file system is vfat.


It seems like you should be able to omit the file system type when 
mounting and let the system figure out the file system type, but that 
does not appear to be supported.
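
For example (device node and mount point names are illustrative):

    nsh> mount -t vfat /dev/mmcsd0 /mnt/sd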


On 2/12/2024 2:30 PM, Saurav Pal wrote:

Hi Alan,

Thanks for the reply. I had gone through that global array but I couldn't
find what I was looking for.

In a different phrasing, what I don't understand is how is it detected
which file system was used to format a storage device.

Like, let's say I have a HDD in my PC formatted with VFAT, and if I use it
with NuttX, how will it detect VFAT is being used?

Magic numbers are unique for FSs, but I have not seen / missed where they
are being stored on the disk for identification.

Regards,
SP

On Tue, 13 Feb, 2024, 01:42 Alan C. Assis,  wrote:


Hi Saurav,

Please take a look at "static const struct fsmap_t g_bdfsmap[]" in the file
nuttx/fs/mount/fs_mount.c

You will figure-out how the "magic" happens.

Best Regards,

Alan

On Mon, Feb 12, 2024 at 9:20 AM Saurav Pal  wrote:


Hi all,

I am trying to write some docs for VFS in NuttX, but there's one thing I've
been stumbling on for some time.

How are the file systems detected from the storage devices? For example, if
I have a storage device which has been formatted using VFAT...after (say)
power on, how does NuttX detect that vfat was used to format the storage
device?

From what I can see, NuttX does not have a "superblock" concept explicitly
defined, nor does it seem to require the file systems to store magic
numbers in the disk in any way.

Thanks in advance.
Regards,
SP



Re: AVR32 on NuttX

2024-02-08 Thread Gregory Nutt

On 2/8/2024 2:50 PM, Alan C. Assis wrote:
Thank you for clarifying Greg, but I think PIC32 is using little 
endian, at least I can't see CONFIG_ENDIAN_BIG defined to them at boards/


You are right.  MIPS is traditionally big-endian or powers up 
big-endian, but Microchip opted to force little endian operation. I 
forgot that (even though I did the initial port on that family).





Re: STM32H7 board with ethernet

2024-02-06 Thread Gregory Nutt
The network monitor is part of apps/netutils/netinit so it is not a part 
of NSH.  NSH can automatically perform the network initialization if so 
configured and, optionally, start the network monitor thread.  
But the logic is not architecturally a part of NSH nor does it depend on 
NSH.


On 2/6/2024 9:32 AM, Nathan Hartman wrote:

On Tue, Feb 6, 2024 at 8:45 AM Sebastien Lorquet 
wrote:


However, the default network configuration provided in NuttX examples is
cumbersome and too much linked with NSH

It can work for simple tests and demos, but you will have to write a
proper network management daemon if you plan to use more than one
network app.



It would be a nice thing if the network management daemon could be factored
out of NSH so that boards that don't run NSH could have the same network
management without implementing it again.

Cheers
Nathan





Re: NuttX Workshop 2024 - event planning

2024-01-29 Thread Gregory Nutt
The names NuttX and Apache NuttX are both names that are trademarked by 
the ASF.  They certainly can be used for external events, but not 
without explicit permission from the ASF:


   https://apache.org/foundation/marks/resources
   https://apache.org/foundation/marks/events

"The VP, Brand Management, or the VP of the relevant Apache project, 
must approve the use of Apache marks in any events run by third parties. 
This includes conferences, meetups, get-togethers, and any other types 
of events. Events must comply with this policy and have permission to 
use Apache marks in the name of the event or in any of the primary 
branding, including website domain names, for the event."


In any event, we must have such permission to use the trademark for any 
event at Sony.  It is a legal requirement.


On 1/29/2024 12:52 PM, Alan C. Assis wrote:

Good question Greg!

I think it is this way for legal reasons: this is not an event released or
promoted by Apache Software Foundation (ASF).

And although the official name of the project is Apache NuttX, we can't use
the Apache NuttX Workshop, because that would emphasize an Apache event,
but it is a community driven event.

Maybe in the future this event could be driven by ASF, like Linux
Foundation does.

Best Regards,

Alan

On Mon, Jan 29, 2024 at 10:30 AM Gregory Nutt  wrote:


Shouldn't this be "Apache NuttX Workshop 2024" instead of "NuttX
Workshop 2024"?  Apache NuttX is the full, legal name of the RTOS; use
of NuttX alone is considered ambiguous and is discouraged by the ASF.




Re: NuttX Workshop 2024 - event planning

2024-01-29 Thread Gregory Nutt
Shouldn't this be "Apache NuttX Workshop 2024" instead of "NuttX 
Workshop 2024"?  Apache NuttX is the full, legal name of the RTOS; use 
of NuttX alone is considered ambiguous and is discouraged by the ASF.





Re: [OT] Projects for GSoC 2024

2024-01-27 Thread Gregory Nutt

Like

 * 
https://riscv.org/blog/2023/02/porting-nuttx-real-time-operating-system-on-polarfire-soc-fpga/
 * 
https://www.hackster.io/lupyuen/8-risc-v-sbc-on-a-real-time-operating-system-ox64-nuttx-474358
 * https://twitter.com/btashton/status/1243699309117235200

On 1/27/2024 3:58 PM, Victor Suarez Rovere wrote:

The innovation won't be to run NuttX on a RISC-V (soft-core or not) but to
use an FPGA for its flexibility to add any kind of peripherals; the main
ones that would be useful in my view are a high-resolution framebuffer and
USB mouse/keyboard for a complete UI

On Sat, Jan 27, 2024 at 4:31 PM Gregory Nutt  wrote:


Aren't most CPUs available as soft cores?  Certainly Xtensa was intended
for that purpose.  ARM and MIPS have been common soft cores in ASICs for
more than a decade. As is RISC-V soft core in FPGAs.
https://en.wikipedia.org/wiki/Soft_microprocessor

In the past, there was some interest in ports of NuttX to softcore's
like MicroBlaze.  But there hasn't been that kind of interest in recent
times.

This would have been an innovation a decade or so ago, but I wonder
about that now.



On 1/27/2024 1:18 PM, Victor Suarez Rovere wrote:

Tomek, as I clarified, porting NuttX to a FPGA will require a soft-core
CPU. I don't envision an opertaing system without a CPU, I see that like

a

bad design choice if possible at all
Using a soft core and custom peripherals seems more valuable, even
including video output and USB host for mouse/keyboard handling. I've

done

that for Micropython

On Sat., Jan. 27, 2024, 13:23, Tomek CEDRO  wrote:


Okay Victor, I was thinking about toolchain that you present in
"Sphery vs. Shapes" [1] to be adopted for NuttX on FPGA conversion
without a CPU design.. could you please send your full detailed
proposal then? :-)

[1]https://www.youtube.com/watch?v=hn3sr3VMJQU

--
CeDeROM, SQ7MHZ,http://www.tomek.cedro.info

On Sat, Jan 27, 2024 at 7:22 AM Victor Suarez Rovere
  wrote:

Just clarifying, the idea to run NuttX on a FPGA is to instantiate a

CPU

and peripherals on the FPGA and then run normally as if it were a MCU
Good thing is that you can change the CPU, add/remove peripherals, etc.

On Sat, Jan 27, 2024 at 12:32 AM Tomek CEDRO  wrote:


Hey there Victor! Thanks for your interest in NuttX port to FPGA!! :-)

No there is no such design yet.. you would have to create everything
from scratch.. so there is some serious amount of work to do.. but
imagine the results.. there will be just one step to ASIC!! :-)

I could  reconsider my mentor position in this kind of project because
I would really love to see the internals first hand.. with a help of
more experienced NuttX'er for sure as second mentor :-) :-)

I did a PONG on FPGA over 10 years ago but I would never dare to run
CPU-less-program directly on FPGA.. then RTOS.. then lets say Atari
emulator.. chip module player.. open source smart debug probe.. a
neural interface.. who knows.. would that even fit into the FPGA? :-)
:-)

I have a strong feeling this may be important.. but I leave the whole
decision to the PMC :-)

Have a good weekend my friends :-)
Tomek

--
CeDeROM, SQ7MHZ,http://www.tomek.cedro.info

On Sat, Jan 27, 2024 at 2:55 AM Victor Suarez Rovere
  wrote:

I can certainly port NuttX to run on some FPGA boards too
Is any board already supported?

On Fri, Jan 26, 2024 at 4:40 PM Alan C. Assis

wrote:

Hi Tomek,

His toolchain is focused on FPGA, but he is interested in

participating in

other projects for GSoC.

Also we need NuttX mentors, I will participate, but for each

project we

need two mentors, please let me know who could be interested to

help.

Best Regards,

Alan

On Fri, Jan 26, 2024 at 3:33 PM Tomek CEDRO

wrote:

On Fri, Jan 26, 2024 at 2:07 PM Alan C. Assis wrote:

Dear NuttXers,
Please find below some ideas of projects to improve NuttX

during

the

GSoC2024:


https://cwiki.apache.org/confluence/display/COMDEV/GSoC+2024+Ideas+list

If you have some other ideas, please let me know.

I would like to propose Victor Suarez (CC) idea for porting

toolchain

NuttX RTOS directly to FPGA :-)




https://www.tomshardware.com/news/fpga-demo-shows-efficiency-gains-compared-to-x86-chip

Tomek

--
CeDeROM, SQ7MHZ,http://www.tomek.cedro.info



Re: [OT] Projects for GSoC 2024

2024-01-27 Thread Gregory Nutt
Aren't most CPUs available as soft cores?  Certainly Xtensa was intended 
for that purpose.  ARM and MIPS have been common soft cores in ASICs for 
more than a decade. As is RISC-V soft core in FPGAs. 
https://en.wikipedia.org/wiki/Soft_microprocessor


In the past, there was some interest in ports of NuttX to softcore's 
like MicroBlaze.  But there hasn't been that kind of interest in recent 
times.


This would have been an innovation a decade or so ago, but I wonder 
about that now.




On 1/27/2024 1:18 PM, Victor Suarez Rovere wrote:

Tomek, as I clarified, porting NuttX to a FPGA will require a soft-core
CPU. I don't envision an opertaing system without a CPU, I see that like a
bad design choice if possible at all
Using a soft core and custom peripherals seems more valuable, even
including video output and USB host for mouse/keyboard handling. I've done
that for Micropython

On Sat., Jan. 27, 2024, 13:23, Tomek CEDRO  wrote:


Okay Victor, I was thinking about toolchain that you present in
"Sphery vs. Shapes" [1] to be adopted for NuttX on FPGA conversion
without a CPU design.. could you please send your full detailed
proposal then? :-)

[1] https://www.youtube.com/watch?v=hn3sr3VMJQU

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info

On Sat, Jan 27, 2024 at 7:22 AM Victor Suarez Rovere
 wrote:

Just clarifying, the idea to run NuttX on a FPGA is to instantiate a CPU
and peripherals on the FPGA and then run normally as if it were a MCU
Good thing is that you can change the CPU, add/remove peripherals, etc.

On Sat, Jan 27, 2024 at 12:32 AM Tomek CEDRO  wrote:


Hey there Victor! Thanks for your interest in NuttX port to FPGA!! :-)

No there is no such design yet.. you would have to create everything
from scratch.. so there is some serious amount of work to do.. but
imagine the results.. there will be just one step to ASIC!! :-)

I could  reconsider my mentor position in this kind of project because
I would really love to see the internals first hand.. with a help of
more experienced NuttX'er for sure as second mentor :-) :-)

I did a PONG on FPGA over 10 years ago but I would never dare to run
CPU-less-program directly on FPGA.. then RTOS.. then lets say Atari
emulator.. chip module player.. open source smart debug probe.. a
neural interface.. who knows.. would that even fit into the FPGA? :-)
:-)

I have a strong feeling this may be important.. but I leave the whole
decision to the PMC :-)

Have a good weekend my friends :-)
Tomek

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info

On Sat, Jan 27, 2024 at 2:55 AM Victor Suarez Rovere
 wrote:

I can certainly port NuttX to run on some FPGA boards too
Is any board already supported?

On Fri, Jan 26, 2024 at 4:40 PM Alan C. Assis 

wrote:

Hi Tomek,

His toolchain is focused on FPGA, but he is interested in

participating in

other projects for GSoC.

Also we need NuttX mentors, I will participate, but for each

project we

need two mentors, please let me know who could be interested to

help.

Best Regards,

Alan

On Fri, Jan 26, 2024 at 3:33 PM Tomek CEDRO 

wrote:

On Fri, Jan 26, 2024 at 2:07 PM Alan C. Assis wrote:

Dear NuttXers,
Please find below some ideas of projects to improve NuttX

during

the

GSoC2024:


https://cwiki.apache.org/confluence/display/COMDEV/GSoC+2024+Ideas+list

If you have some other ideas, please let me know.

I would like to propose Victor Suarez (CC) idea for porting

toolchain

NuttX RTOS directly to FPGA :-)




https://www.tomshardware.com/news/fpga-demo-shows-efficiency-gains-compared-to-x86-chip

Tomek

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info





Re: observations on kernel memory

2024-01-25 Thread Gregory Nutt
One possibility is that you are not properly harvesting thread or child 
exit status.  That is not a memory leak exactly.  You have to start 
threads in a detached state to prevent the return status from being retained.


Could something like that be happening?
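
For example, a minimal sketch of starting a worker thread in the detached 
state so that no exit status is retained (standard pthread calls, nothing 
NuttX-specific):

   #include <pthread.h>

   static void *worker(void *arg)
   {
     /* ... do the periodic I2C work here ... */

     return NULL;
   }

   int start_detached_worker(void)
   {
     pthread_attr_t attr;
     pthread_t tid;
     int ret;

     pthread_attr_init(&attr);

     /* Detached:  the OS does not keep the thread's exit status around
      * waiting for a pthread_join() that never comes.
      */

     pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

     ret = pthread_create(&tid, &attr, worker, NULL);
     pthread_attr_destroy(&attr);
     return ret;
   }

For child tasks, the equivalent is making sure that some parent actually 
calls waitpid() to harvest the exit status.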

On 1/23/2024 2:45 AM, yfliu2008 wrote:

Geogory,


The leak happens on the kernel side, not on the apps side. With the PROTECTED or KERNEL build, it is the "used 
Kmem" that keeps growing, while the "used Umem" does not grow. FLAT mode has only "Umem", so 
that is what grows in FLAT mode.


I don't know how to instrument ostest yet. With the help of "echo used > 
/proc/memdump", I can see the used memory nodes that were added to the list.



If needed I can create a Github ticket to track this so that some experts can 
reveal the root cause later.



Regards,
yf








Original
    
      


From:"Gregory Nutt"< spudan...@gmail.com ;

Date:2024/1/19 22:07

To:"dev"< dev@nuttx.apache.org ;

Subject:Re: observations on kernel memory


There is instrumentation in ostest that prints heap usage after each
test. That should isolate the memory leak.

On 1/19/2024 2:07 AM, yfliu2008 wrote:
 Dear experts,




 With "rv-virt/knsh32", I noticed the "used Kmem" shown by "free" command keeps 
growing after each run of "ostest", below is numbers I recorded during five runs:





  10652 12756 12804 12852 12900 12948




The first number is taken after boot; the other numbers are taken after each run of 
"ostest". Except for the increase of the first run, the increase of the other runs 
is about 48 bytes; not big, but it may eventually lead to a crash.




Similar observations can be made with "rv-virt/nsh", "canmv230/knsh", 
"canmv230/pnsh" etc.





 Is this a known behavior?





 Regards,

 yf


Re: On-demand paging and lazy loading on NuttX;

2024-01-24 Thread Gregory Nutt



On 1/24/2024 3:44 PM, Gregory Nutt wrote:
The on-demand paging logic is obsolete and incomplete as well as 
incorrect for the current architecture.  It was an experiment only for 
the LPC3131 when I was running out of a small SRAM with dynamic paging 
from SPI flash.  It really should be removed.


True on-demand paging as you envision should be implemented in the 
context of the kernel build mode (which did not exist in the time 
frame of the LPC3131) which already supports dynamic paging of the 
heap, but not the stack or instruction space.


There is also: 
https://cwiki.apache.org/confluence/display/NUTTX/On-Demand+Paging


There is also this better reference: 
https://cwiki.apache.org/confluence/display/NUTTX/Memory+Configurations 
.  It addresses both the on-demand paging feature (briefly) and compares 
it to kernel mode (in depth).





Re: On-demand paging and lazy loading on NuttX;

2024-01-24 Thread Gregory Nutt
The on-demand paging logic is obsolete and incomplete as well as 
incorrect for the current architecture.  It was an experiment only for 
the LPC3131 when I was running out of a small SRAM with dynamic paging 
from SPI flash.  It really should be removed.


True on-demand paging as you envision should be implemented in the 
context of the kernel build mode (which did not exist in the time frame 
of the LPC3131) which already supports dynamic paging of the heap, but 
not the stack or instruction space.


There is also: 
https://cwiki.apache.org/confluence/display/NUTTX/On-Demand+Paging


On 1/24/2024 2:14 PM, Tiago Medicci Serrano wrote:

Hi!

The topic is *on-demand paging and "lazy loading" on NuttX*.

Well, the on-demand paging article (and `CONFIG_PAGING`)
seems to have implemented the dynamic loading of instruction sections
(*.text) from non-volatile media to internal RAM (able to execute
instructions). This was done mainly for LPC31xx and stated that it isn't
complete yet (a kind of experiment) and it does contain page fault
handling logic.

On the other hand, one could use the Shared Memory Support mechanism of
paging allocation to implement on-demand paging. It even states that:


NOTE: An improved implementation may perform a “lazy” back up of the
physical memory, i.e., do not allocate the physical memory until the memory
is required, for example, when a page fault occurs when an application tries
to allocate the memory.


But it lacks the "page fault" mechanism to provide a newly mapped page for the
task that caused the exception.

I may be missing something about this subject, but have you thought about
it? The idea is to allocate bigger virtual memory to the tasks' stack (with
few mapped pages) and treat page fault exceptions when they occur (and
freeing physical memory when it isn't needed anymore, making it possible to
be mapped to other pages if needed). How far are we from providing that?

Any thoughts about it?

Best regards,



Re: Builtin app with its own command processor on stdin

2024-01-23 Thread Gregory Nutt

Are you running app in foreground or background from NSH, i.e.,

nsh> app
nsh> app &

The second should show this kind of interleaved behavior.  In the first, 
nsh should block until the app exits.


Do you have waitpid() enabled?  The first behavior depends on the 
availability of waitpid().


Does your app exit and return to nsh after each command?

On 1/23/2024 11:19 AM, Jernej Turnsek wrote:

Hi,
I would like to develop my own builtin application to process some custom
commands. This app is started from nsh over the serial console. I have
duplicated the code from ftpc example and when application is started it
start to process stdin for commands. But this functionality is mixed with
nsh own processing of commands, thus when I hit Enter, I get app> line,
when I hit Enter again I get the nsh> command line. Entering commands like
"help", I get hl on nsh and ep on my app command processor. I am confused
here. I did not test ftpc example, but I presume it is not working either.
Both apps are using readline_stream API for getting the commands.

Regards,
Jernej





Re: ideas for managing the two cores of K230

2024-01-21 Thread Gregory Nutt
In general you would have to have two copies of NuttX and run them AMP.  
A single OS with multiple CPUs would require SMP.  SMP generally 
requires that the CPUs appear identical in the two CPU address spaces 
(with the exception of interrupts).


There was a long email thread discussing this exact topic a couple of 
months back.  That email list is difficult to search, however.  
https://lists.apache.org/list.html?dev@nuttx.apache.org I couldn't find it.


There are lots of issues that you would have to address, like cache 
coherency and the fact that one CPU is slower than the other 
(destroying deterministic, real-time behavior).  There were other 
issues, but I don't recall them.


The best solution if the CPUs are not identical is to use AMP.  NuttX 
supports OpenAMP; you can see that in nuttx/OpenAMP.  There are a few 
examples in the tree too.


On 1/21/2024 8:22 PM, yfliu2008 wrote:

Dear experts,





The K230 chip I am using has two RISC-V cores with separate L2 cache, CLINT 
and PLIC; their frequencies are different, and the big core has a vector extension which 
the little one doesn't have. The big core can be turned on/off by the little 
core.







How can we manage these two cores with NuttX? do we have to run an instance of 
NuttX on each core?






Is OpenAMP also needed?







I guess some other multi-core chips might also have similar questions, so 
please teach if there are any ideas.





Regards,

yf


Re: observations on kernel memory

2024-01-19 Thread Gregory Nutt
There is instrumentation in ostest that prints heap usage after each 
test.  That should isolate the memory leak.


On 1/19/2024 2:07 AM, yfliu2008 wrote:

Dear experts,




With "rv-virt/knsh32", I noticed the "used Kmem" shown by "free" command keeps growing 
after each run of "ostest", below is numbers I recorded during five runs:





 10652 12756 12804 12852 12900 12948




The first number is taken after boot; the other numbers are taken after each run of 
"ostest".  Except for the increase of the first run, the increase of the other 
runs is about 48 bytes; not big, but it may eventually lead to a crash.




Similar observations can be made with "rv-virt/nsh", "canmv230/knsh", 
"canmv230/pnsh" etc.





Is this a known behavior?





Regards,

yf





Re: debugging user app crashes

2024-01-11 Thread Gregory Nutt
Everything you need to know is in that referenced wiki page: 
https://cwiki.apache.org/confluence/display/NUTTX/Debugging+ELF+Loadable+Modules


I don't know anything about QEMU or RISC-V.  But assuming that NSH is 
your init application...


 * NSH is not in memory when you start the OS; it resides in your init
   file.  The OS will automatically start NSH as the init application
   (if properly configured).
 * Since there is no gdbserver or ptrace, there is no elegant way to
   break into the program.  Some kludgey ways to do that are described
   in the referenced wiki page.
 * Then just follow the instructions in the wiki page:
 o Start the debugger
 o Load the symbol table of the NSH application.  You probably
   don't need a load offset as described in the wiki page.
 o Set a breakpoint at nsh_parse, and
 o Continue
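
For concreteness, those steps might look something like the following GDB 
session.  This is only a sketch: it assumes QEMU was started with -s -S so 
that its GDB stub listens on localhost:1234, and that the init ELF really 
is at ../apps/bin/init.  Adjust the toolchain prefix and paths for your 
setup; older GDB versions also need an explicit address with 
add-symbol-file.

   $ riscv64-unknown-elf-gdb nuttx
   (gdb) target remote localhost:1234
   (gdb) add-symbol-file ../apps/bin/init
   (gdb) break nsh_parse
   (gdb) continue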

I have nothing to add to that.

On 1/10/2024 7:36 PM, yfliu2008 wrote:

Spudaneco,



Can you share more details about how to break in "nsh_parse()" with 
"rv-virt/knsh64"?


Regards,
yf





Original

  


From:"spudaneco"



Re: debugging user app crashes

2024-01-09 Thread Gregory Nutt



On 1/9/2024 8:06 PM, yfliu2008 wrote:

Yes, in kernel mode each user space program has similar but isolated virtual 
address spaces. So it seems that we can't set break points to them like FLAT or 
PROTECTED build.


Not sure if we need some sort of debug agent in the KERNEL build so that it can help with 
debugging user space tasks. When use of the kernel build arises, this need 
will increase, I guess.


Regards,
yf
People have periodically suggested implementing ptrace for NuttX. For 
example, https://github.com/apache/nuttx/issues/2028


Re: debugging user app crashes

2024-01-09 Thread Gregory Nutt


On 1/9/2024 7:28 PM, Alan C. Assis wrote:

Hi Yf,
Just look at your System.map and you will see that all those main() became
_main.

It isn't possible to have a unique main() symbol, otherwise we will have a
duplicated symbol, also even if the compiler accepted that, how could NuttX
know you want to call the main from Hello instead of the main from Alarm?

BR,

Alan
This is not true in the kernel build mode.  All applications run at 
exactly the same virtual address and the entry point is main() for all 
of them.  In the kernel build, all applications are loadable modules.  
None of the symbols in the loadable modules will appear in System.map 
since each application module is separately compiled and linked.

Re: debugging user app crashes

2024-01-09 Thread Gregory Nutt

This is old and does not specifically address kernel mode, but I think most of 
the debug tips do apply in kernel mode too. The load address is a constant for 
kernel ELF modules so at least that part doesn't apply.  But the rest probably 
does.

On 1/9/2024 7:20 PM, yfliu2008 wrote:

Alan,


Thanks a lot for your warm replies.


These apps are built with kernel mode export libraries; their entry points are all 
"main" as they each run in their own user address spaces.


So when gdb connected with QEMU, the command "break qemu_rv_start" worked, but "break 
main" simply got "function not defined" even though I did "load ../apps/bin/init". It seems to me that 
the current gdb session is bound to the kernel ELF, not to the apps ELF.




Regards,
yf





Original

  


From:"Alan C. Assis"< acas...@gmail.com ;

Date:2024/1/10 8:11

To:"dev"< dev@nuttx.apache.org ;

Subject:Re: debugging user app crashes


Hi Yf Liu,
I don't know if this minidump server works on 64-bit and with QEMU, but
about the other question how to setup breakpoint in the NSH user space
program, just set the break point to the nsh_main if you want to debug the
NSH itself or set it to program_main to put the breakpoint to some user
program called from NSH. For example: setting it to hello_main will cause
the GDB to stop at hello_main when you type "nsh hello" and press enter.

BR,

Alan

On Mon, Jan 8, 2024 at 9:13 PM yfliu2008  wrote:

 Dear community,




 I have NSH command crashes within qemu-system-riscv64 using a testing
 rv-virt/knsh configuration, I wonder what tools can I use to find the crash
 stack?




 I have fed the console crash.log to minidumpserver.py as per this guide:
 https://nuttx.apache.org/docs/12.3.0/guides/minidumpserver.html, but
 it seems the tool might not support 64-bit yet?




 I've tried gdb connection with qemu-system-riscv64 as per this
 guide: https://nuttx.apache.org/docs/12.3.0/guides/gdbwithpython.html,
 but I don't know how to set a break point in the user space NSH program.




 Can anyone give me a pointer?




 Regards,

 yf


Re: debugassert vs assert in apps

2024-01-03 Thread Gregory Nutt

+1

On 1/3/2024 10:43 AM, Nathan Hartman wrote:

On Wed, Jan 3, 2024 at 11:22 AM Gregory Nutt  wrote:

On 1/3/2024 10:11 AM, Fotis Panagiotopoulos wrote:

That would seem a little odd since there was a PR a few years ago to

change all instances of assert/ASSERT to DEBUGASSERT to save code size.

How is that so?

As I see here:
https://github.com/apache/nuttx/blob/master/include/assert.h#L122
assert defined exactly as DEBUGASSERT.

There shouldn't be any code size difference at all.

When CONFIG_DEBUG_ASSERTIONS is not defined, then all occurrences of
DEBUGASSERT compile to nothing (actually the current version compiles to
an expression that is optimized out):

 #undef DEBUGASSERT  /* Like ASSERT, but only if
 CONFIG_DEBUG_ASSERTIONS is defined */

 #ifdef CONFIG_DEBUG_ASSERTIONS
 #  define DEBUGASSERT(f) _ASSERT(f, __DEBUG_ASSERT_FILE__,
 __DEBUG_ASSERT_LINE__)
 #else
 #  define DEBUGASSERT(f) ((void)(1 || (f)))
 #endif

This value, ((void)(1 || (f))), is completely removed by the optimizer
because of short-circuiting and dead code removal.  So the code is much
smaller if CONFIG_DEBUG_ASSERTIONS is not enabled.  If DEBUGASSERT() is
replaced with assert() then that code bloat would be unconditional,
although somewhat less than when assertions are enabled.

This same kind of logic also applies to  DEBUGPANIC and DEBUGVERIFY.

Xiao Xiang made that change to reduce the size as needed by their
products.  He is the person you should be talking to.


Maybe we need NX_DEBUGASSERT, NX_DEBUGPANIC, NX_DEBUGVERIFY. The NX
prefix would make it more clear that this is NuttX-specific. These
would be used in the OS only, not in applications, and
CONFIG_DEBUG_ASSERTIONS would continue to control if these are real or
optimized out.

Applications that need their own specific, Kconfig-controlled debug
assertion, should define one themselves, and their own Kconfig to
optimize it out. Rationale: If you are debugging an application,
enable assertions only in that application, not everywhere throughout
the system.

Cheers
Nathan





Re: debugassert vs assert in apps

2024-01-03 Thread Gregory Nutt

On 1/3/2024 10:11 AM, Fotis Panagiotopoulos wrote:

That would seem a little odd since there was a PR a few years ago to

change all instances of assert/ASSERT to DEBUGASSERT to save code size.

How is that so?

As I see here:
https://github.com/apache/nuttx/blob/master/include/assert.h#L122
assert defined exactly as DEBUGASSERT.

There shouldn't be any code size difference at all.


When CONFIG_DEBUG_ASSERTIONS is not defined, then all occurrences of 
DEBUGASSERT compile to nothing (actually the current version compiles to 
an expression that is optimized out):


   #undef DEBUGASSERT  /* Like ASSERT, but only if
   CONFIG_DEBUG_ASSERTIONS is defined */

   #ifdef CONFIG_DEBUG_ASSERTIONS
   #  define DEBUGASSERT(f) _ASSERT(f, __DEBUG_ASSERT_FILE__,
   __DEBUG_ASSERT_LINE__)
   #else
   #  define DEBUGASSERT(f) ((void)(1 || (f)))
   #endif

This value, ((void)(1 || (f))), is completely removed by the optimizer 
because of short-circuiting and dead code removal.  So the code is much 
smaller if CONFIG_DEBUG_ASSERTIONS is not enabled.  If DEBUGASSERT() is 
replaced with assert() then that code bloat would be unconditional, 
although somewhat less than when assertions are enabled.


This same kind of logic also applies to  DEBUGPANIC and DEBUGVERIFY.
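
A tiny standalone example (not NuttX code) shows why the disabled form 
costs nothing and, as a side effect, why the asserted expression must not 
have side effects that you depend on:

   #include <stdio.h>

   /* Stand-in for the disabled form of DEBUGASSERT shown above */

   #define DEBUGASSERT(f) ((void)(1 || (f)))

   static int g_ncalls;

   static int check_something(void)
   {
     g_ncalls++;
     return 1;
   }

   int main(void)
   {
     /* Because 1 || (f) short-circuits, check_something() is never called
      * and the whole expression is folded away by the optimizer.
      */

     DEBUGASSERT(check_something());
     printf("calls = %d\n", g_ncalls);   /* Prints 0 */
     return 0;
   }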

Xiao Xiang made that change to reduce the size as needed by their 
products.  He is the person you should be talking to.




Re: debugassert vs assert in apps

2024-01-03 Thread Gregory Nutt

On 1/3/2024 5:12 AM, Fotis Panagiotopoulos wrote:

Hello everyone,

I am glad that we all agree on this matter.

We can handle this in the following steps:

1. Ensure that any new PRs and apps follow this convention.
This is up to the reviewers, to enforce.

2. Get rid of all DEBUGASSERTs in apps.
Unfortunately, a quick grep yielded 3410 results...

How can this be managed?
   A: Find-and-replace DEBUGASSERT with assert. Is this safe to do so?
   Can I assume that assert is 100% safe to use where now DEBUGASSERT is
used?
That would seem a little odd since there was a PR a few years ago to 
change all instances of assert/ASSERT to DEBUGASSERT to save code size.

   B: Modify checkpatch.sh to catch this on modified files?
   Just like we gradually fixed licenses, coding style etc?
Any other ideas?

3. When the above are ready, move these definitions in nuttx/assert.h


An alternative:

3. Move definitions to nuttx/assert.h

4. Create an apps/include/assert.h that also defines DEBUGASSERT.  When 
the above are ready, just remove apps/include/assert.h.


In fact just keeping DEBUGASSERT defined in an apps-specific 
apps/include/assert.h is fine with me.  The problem now is that 
DEBUGASSERT is treated as if it were a standard OS interface.  But 
nothing in apps/include/ is standard and a DEBUGASSERT definition there 
is completely healthy.
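
A sketch of what such an apps-specific header could look like.  This is a 
hypothetical file, not something that exists in the tree today; a real 
header named assert.h would need to chain to the C library header, for 
example with the GCC include_next extension used below:

   /* apps/include/assert.h -- hypothetical apps-only wrapper */

   #ifndef __APPS_INCLUDE_ASSERT_H
   #define __APPS_INCLUDE_ASSERT_H

   #include <nuttx/config.h>
   #include_next <assert.h>    /* Pull in the real assert() declaration */

   #ifndef DEBUGASSERT
   #  ifdef CONFIG_DEBUG_ASSERTIONS
   #    define DEBUGASSERT(f) assert(f)
   #  else
   #    define DEBUGASSERT(f) ((void)(1 || (f)))
   #  endif
   #endif

   #endif /* __APPS_INCLUDE_ASSERT_H */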





Re: debugassert vs assert in apps

2024-01-02 Thread Gregory Nutt

On Tue, Jan 2, 2024 at 11:16 AM Fotis Panagiotopoulos
  wrote:

DEBUGASSERT shall only be used for the kernel.
assert shall only be used for apps.

Rationale:

DEBUGASSERT is a custom macro that only exists in the NuttX world.
As such it is best being used in NuttX-specific code.

assert on the other hand is a standard function, it exists in all systems.
It seems better suited for generic and portable code (i.e. apps).


We have no way to enforce this.  What we do do is at least make it clear 
that the application is using a kernel-only macro or function 
inappropriately by putting the macro definition or function prototype in 
a NuttX private file.  For example,


 * Remove the DEBUGASSERT (and ASSERT?) macros from include/assert.h
 * Add them to include/nuttx/assert.h
 * All applications that use DEBUGASSERT (or ASSERT) should be changed
   to use assert()
 * Kernel functions that use DEBUGASSERT (or ASSERT) would have to
   explicitly include <nuttx/assert.h>

Not foolproof, but that is how we have dealt with this in the past.  If 
it is in include/assert.h, it is essentially being advertised as OK for 
use by applications.  This is logically equivalent to the Linux/LibC 
conventions of putting Linux-specific files in /usr/include/linux vs. 
normal, standard application files in /usr/include.


There may be some issues in common application/kernel LibC files that 
use DEBUGASSERT (or ASSERT)


This same discussion should also apply to include/debug.h which contains 
only nonstandard debug macros.  Shouldn't it also be moved to 
include/nuttx/debug.h?




Re: NuttX on i486?

2023-12-28 Thread Gregory Nutt

There is a QEMU port to the i486 under boards/x86/qemu-i486

I am not sure of the state that it was left in.  I don't think that it has 
been touched for a while, so I would also expect some bit rot.


That port was tested  on a retro hardware board.  The person that 
developed that board reported that it worked fine.  But I don't have any 
definitive information on that so that is not helpful.


On 12/28/2023 4:58 PM, Nathan Hartman wrote:

Has anyone tried to run NuttX on a physical x86 i486 machine?

(I haven't tried yet. Asking first. :-)

Cheers,
Nathan


Re: Read response from console

2023-12-20 Thread Gregory Nutt
It might be easier to use if the POPEN option selected the FIFO and named 
pipe drivers in Kconfig.  That way the POPEN option will still appear in 
the menus even when FIFOs are disabled.
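
For the original problem (capturing the output of the i2c tool from an 
application), a minimal sketch once popen() is available might look like 
the following.  The i2c arguments are illustrative only; use whatever bus, 
address, and register options match your setup:

   #include <stdio.h>

   int read_i2c_response(char *buf, size_t buflen)
   {
     FILE *stream = popen("i2c get -b 0 -a 0x48 -r 0x00", "r");
     if (stream == NULL)
       {
         return -1;
       }

     /* Read whatever the i2c tool printed and parse the value out of it */

     if (fgets(buf, buflen, stream) == NULL)
       {
         buf[0] = '\0';
       }

     return pclose(stream);
   }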



On 12/20/2023 1:45 PM, Gustavo Soares wrote:
After enabling 'FIFO and named pipe drivers' I was able to use the popen() 
and pclose() functions. Just like Nathan said... shouldn't its 
dependency on this config appear somewhere?


Thank you!

*From:* Nathan Hartman 
*Sent:* Wednesday, December 20, 2023 16:40
*To:* dev@nuttx.apache.org 
*Subject:* Re: Read response from console
So, in Kconfig, Example/popen should "depends on" FIFO and named pipe 
drivers?



On Wed, Dec 20, 2023 at 2:08 PM Alan C. Assis  wrote:

That is because you forgot to enable:

Device Drivers --->
    [*] FIFO and named pipe drivers  --->

Remember: "git grep" is your friend!

BR,

Alan

On Wed, Dec 20, 2023 at 9:53 AM Gustavo Soares
 wrote:

Hi Alan!

 Yes, this message pops up when trying to compile:

the problem seems to be in the apps/system/popen.
Should any other configuration be applied besides enabling the
popen()/pclose() functions?


*From:* Alan C. Assis 
*Sent:* Wednesday, December 20, 2023 08:15
*To:* dev@nuttx.apache.org 
*Subject:* Re: Read response from console
Hi Gustavo,

Do you mean apps/examples/popen is not working?

I remember using popen() in the past to get the result of
other program output.

Best Regards,

Alan

On Mon, Dec 18, 2023 at 3:39 PM Gustavo Soares
 wrote:

Hello everyone!

 I have some i2c devices connected to my ESP32 and I can
get their values by using the i2c get command from NuttX.
Now I need my app to extract the value from the response,
so I enabled the popen() and pclose() functions but it is
still not possible to use them; is there any other config
that must be enabled? Even the popen example can't find
the method reference.


BR,
Gustavo.


Re: [Article] NuttX on Ox64 BL808 RISC-V SBC: Bare Metal Experiments

2023-12-16 Thread Gregory Nutt
The Zigbee support is interesting.  NuttX does not have a Zigbee stack, 
but does have IEEE 802.15.4 which works with the same physical radio.


The IEEE 802.15.4 logic is quite an accomplishment.  The original 
development was done by Anthony Merlino and Sebastien Lorquet.  I see 
that there have been many recent commits from Xiaomi folks too.


When Anthony used the XBee, which is a native Zigbee radio, he had to 
re-flash the part for use with IEEE 802.15.4.


I also ported/re-implemented 6LoWPAN for use with IEEE 802.15.4. That is 
a very powerful solution.


I had intended to support 6LoWPAN with Bluetooth radios too, but never 
got around to it.


On 12/16/2023 5:03 PM, Lee, Lup Yuen wrote:

Pine64 Ox64 BL808 (64-bit RISC-V SBC) is now supported in NuttX Mainline!
(Many thanks to my reviewers)

Ox64 BL808 is an affordable $8 SBC with:
- 3 x RISC-V Cores (T-Head 64-bit / 32-bit)
- Networking: Ethernet, WiFi, Bluetooth LE, Zigbee
- Peripheral Interfaces: UART (Serial), I2C, SPI, USB, MIPI CSI
- Interrupt Controller: CLINT and PLIC
- Memory Management: Sv39 MMU
- OpenSBI Supervisor Binary Interface and U-Boot Bootloader
- Bootable microSD: Boot NuttX and Linux

But there's a catch:
- Limited RAM (64 MB): It's not for gaming and servers
- Flash Memory is Empty: Needs a One-Time Flashing (via a USB Serial
Adapter)
- Barebones Form Factor: Fits on a Breadboard, Ethernet is optional

In this article we walk through the steps to:
(1) Flash into Ox64 the U-Boot Bootloader and OpenSBI
(2) Boot Linux on a microSD Card
(3) Do the same to boot NuttX RTOS
(4) Run some Bare Metal Experiments inside the NuttX Kernel

Check out the article here:
https://www.hackster.io/lupyuen/8-risc-v-sbc-on-a-real-time-operating-system-ox64-nuttx-474358

Lup



Re: Legal help to (Apache) NuttX RTOS: adding licenseed driver

2023-12-10 Thread Gregory Nutt



On 12/10/2023 4:05 PM, Shane Curcuru wrote:


I think we have to be careful with the word "contribution".  The ASF 
cannot accept any contribution that is licensed and copyrighted by 
some other entity.  To "contribute" the code is to donate the code to 
the ASF without retaining any claims to it. Then the code belongs to 
the ASF and can be re-licensed as Apache 2.0 with the ASF copyright.


Er, no, that's not how the ASF treats "contributions".  In general, 
when someone contributes their copyrighted work to an Apache project, 
they keep the copyright, and merely license enough rights to the ASF 
such that the ASF (through our projects) can then re-ship that 
contribution at any point in the future as part of a larger work, 
under terms like the Apache-2.0 license.

...[snip]...


That use of the word "contribution" is causing me a little cognitive 
dissonance, but I will adapt.  Synonyms for "contribution" include 
donation, gift, present, grant, etc., most of which are not interchangeable 
with "contribution" in this context.


How does the copyright holder "license enough rights to the ASF"?  I 
suppose that is via the license that they chose for the work, rather 
than through any specific interaction with the ASF?


Greg



Re: Legal help to (Apache) NuttX RTOS: adding licenseed driver

2023-12-10 Thread Gregory Nutt

On 12/10/2023 7:15 AM, Alan C. Assis wrote:

I understand your point. And in fact I think the issue is not your
contribution itself, but the future contribution from developers of RTEMS
and Linux that are using GPL.


I think we have to be careful with the word "contribution".  The ASF 
cannot accept any contribution that is licensed and copyrighted by some 
other entity.  To "contribute" the code is to donate the code to the ASF 
without retaining any claims to it. Then the code belongs to the ASF and 
can be re-licensed as Apache 2.0 with the ASF copyright.


What you are talking about has to be treated as third party code that 
resides with the Apache NuttX code.  It is not Apache NuttX code.


Re: AVR32 on NuttX

2023-11-20 Thread Gregory Nutt

But again, I think we can avoid this tortuous path and fix the CMake or
Makefile to build on Windows.

Fully understand.


It is likely that you could create a pure Windows native build (using a 
Windows native toolchain) or a pure Cygwin build (using a tool chain 
built under Cygwin) with CMake.  But I suspect that CMake would have 
difficulties with a mixed environment:  A Cygwin POSIX environment with 
a Windows native tool chain.  I could be wrong, but that is a perverse 
environment so I wouldn't have high expectations.


Re: AVR32 on NuttX

2023-11-18 Thread Gregory Nutt
AFAIK, the Cygwin build using a Windows native tool chain has been 
broken for years. There is no advice anyone can give that is just 
going to make it work. It is broken and no one has interest in fixing 
that which they have broken. People make many changes to the build 
system but never test on Cygwin with a native toolchain.


The basic Cygwin environment is a GNU/POSIX environment equivalent to 
the GNU/Linux environment. It is slower for two reasons: The fork() 
emulation under Cygwin is notoriously slow and the run-time virus 
detection interferes with every compilation. Tuning things and turning 
off virus checks can greatly improve performance. We have had 7 second 
Cygwin builds in the distant past.


The other issue is the toolchain that you use. If you build a GCC 
toolchain from scratch under Cygwin, then the build probably still 
works. That is a pure POSIX environment. But most people want to use a 
Windows native toolchain like the one that comes with Microchip Studio. 
That does not work in a POSIX environment and the complexity skyrockets:


 * The configuration has to be changed to specify that you use a
   Windows native tool chain (as reflected by the build WINTOOL setting).
 * All of the build paths are standard POSIX paths like
   /cygdrive/c/a/b/c. All of the paths provided as input to the Windows
   native tool have to be converted to Windows paths like C:\a\b\c
 * Similarly, all paths generated by the Windows native toolchain will
   be such Windows paths and must be converted to POSIX paths to be
   used by the build system.
 * In order for the back slash to be usable by make system or passed on
   as a command line argument, it has to be quoted in some fashion,
   perhaps like "C:\a\b\c"
 * When a quoted path is passed on the command line, it gets de-quoted.
   That is, if you passed "C:\a\b\c", it will be received like
   C:\a\b\c. It has been modified. If you tried using it again, it
   becomes C:abc which is useless.

This was all carefully balanced and used for many years. NuttX was 
developed primarily on Cygwin with a native tool chain. But that has 
been thrown in the shitter. At this point, it is not worth 
resurrecting it. Someone will just break it again since it does not get 
tested by the CI.


Xiaomi Press Release

2023-11-17 Thread Gregory Nutt

Worth taking a look at: https://news.itsfoss.com/xiaomi-vela-open-source/



Re: ID page of EEPROM

2023-11-07 Thread Gregory Nutt

On 11/4/2023 6:12 PM, Robert Middleton wrote:

I'm a little confused as to what the difference is between the MTD folder
and the EEPROM folder. It seems that both folders have support for the
at24xx and at25xx series of chips, so it's not obvious which one is
better.


MTD is more of a "heavyweight" solution that can support a higher level 
of functionality on Memory Technology Devices (MTDs) such as file 
systems or wear-leveling layers.


Most EEPROMs are small, however, and it may not be practical to support 
a file system on top of the EEPROM.  Hence the lightweight character driver 
layer from Sebastien Lorquet.  This lightweight layer supports only 
character-oriented I/O via lseek, read, and write.
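
A minimal usage sketch of that character driver interface follows.  The 
/dev/eeprom0 path and the offset are illustrative; the actual device name 
depends on the board bring-up code:

   #include <sys/types.h>
   #include <stdint.h>
   #include <fcntl.h>
   #include <unistd.h>

   int eeprom_read_bytes(uint8_t *buf, size_t len, off_t offset)
   {
     ssize_t nread;
     int fd;

     fd = open("/dev/eeprom0", O_RDONLY);
     if (fd < 0)
       {
         return -1;
       }

     lseek(fd, offset, SEEK_SET);    /* Byte offset within the EEPROM */
     nread = read(fd, buf, len);
     close(fd);

     return nread == (ssize_t)len ? 0 : -1;
   }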




Re: ID page of EEPROM

2023-11-03 Thread Gregory Nutt


On 11/3/2023 4:02 PM, Robert Middleton wrote:

Hi,

I am working on a project that will need to be able to read/write the
ID page of an EEPROM(currently a M95M02 SPI device from ST).  From the
code that I have seen, it seems that this chip is already supported,
but there is no code to read/write the device ID page.  I'm looking to
add support, but I have some questions:

1. There are currently no IOCTLs defined for EEPROM devices, should I
add a new IOCTL base or add an extra one to the MTD IOCTLs?

See below

2. The default IOCTL for the EEPROM returns -ENOTTY which seems weird,
should that be something else?


ENOTTY is correct.  See https://en.wikipedia.org/wiki/Not_a_typewriter 
or 
https://cwiki.apache.org/confluence/display/NUTTX/ENOTTY+ioctl%28%29+Return+Value


This confuses a lot of people and there are numerous stack overflow 
questions like yours.



3. Since you could read/write the ID page, should this really be
implemented as an IOCTL?  I would assume that the functions would need
to be more like the read/write syscalls where you pass in a buffer and
the length(assuming you don't want to read or write the entire page).
Would it make sense to make another node in /dev to access this page?


There is related logic for the AT24 EEPROM.  That part has an extended 
memory region that holds configuration data like the factory 
initialized MAC address of the SAMv71-Xult board.  
See boards/arm/samv7/samv71-xult/src/sam_ethernet.c:


  /* Configure the AT24 to access the extended memory region */

  ret = at24->ioctl(at24, MTDIOC_EXTENDED, 1);

That is not exactly the same functionality, but is, I think sufficiently 
related to justify using IOCTLs.


NOTE:  That only uses the IOCTL to switch modes of EEPROM. Normal reads 
and writes are then used to access the extended range:


      /* Read the MAC address */

      nread = at24->read(at24, AT24XX_MACADDR_OFFSET, 6, mac);

Using IOCTLs for reads and writes would be awkward.

Optionally, you could treat the control memory as a separate partition, 
registering the ID page as if it were a different device.


Notice that the AT24 does use an MTD IOCTL.  I don't know if we want to 
proliferate that naming or not.  It would be good to re-use it if the 
semantics of the name would permit you to re-use it.
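
If the mode-switch approach were followed for the M95M02 ID page, the user 
code might look roughly like the sketch below.  EEPIOC_SETIDPAGE is a 
made-up command for this sketch (no such ioctl exists today); the point is 
only that an ioctl selects the ID page and that ordinary read()/write() 
then access it:

   #include <fcntl.h>
   #include <unistd.h>
   #include <sys/ioctl.h>

   #define EEPIOC_SETIDPAGE 0x8e01   /* Hypothetical command number */

   int write_id_page(const void *buf, size_t len)
   {
     int fd = open("/dev/eeprom0", O_RDWR);
     if (fd < 0)
       {
         return -1;
       }

     ioctl(fd, EEPIOC_SETIDPAGE, 1);  /* 1 = select the ID page */
     lseek(fd, 0, SEEK_SET);
     write(fd, buf, len);
     ioctl(fd, EEPIOC_SETIDPAGE, 0);  /* 0 = back to the normal array */
     close(fd);
     return 0;
   }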


Re: Multiple IP addresses with a single GMAC (RMII PHY)

2023-10-26 Thread Gregory Nutt


On 10/26/2023 4:48 PM, Nathan Hartman wrote:

I wonder if this hasn't been solved already in some industry-accepted
manner. For example (not saying this is the solution, just thinking out
loud) there is a redundancy mechanism that makes two network interfaces
look like one, with automatic failover. What the OP wants looks like the
reverse of that: make one network interface look like two. It just feels
like a problem that might have been solved already.


As the OP mentioned in the first email, Linux already provides an 
ifconfig interface to do this.  I don't know how Linux implements 
that, but I am pretty certain that it would not involve the MAC layer.


https://www.garron.me/en/linux/add-secondary-ip-linux.html

Perhaps this is implemented via an IOCTL command or maybe via something 
like a netlink socket.


A network interface connects only to a single network so I don't think 
that the objective is to make it look like two networks. At least that 
doesn't make any sense to me. I don't even know what that would mean. 
The objective is, I think, to support two IP addresses on that one 
network. IP is not in the MAC layer. The MAC driver is part of OSI Layer 
2, the "Data Link" layer. NuttX supports a very modular interface 
between the network and the MAC driver. We don't want to violate that 
modularity.


https://en.wikipedia.org/wiki/OSI_model

IP is part of Layer 3, the network layer (or, in another conception, the 
TCP/IP layer). Within the network, the additional layers are not well 
modularized.


Re: Multiple IP addresses with a single GMAC (RMII PHY)

2023-10-26 Thread Gregory Nutt



On 10/25/2023 10:41 PM, Zhe Weng 翁喆 wrote:

But I do have another idea:
Maybe you can try to register two network devices even if you only have a 
single GMAC, just dup all the rx packets to both interfaces, and let all tx 
packets go out.  Since the IP addresses are different on each interface, the 
stack may drop packets on the incorrect interface and accept the correct one.  
It may have some problems in some situations like broadcast I guess, but I 
think it may have a chance to work.


I think there might be a couple of minor issues with that. First, the 
MAC should not know anything about IP addresses.  The OSI Seven Layer 
Model prescribes that architecture and we really should make some effort 
to conform to it: https://en.wikipedia.org/wiki/OSI_model


Without getting involved with IP routing, a single MAC driver could not 
handle multiple IP addresses.  It would have to parse the incoming 
packets, look up the IP address in the packet, match it with a device 
driver, then forward the packet to the correct driver structure.


Replies to the IP message could be awkward as well.  We could not reply 
using the same device.  The network would always have to notify the 
driver of data availability using that IP address to identify the 
correct device structure to use.


What about non-IP packets?  They would probably have to use the same 
device structure.


This would not be a systematic solution; it would apply to only a single 
MCU type.  A better solution would be generalized to the MAC of all 
current and future MCUs.  The solution should be generalized and common.


I think the lack of architectural modularity, generality, and overall 
complexity would make this approach undesirable.





Re: Multiple IP addresses with a single GMAC (RMII PHY)

2023-10-25 Thread Gregory Nutt

On 10/25/2023 9:27 AM, Matthias Roosz wrote:

Hi all,



forgive my ignorance, I’ve just started diving into networking topics
(including handling connections via Berkeley sockets). My final goal is to
make my device reachable via two separate IP addresses with a single PHY.
Once there are two IP addresses set up I'd bind to each specific address to
get two socket handles with which my application could work - does that
make sense (the use case would be in connection with an integrated switch,
e.g. a KSZ8463)?

In linux systems I think this can be achieved via ifconfig eth0:0
[newAddress] netmask [mask] up or by persisting the configuration in
/etc/network/interfaces (i.e. in my understanding by creating an alias
network interface).

I think you would need to implement something like this in NuttX too.

Now in Nuttx ifconfig does not allow this and there is no persistent
configuration. I can't imaging calling netlib_set_ipv4addr() /
netlib_set_dripv4addr () / netlib_set_ipv4netmask() twice with the same
ifname will work so the question remains: how do I get a second interface I
can work with? Do I even need to integrate more potent hardware such as a
KSZ9477S that has multiple physical GMACS?


The network interface is controlled by the network MAC driver. Its 
device structure carries the IP address and netmask that are associated 
with the MAC via IOCTLs, but I don't think that the driver itself knows 
anything about IP (it shouldn't!)


The IP address and netmask normally define a subnet and, hence, the 
network.  Normally two IP addresses would reside on the same network.  
In this case do you want to support two different IP addresses from 
different networks?  I hope not (only because I don't understand that 
model; perhaps it doesn't matter to the software).


The IP routing is purely based on the IP address stored in the device 
structure.  If you want routing to two different addresses on the subnet, 
then you would need two IP addresses in the device structure.  Again, 
these should have nothing to do with the MAC driver.  You would need (1) 
a network IOCTL and NSH ifconfig extension to set the second IP address 
(following the Linux convention), and (2) logic throughout the network 
code to handle the second address.  The IP routing is one place, but 
there are lots of others.  In most of these, where the first IP address 
is referenced, you would need to add logic to use the first IP address 
OR the second IP address.  This would mean both IPv4 and IPv6.


Supporting the second address has size/performance implications and 
probably should be a Kconfig option.


That would be a significant effort, but not overwhelmingly impossible.
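
To make the "first IP address OR second IP address" idea concrete, the 
check in the IPv4 input path might end up looking something like the 
sketch below.  The field d_secondaryaddr and the option 
CONFIG_NETDEV_SECONDARY_IPADDR are inventions for illustration; nothing 
like them exists in the tree today:

   #include <stdbool.h>
   #include <stdint.h>

   /* Illustrative stand-in for the real struct net_driver_s */

   struct net_driver_sketch_s
   {
     uint32_t d_ipaddr;              /* Primary IPv4 address */
   #ifdef CONFIG_NETDEV_SECONDARY_IPADDR
     uint32_t d_secondaryaddr;       /* Hypothetical secondary IPv4 address */
   #endif
   };

   static bool ipv4_addr_is_ours(const struct net_driver_sketch_s *dev,
                                 uint32_t destipaddr)
   {
     if (destipaddr == dev->d_ipaddr)
       {
         return true;
       }

   #ifdef CONFIG_NETDEV_SECONDARY_IPADDR
     if (destipaddr == dev->d_secondaryaddr)
       {
         return true;
       }
   #endif

     return false;
   }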




Re: nsh_fileapps and usage of sched_lock()

2023-10-25 Thread Gregory Nutt

On 10/25/2023 8:48 AM, Gregory Nutt wrote:

On 10/25/2023 8:18 AM, Alan C. Assis wrote:

On 10/25/23, Nathan Hartman  wrote:

On Wed, Oct 25, 2023 at 5:16 AM Ville Juven  wrote:


Hi all,

I noticed that when spawning a new task/process from a file in
nsh_fileapps, the scheduler is locked prior to calling posix_spawn(),
which
does the file loading into memory etc.

I noticed one issue with this; when the file size is large (in the order
of
MB) the scheduler is locked for very long periods at a time, in the order
of hundreds of milliseconds.
The same logic exists in apps/nshlib/nsh_builtin.c.  In fact, it looks 
like one was just cloned from the other.  Both should behave the same way.


Re: nsh_fileapps and usage of sched_lock()

2023-10-25 Thread Gregory Nutt

On 10/25/2023 8:18 AM, Alan C. Assis wrote:

On 10/25/23, Nathan Hartman  wrote:

On Wed, Oct 25, 2023 at 5:16 AM Ville Juven  wrote:


Hi all,

I noticed that when spawning a new task/process from a file in
nsh_fileapps, the scheduler is locked prior to calling posix_spawn(),
which
does the file loading into memory etc.

I noticed one issue with this; when the file size is large (in the order
of
MB) the scheduler is locked for very long periods at a time, in the order
of hundreds of milliseconds.



This sounds like a bug. The scheduler should not be locked during IO-bound
operations, since there is no way to know how long they will take. Loading
from flash could take hundreds of milliseconds (which is already terrible)
but imagine a different scenario where loading from a network with
connection problems outside of the device could lock the device for many
seconds!


If I understood this comment correctly:

   /* Lock the scheduler in an attempt to prevent the application from
    * running until waitpid() has been called.
    */

then maybe instead of forcing a sched_lock() we could change the
task_state to TSTATE_TASK_INACTIVE or some other state that prevents the task
from being scheduled again before posix_spawnp() has finished.

BR,

Alan


I think that the sched_lock() is not necessary.  Notice that this 
is only "an attempt" to keep the application from running and 
executing.  Without the sched_lock(), the task may run and exit before 
there is a chance to call waitpid(), which should affect only the user 
experience.


A good test to make sure that still works would be to remove the 
sched_lock/unlock and add a test case like:


   int main(int argc, char **argv)
   {
     return 0;
   }

That is the case that the logic is trying to avoid, but it seems like it 
should work fine.  Subsequent logic handles this case (but provides no 
user feedback).  See the comments:


  /* Wait for the application to exit.  We did lock the scheduler
   * above, but that does not guarantee that the application did not
   * already run to completion in the case where I/O was redirected.
   *
   * Here the scheduler will be unlocked while waitpid is waiting
   * and if the application has not yet run, it will now be able to
   * do so.
   *
   * NOTE: WUNTRACED does nothing in the default case, but in the
   * case where CONFIG_SIG_SIGSTOP_ACTION=y, the file app
   * may also be stopped.  In that case WUNTRACED will force
   * waitpid() to return with ECHILD.
   */

And

  /* If the child thread doesn't exist, waitpid() will return the
   * error ECHILD.  Since we know that the task was successfully
   * started, this must be one of the cases described above; we
   * have to assume that the task already exit'ed.  In this case,
   * we have no idea if the application ran successfully or not
   * (because NuttX does not retain exit status of child tasks).
   * Let's assume that it did run successfully.
   */

So it looks to me like the case where the program exits is already handled.
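
A stripped-down sketch of the tolerant wait that those comments describe 
(illustrative only, not the actual nsh_fileapps.c code):

   #include <errno.h>
   #include <sys/types.h>
   #include <sys/wait.h>

   static int wait_for_fileapp(pid_t pid)
   {
     int status = 0;

     if (waitpid(pid, &status, WUNTRACED) < 0)
       {
         if (errno == ECHILD)
           {
             /* The app already ran to completion and its exit status was
              * not retained; assume that it ran successfully.
              */

             return 0;
           }

         return -errno;   /* Some other waitpid() failure */
       }

     return WEXITSTATUS(status);
   }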





Re: Could I use nuttx core to replace linux core in ubuntu OS?

2023-10-14 Thread Gregory Nutt



On 10/14/2023 1:48 AM, guangyuan wang wrote:

Hi
I note that the Nuttx supports POSIX well. And I'd like to check whether I
could use Nuttx core to replace the Linux core in Ubuntu OS.


No, probably not.  It depends on what you mean and how much effort you 
want to put into it.  Remember that you don't really interface with 
Linux at all.  Linux does not provide the POSIX application interface.  
That POSIX interface comes from the C-library, probably GLIBC.  Hence 
they are together referred to as GNU/Linux.


So maybe you are really asking, "Can the Linux GLIBC be used with 
NuttX?" and the answer is NO.  Why not?  The interface between GLIBC and 
Linux is not POSIX.  It is a custom, non-standard, non-portable 
interface based on Linux system calls.


And the same is true for NuttX.  The NuttX LibC interfaces with the RTOS 
via non-standard, incompatible system interfaces.


So no you cannot use GLIBC with NuttX and so you cannot "replace the 
Linux core in Ubuntu OS" with NuttX.


You possibly could replace Linux+GLIBC with NuttX+C library but most 
Ubuntu applications would not run for the same reason that you cannot 
run most applications with uCLinux without modification of the 
application.  That is due to differences in the hardware that works only 
in the FLAT build (no MMU).  Could you replace Linux+GLIBC if you use 
the KERNEL build?  In principle, you probably could.  But I imagine that 
would take some effort.  Other people have been developing with the 
KERNEL build recently, so I don't know the current state and cannot 
address that.




Re: execv obsolet ?

2023-10-12 Thread Gregory Nutt

Here is some history and justification for the naming.

Uros Platise created the built-in tasks in 2012 and provided the 
naming.  I would have preferred another name to avoid the conflicts.


The meaning of external only makes sense with respect to what is internal.  
If you are thinking from the perspective of Bash, then internal is what is 
inside of Bash and external is what is outside of Bash, specifically in 
a file system.


If you are thinking about the whole FLASH image then what is internal is 
what is in the FLASH and external is what is outside of the FLASH, i.e., 
on a file system.  Uros was replacing file system tasks with internal 
flash tasks so from his point of view, the FLASH based tasks were 
internal (or built-in) and not external (on a file system).


On 10/12/2023 3:16 PM, Alan C. Assis wrote:

Hi Greg,

On 10/12/23, Gregory Nutt  wrote:

On 10/12/2023 2:18 PM, Alan C. Assis wrote:

The execv on Linux also fails when we call a builtin program, try it:

You are right, but the semantics are really confusing.  In Bash,
commands that are included inside of Bash are called built-in commands
and you cannot execute them.  We ignore that naming in NuttX because it
conflicts with some historical naming in NuttX:

Historically, external commands that are NOT part of NSH are called
built-in tasks.  You can execv them if they are in binfs (only).  But
you still can't execv the commands like ls that are a part of NSH.


Thanks for further explanation!

Yes, the naming is confusing, but I think it is difficult to find a
better name for those commands that are internal to nsh. At this point
calling them built-in like in Linux will make things more confusing
because the "external" apps are already called built-in.

It is something that everyone using NuttX grasps quickly enough.

BR,

Alan


Re: execv obsolet ?

2023-10-12 Thread Gregory Nutt



On 10/12/2023 3:26 PM, MIGUEL ALEXANDRE WISINTAINER wrote:

Some idea how to capture the data returned by exec ?


exec does not really return any data other than an int status code, which 
only indicates whether the task was correctly started.


If you want some text output, you would have to add instrumentation to 
the code of the task that is being started.  It should, for example, 
inherit stdout so printf() should work.




Re: execv obsolet ?

2023-10-12 Thread Gregory Nutt



On 10/12/2023 2:18 PM, Alan C. Assis wrote:

The execv on Linux also fails when we call a builtin program, try it:


You are right, but the semantics are really confusing.  In Bash, 
commands that are included inside of Bash are called built-in commands 
and you cannot execute them.  We ignore that naming in NuttX because it 
conflicts with some historical naming in NuttX:


Historically, external commands that are NOT part of NSH are called 
built-in tasks.  You can execv them if they are in binfs (only).  But 
you still can't execv the commands like ls that are a part of NSH.




Re: execv obsolet ?

2023-10-12 Thread Gregory Nutt


On 10/12/2023 2:05 PM, Alan C. Assis wrote:

Ah ok, I tested using the original code with "ls" instead "/bin/ls"
and the result as similar to NuttX: didn't print anything.


That is a different problem; it did not find ls.  On Linux, use execvp 
instead of execv or execve.  With execv or execve, you need to provide the 
full path to the executable.  Otherwise it won't find ls.
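
A tiny Linux-side illustration of the difference:

   #include <unistd.h>

   int main(void)
   {
     char *argv[] = { "ls", "-l", NULL };

     execvp("ls", argv);       /* Searches PATH, so "ls" alone is enough */
     execv("/bin/ls", argv);   /* No PATH search: the full path is required */
     return 1;                 /* Only reached if both exec calls failed */
   }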


Re: execv obsolet ?

2023-10-12 Thread Gregory Nutt


On 10/12/2023 2:31 PM, MIGUEL ALEXANDRE WISINTAINER wrote:

I can run (execv) i2c from hello 

works!

Now Gustavo can you this TIP to make work the PCA/PCF GPIO extender 

Thanks!

nsh> hello
Executing 1 /bin/i2c
Usage: i2c  [arguments]
Where  is one of:


There are several confusing things here.

/bin/i2c is the I2C tool, obviously.  How did it get into /bin? Are you 
using binfs? binfs will allow you to mount a "fake" file system that 
exposes the so-called "built-in tasks" as executable files of a file 
system. execv() and friends will only work on executable files.


More typically, executable files would be ELF modules in a real file 
system in FLASH or on an SD card, but binfs fakes a compatible environment.


ls, on the other hand, is not a built-in task.  It is not a task at all 
and can't be executed from execv().  It is an NSH command that can be 
executed only from the command line.  Even if you have binfs, it should 
not exist under /bin.


Re: Using static analysis to find (potential) bugs

2023-10-10 Thread Gregory Nutt
I'm not the right guy to talk about CI, but I believe that every modified 
C file is run through Lint.  If you have questions, I am sure someone 
else can expound.


On 10/10/2023 12:33 PM, Daniel Appiagyei wrote:

Hey,
I was running the [cppcheck](https://cppcheck.sourceforge.io/) static
analysis tool, found a few potential bugs, and wrote the following to share
how tools like this can help us ensure the integrity of our code. The
following are some bugs found. If anyone is interested in running cppcheck
on their project, scroll to the bottom for a HOW-TO. Is using a static
analysis tool like this something we'd be interested in adding to ci/cd?

*The following were found in NuttX 12.2.1*
## Null pointer dereference
 1: sched/signal/sig_dispatch.c:325
```
src/deps/nuttx/sched/signal/sig_dispatch.c:325:26: warning: Either the
condition 'info!=NULL' is redundant or there is possible null pointer
dereference: info. [nullPointerRedundantCheck]
 stcb, stcb->pid, info->si_signo, info->si_code,


  ^


src/deps/nuttx/sched/signal/sig_dispatch.c:329:36: note: Assuming that
condition 'info!=NULL' is not redundant

   DEBUGASSERT(stcb != NULL && info != NULL);


^


src/deps/nuttx/sched/signal/sig_dispatch.c:325:26: note: Null pointer
dereference

 stcb, stcb->pid, info->si_signo, info->si_code,


  ^


```

  2: sched/signal/sig_dispatch.c:326
```
src/deps/nuttx/sched/signal/sig_dispatch.c:326:9: warning: Either the
condition 'info!=NULL' is redundant or there is possible null pointer
dereference: info. [nullPointerRedundantCheck]
 info->si_value.sival_int,


 ^
src/deps/nuttx/sched/signal/sig_dispatch.c:329:36: note: Assuming that
condition 'info!=NULL' is not redundant
   DEBUGASSERT(stcb != NULL && info != NULL);
^
src/deps/nuttx/sched/signal/sig_dispatch.c:326:9: note: Null pointer
dereference
 info->si_value.sival_int,
 ^
```

 3: sched/signal/sig_dispatch.c:327
```
src/deps/nuttx/sched/signal/sig_dispatch.c:327:41: warning: Either the
condition 'info!=NULL' is redundant or there is possible null pointer
dereference: info. [nullPointerRedundantCheck]
 sigismember(&stcb->sigprocmask, info->si_signo) == 1 ? "YES" :
"NO");
 ^
src/deps/nuttx/sched/signal/sig_dispatch.c:329:36: note: Assuming that
condition 'info!=NULL' is not redundant
   DEBUGASSERT(stcb != NULL && info != NULL);
^
src/deps/nuttx/sched/signal/sig_dispatch.c:327:41: note: Null pointer
dereference
 sigismember(&stcb->sigprocmask, info->si_signo) == 1 ? "YES" :
"NO");
 ^
```

 4: src/deps/nuttx/libs/libc/stdlib/lib_mbstowcs.c:42
```
src/deps/nuttx/libs/libc/stdlib/lib_mbstowcs.c:42:36: error: Null pointer
dereference [nullPointer]

   return mbsrtowcs(dst, , len, NULL);


^
```

 5: nuttx/libs/libc/stdlib/lib_wcstombs.c:38
```
src/deps/nuttx/libs/libc/stdlib/lib_wcstombs.c:38:36: error: Null pointer
dereference [nullPointer]
   return wcsrtombs(dst, , len, NULL);
```

 6: drivers/net/netdev_upperhalf.c:168
```
src/deps/nuttx/drivers/net/netdev_upperhalf.c:168:42: warning: Either the
condition 'dev' is redundant or there is possible null pointer dereference:
dev. [nullPointerRedundantCheck]
   FAR struct netdev_upperhalf_s *upper = dev->d_private;
  ^
src/deps/nuttx/drivers/net/netdev_upperhalf.c:170:15: note: Assuming that
condition 'dev' is not redundant
   DEBUGASSERT(dev && pkt);
   ^
src/deps/nuttx/drivers/net/netdev_upperhalf.c:168:42: note: Null pointer
dereference
   FAR struct netdev_upperhalf_s *upper = dev->d_private;
  ^
```

 7: sched/task/task_init.c:90
```
src/deps/nuttx/sched/task/task_init.c:90:19: warning: Either the condition
'tcb' is redundant or there is possible null pointer dereference: tcb.
[nullPointerRedundantCheck]
   uint8_t ttype = tcb->cmn.flags & TCB_FLAG_TTYPE_MASK;
   ^
src/deps/nuttx/sched/task/task_init.c:96:15: note: Assuming that condition
'tcb' is not redundant
   DEBUGASSERT(tcb && ttype != TCB_FLAG_TTYPE_PTHREAD);
   ^
```

## Signed integer overflow
The [C standard](
https://www.gnu.org/software/autoconf/manual/autoconf-2.63/html_node/Integer-Overflow-Basics.html)
treats _signed_ integer overflow as undefined behavior.
 8: arch/arm/src/armv7-m/arm_fpuconfig.c:75
```
src/deps/nuttx/arch/arm/src/armv7-m/arm_fpuconfig.c:75:15: error: Signed
integer overflow for expression '1<<31'. [integerOverflow]

   regval &= ~(NVIC_FPCCR_ASPEN | NVIC_FPCCR_LSPEN);
```

 9: arch/arm/src/armv7-m/arm_hardfault.c:169
```
src/deps/nuttx/arch/arm/src/armv7-m/arm_hardfault.c:169:19: error: Signed
integer overflow for expression '1<<31'. [integerOverflow]
   

Re: Using static analysis to find (potential) bugs

2023-10-10 Thread Gregory Nutt

These reports with regard to DEBUGASSERT have been reported before.

DEBUGASSERT is defined in a couple of ways.  Here is one way:

   #define ASSERT(f) \
     do \
       { \
         if (predict_false(!(f))) \
           __assert(__ASSERT_FILE__, \
                    __ASSERT_LINE__, 0); \
       } \
     while (0)

   #ifdef CONFIG_DEBUG_ASSERTIONS
   #  define DEBUGPANIC()   PANIC()
   #  define DEBUGASSERT(f) ASSERT(f)
   #  define DEBUGVERIFY(f) VERIFY(f)
   #else
   #  define DEBUGPANIC()
   #  define DEBUGASSERT(f) ((void)(1 || (f)))
   #  define DEBUGVERIFY(f) ((void)(f))
   #endif

If CONFIG_DEBUG_ASSERTIONS is not defined, then there is no test for 
NULL pointers, so this is not the case the tool is complaining about.


If CONFIG_DEBUG_ASSERTIONS is defined, then __assert() is called when the 
test fails.  __assert() is a no-return function.  In that case, it is 
impossible to reach the code that uses the pointer.


So the tool is wrong.  I suspect that it is wrong because it does not 
understand no-return functions.  At any rate, no code change is appropriate.
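
For what it is worth, a minimal sketch of the pattern outside of NuttX 
(hypothetical names; the real macros live in include/assert.h):

    #include <stdio.h>
    #include <stdlib.h>

    /* The no-return attribute is the hint an analyzer needs: code after a
     * failed check is unreachable, so the later dereference can never see
     * a NULL pointer. */

    static void my_assert_fail(const char *file, int line)
                __attribute__((noreturn));

    static void my_assert_fail(const char *file, int line)
    {
      fprintf(stderr, "Assertion failed at %s:%d\n", file, line);
      abort();
    }

    #define MY_ASSERT(f)                          \
      do                                          \
        {                                         \
          if (!(f))                               \
            {                                     \
              my_assert_fail(__FILE__, __LINE__); \
            }                                     \
        }                                         \
      while (0)

    void use(int *p)
    {
      MY_ASSERT(p != NULL);  /* if p is NULL we never get past this line */
      *p = 1;                /* so this cannot be a NULL dereference     */
    }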



Re: PCF8575 driver for NuttX

2023-10-04 Thread Gregory Nutt


On 10/4/2023 8:02 PM, Gustavo Soares wrote:

Hi Greg!

Yes, if I use #include  this problem is solved, but 
then the code loses the reference to that specific struct causing another error.

And how exactly "the file does not exist"? I can open it and it is at the nuttx 
repo, I shared its link.
Because the compiler keeps a list of paths that it will include files 
from.  It will NOT include files from drivers/ioexpander because that is 
not in that include path list.  So "the file does not exist" in any 
directory in the list of include paths.  You could hack up the build 
system (but no PRs for that, please) or you could just copy the header 
file (a lot easier).  But you cannot include it from drivers/ioexpander. 
Since this is necessarily throw-away code, the easier solution is 
recommended.

Re: PCF8575 driver for NuttX

2023-10-04 Thread Gregory Nutt



Actually, the problem is very simple.  This file does not exist.

#include 

You probably wanted

#include 


The more I look at what you are trying to do, the more confused I get.

There is a file at drivers/ioexpander/pcf8575.h, but you are not 
permitted to include that.  That contains internal driver definitions 
that are not available to applications.  For testing purposes (ONLY!!) 
you could copy those definitions into your app.  (Your app could never 
be included upstream anyway because it violates far too many basic POSIX 
interfacing principles.  But for testing, you can do anything you want.).


The directory nuttx/drivers would be nonsense in that case because there 
is no nuttx/ directory in the repository.  You made that up and there is 
no include path to access it.


The include file path corresponding to:

   #include <nuttx/drivers/ioexpander/pcf8575.h>

is include/nuttx/drivers/ioexpander/pcf8575.h which does not exist.

This include file path

   #include <nuttx/ioexpander/pcf8575.h>

corresponds to:

include/nuttx/ioexpander/pcf8575.h which exists and is legal to 
include.  But it is a different file.





Re: PCF8575 driver for NuttX

2023-10-04 Thread Gregory Nutt




Why wouldn't the compiler access nuttx/drivers/ioexpander/pcf8575.h?

Access to internal driver files by applications is specifically 
forbidden.  This is part of the enforcement of the modular design to 
assure that people do not do that kind of thing:  Applications must 
not have access to the internal implementation of drivers.



Never mind,  I misread the ultra tiny fonts in the image. 
include/nuttx/ holds the public interface to the driver (vs. the 
internal private definitions for the driver in drivers/). Anything 
under include/ should be include-able by your application.



Actually, the problem is very simple.  This file does not exist.


#include 


You probably wanted


#include 



Re: PCF8575 driver for NuttX

2023-10-04 Thread Gregory Nutt




Why wouldn't the compiler access nuttx/drivers/ioexpander/pcf8575.h?

Access to internal driver files by applications is specifically 
forbidden.  This is part of the enforcement of the modular design to 
assure that people do not do that kind of thing: Applications must not 
have access to the internal implementation of drivers.



Never mind,  I misread the ultra tiny fonts in the image. include/nuttx/ 
holds the public interface to the driver (vs. the internal private 
definitions for the driver in drivers/).  Anything under include/ should 
be include-able by your application.

Re: PCF8575 driver for NuttX

2023-10-04 Thread Gregory Nutt


On 10/4/2023 7:02 PM, Gustavo Soares wrote:

Why wouldn't the compiler access nuttx/drivers/ioexpander/pcf8575.h?

Access to internal driver files by applications is specifically 
forbidden.  This is part of the enforcement of the modular design to 
assure that people do not do that kind of thing:  Applications must not 
have access to the internal implementation of drivers.




Re: different IP than expected

2023-09-21 Thread Gregory Nutt

On 9/21/2023 6:34 AM, Simon Filgis wrote:

When I started with networking, I set the IP via kconfig. I remember that
it was necessary to enter the same IP at two different places. Netlib and
FTP server in my case. Maybe your IP is overwritten by another app? A
buildin app that you use for experiments?


Something as simple as stack overflow might be the issue.  There are 
some huge arrays on the stack that might be causing issues:


    struct sockaddr_in address;
    int opt = 1;
    int addrlen = sizeof(address);
    char buffer[2048] = {0};
    char response[4096] = {0};
    const char *file = "/data/client.html";

So it is going to need a stack allocation of something like 8Kb.
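
A minimal sketch of one workaround, moving the big buffers to the heap 
(names taken from the snippet above; the rest of the handler is elided). 
Increasing the task's stack size (e.g. CONFIG_DEFAULT_TASK_STACKSIZE or 
the size passed to task_create()) is the other obvious fix.

    #include <stdlib.h>

    int handle_request(void)
    {
      char *buffer   = calloc(1, 2048);  /* was char buffer[2048] on the stack   */
      char *response = calloc(1, 4096);  /* was char response[4096] on the stack */
      int ret = -1;

      if (buffer != NULL && response != NULL)
        {
          /* ... build and send the response as before ... */

          ret = 0;
        }

      free(response);
      free(buffer);
      return ret;
    }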


Re: different IP than expected

2023-09-21 Thread Gregory Nutt

On 9/21/2023 6:53 AM, Petro Karashchenko wrote:

setting "address.sin_addr.s_addr" + "bind" is kind of a way you can use to
receive requests only from an interface that has a specific IP address.
This is useful if your system has multiple network interfaces and you want
to provide some kind of isolation.
"address.sin_addr.s_addr = INADDR_ANY;" + "bind" is a way that you say "I
want to receive requests from any interface".


That is the way other server logic works.  See 
apps/netutils/netlib_server.c and apps/netutils/netlib_listenon.c.


The only part that confuses me is that INADDR_ANY is zero. address is 
passed to bind() as read-only, const.  So address.sin_addr should still 
be zero.
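
For reference, a minimal sketch of the two bind() variants being discussed 
(IPv4 only, error handling omitted; the port and address are just examples):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int open_listener(int any_interface)
    {
      struct sockaddr_in addr;
      int sd = socket(AF_INET, SOCK_STREAM, 0);

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port   = htons(80);

      if (any_interface)
        {
          addr.sin_addr.s_addr = INADDR_ANY;  /* accept on any interface */
        }
      else
        {
          /* accept only on the interface that owns this address */

          inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        }

      bind(sd, (struct sockaddr *)&addr, sizeof(addr));
      listen(sd, 5);
      return sd;
    }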




Re: different IP than expected

2023-09-21 Thread Gregory Nutt



On 9/21/2023 5:06 AM, Gustavo Soares wrote:
Why is it a different IP? The server is listening to one but this 
method returns another one.


You provide almost no information for anyone to really help you.

Where is the server and client?  It sounds like you are saying that the 
HTTP server is on some remote platform.  Is that right? The client logic 
is on the NuttX target, correct?


You seem to be saying that you connect to the remote server, then do an 
HTTP GET.  The remote IP address in the returned message is not the same 
as the IP address that you connected to.  That is what I am hearing.


What does this have to do with NuttX?  I suggest that you run a 
sniffer like Wireshark so you can see the real point-to-point 
traffic.  The content returned to NuttX has nothing to do with NuttX.


Or have I got that wrong?  You need to communicate more.



Re: [EXT] Re: CTU CAN FD driver multi-licence for Nuttx

2023-09-14 Thread Gregory Nutt

On 9/14/2023 8:52 AM, Alan C. Assis wrote:

I think the authors of the driver can release the source code using
two or more licenses.
We already have an example of it in NuttX: SocketCAN

Since they add the OR license clause it is fine.


The part that bothers me is that I cannot tell which license applies to 
the code from looking at the SPDX.  If I am contemplating using NuttX, I 
could not say which license is selected in this context.  I suppose the 
authoritative document is the SPDX specification, but this does not make 
anything clear to me: 
https://spdx.github.io/spdx-spec/v2-draft/SPDX-license-expressions/#d42-disjunctive-or-operator.


apps/LICENSE lists this license for canutils/candump, cansend, and 
libcanutils, but does not clarify the OR condition.


I would be hesitant to use the code in my project because of this doubt.



Re: defconfig options being ignored

2023-09-13 Thread Gregory Nutt

On 9/13/2023 11:54 AM, Mike Moretti wrote:

Hi,

I'm trying to create a custom board config for our custom board that 
includes an ESP32S3.  I'm attempting to include all the pin and 
configuration definitions for the board in the defconfig file. 
Unfortunately, after running ./tools/configure.sh using my 
configuration, some of the options in my defconfig appear to be 
getting ignored and not ending up in .config.  Particularly, 
CONFIG_ESP32S3_SPI=y and CONFIG_ESP32S3_STORAGE_MTD_OFFSET and SIZE.  
I can't seem to figure out why this is happening.


Thanks,
-m


Did you add those settings to the defconfig by hand?  That usually 
doesn't work.


The behavior that you are seeing usually means that the settings in the 
defconfig are not valid or that they do not have their "depends on" 
prerequisites met.





Re: File transfer

2023-09-09 Thread Gregory Nutt

On 9/9/2023 9:43 AM, Gustavo Soares wrote:

So I have to create and mount a directory to my HTML file and transfer it to 
this directory? Even if I have the HTML file embedded on compile?

No transfer is necessary.  The ROMFS file system can be built into the code.
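
A minimal sketch of the application-side mount, assuming the ROMFS image was 
generated with genromfs, linked into the firmware, and registered as a ROM 
disk at /dev/ram0 by the board bring-up logic (the device name and mount 
point are just examples):

    #include <sys/mount.h>

    int mount_romfs_content(void)
    {
      /* After this, the HTML file is visible at, e.g., /data/client.html */

      return mount("/dev/ram0", "/data", "romfs", MS_RDONLY, NULL);
    }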


Re: File transfer

2023-09-09 Thread Gregory Nutt

On 9/9/2023 9:21 AM, Gustavo Soares wrote:

Maybe I don't have to transfer the file to the tmp folder. I think the folder 
the app is reading is not the app folder but the root, so it won't find the 
file. Is it possible to access the built-in apps folder on NuttX?  So I can 
make the app read its folder.


I don't understand this.  There is no built-in apps folder??? This works 
just like it does in Linux:


A folder is a directory in a volume.  You must explicitly mount the 
volume using the mount() interface (or perhaps the NSH mount command).  
Often mount() is called from board level logic on power up using a file 
system image in memory or on media like an SD card.  There are lots of 
examples in the board directory.


Only your logic knows where you mounted the volume.

Even the tmp/ directory will not exist unless you create and mount it 
yourself at tmp/.  It does not appear magically.




Re: File transfer

2023-09-09 Thread Gregory Nutt




*And about rz, do you use the same console as the shell? Zmodem*

Yes.


That might be a problem with that.  In the past, NSH used special logic 
to handle the requirements of a COOKED mode terminal: character 
echo, CR-LF expansion, etc.  These changes were implemented (mostly) in 
NSH so that NSH could share the terminal with applications that needed 
RAW mode.


But that has changed recently.  The serial console is now in a true 
COOKED mode that is in the serial driver level and independent of NSH.  
That means that if you want to run the console in RAW mode, you will 
need to use termios commands to explicitly switch to RAW mode like:


 * Save current termios settings
 * Force RAW mode

And when finished:

 * Return to COOKED mode (using saved settings)

Do you think that could be part of the issue?
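
In case it helps, a minimal sketch of that save/force/restore sequence 
(assumes termios support, CONFIG_SERIAL_TERMIOS, is enabled; cfmakeraw() is 
used for brevity, clearing the relevant flags by hand works too):

    #include <termios.h>

    static struct termios g_saved;

    void console_raw_mode(int fd)
    {
      struct termios raw;

      tcgetattr(fd, &g_saved);      /* save current termios settings */
      raw = g_saved;
      cfmakeraw(&raw);              /* force RAW mode                */
      tcsetattr(fd, TCSANOW, &raw);
    }

    void console_cooked_mode(int fd)
    {
      tcsetattr(fd, TCSANOW, &g_saved);  /* return to COOKED mode */
    }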


Re: [EXT] Re: CTU CAN FD driver multi-licence for Nuttx

2023-09-07 Thread Gregory Nutt




I think GPL code shouldn't be included directly, but I think it is
fair to allow GPL code to be downloaded by the build system in case the
user selected it.


We created this directory specifically to hold forks of GPL code that 
can be used with NuttX:  https://github.com/NuttX .  I am not sure of 
the current state; it hasn't been used or maintained.





Re: CTU CAN FD driver multi-licence for Nuttx

2023-09-07 Thread Gregory Nutt

On 9/6/2023 5:15 AM, alin.jerpe...@sony.com wrote:

There are known CAN sources that have GPL code and have been documented in the 
LICENSE File

All this code is protected under the include GPL code config option and 
disabled by default

Is this approach approved, or should we completely remove the GPL code from 
NuttX?


My understanding is that there can be no GPL code in any way in the 
Apache project repository.


In the case of the CANFD code, it has a dual license, GPL or BSD-3.  I 
don't recall all of the details but, as a podling at the time, we 
discussed this pretty thoroughly with our mentors and the inclusion of 
the dual licensed third party code was found acceptable.  Justin McClean 
was involved in this discussion.  I briefly looked for the e-mail thread 
that addressed this, but I could not find it so my recollection might be 
faulty.


Any pure GPL should be removed in my opinion.



Re: 500ms delay between HTTP request execution

2023-08-31 Thread Gregory Nutt



On 8/31/2023 3:39 PM, Petro Karashchenko wrote:

Hello,

I'm having an issue with a network based application on NuttX.
I have a HTTP server that is built with the help of the "netlib_server"
interface. When I'm trying to access my server with curl multiple times in
a row I see that there is a gap close to 500ms on the PC side when
accessing the device.

I think maybe this is somehow related to a case where each time a new
request is accepted the client socket is created, handled and closed? I
will deep dive into this of course and will analyze Wireshark logs, but
maybe someone has any guesses or met similar issues in the past? I mean
maybe there is some kind of "blacklist" or TCP/socket configuration that
prevents new connections from being established for a certain period of
time?

Best regards,
Petro


You will have to use Wireshark to get to the bottom of this.  It sounds like 
there is some delay or timeout that is causing the issue.  Perhaps in the 
3-way handshake.  It takes 500ms for a missing ACK to be detected.  Are 
you using delayed ACKs?  That delay is 500ms too.
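
If it turns out to be the classic delayed-ACK/Nagle interaction, one cheap 
experiment to run alongside the Wireshark capture is disabling Nagle on the 
accepted socket (a sketch only; assumes TCP_NODELAY is supported in your 
configuration, and it will not fix a handshake problem):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void disable_nagle(int sd)
    {
      int on = 1;

      /* Send small segments immediately instead of waiting for an ACK */

      setsockopt(sd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    }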





Re: write method for audio driver?

2023-08-27 Thread Gregory Nutt



On 8/27/2023 7:49 PM, Nathan Hartman wrote:


I don't know whether you've had a chance to read [1] yet, but if not, it
might help visualize the two-part structure of device drivers in NuttX. (If
you've already read it, then never mind, and apologies for the noise. :-)


The audio subsystem is different from typical character drivers (I used 
the name lower half, but that isn't really accurate).  It is a 
chain of audio processing nodes; audio buffers are passed from one node 
to the next.  Two special nodes are the source and the sink.  The Class 
D driver is a sink node.  It receives audio buffers and disposes of 
them... usually by interacting with hardware to "make sounds".


The source node is the one that receives unprocessed audio data.  It is 
audio/audio.c and it registers itself as a character driver.  It receives 
unprocessed audio data from the application and passes the audio buffers 
down the chain.


The board level logic can string together the audio processing nodes in 
most any order.  But only the source node is registered by calling 
audio_register() in audio/audio.c which registers the character driver 
in /dev/audio/XXX where only XXX is provided by the board logic.  None 
of the other nodes are capable of being registered as character devices.
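
As a rough sketch of what that board logic typically looks like (the codec 
initializer name is hypothetical; the wm8904 snippet later in this thread 
shows a real instance):

    #include <nuttx/audio/audio.h>
    #include <nuttx/audio/pcm.h>

    /* Hypothetical initializer for the hardware codec lower half */

    extern FAR struct audio_lowerhalf_s *board_codec_initialize(void);

    int board_audio_initialize(void)
    {
      FAR struct audio_lowerhalf_s *codec;  /* hardware sink node        */
      FAR struct audio_lowerhalf_s *pcm;    /* software PCM decoder node */

      codec = board_codec_initialize();
      pcm   = pcm_decode_initialize(codec); /* chain the decoder in front */

      /* Registers /dev/audio/pcm0; the nodes behind it never appear as
       * separate character devices. */

      return audio_register("pcm0", pcm);
    }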


Audio applications like nxplayer may, for example, open a file, read 
audio data, and pass the audio data to the audio/audio.c driver which 
will transfer it down the chain, eventually, "playing" the file.


Only the audio/audio.c is registered and it is the only character 
driver.  It has no lower half.





Re: write method for audio driver?

2023-08-27 Thread Gregory Nutt



On 8/27/2023 4:24 PM, Tim Hardisty wrote:

SAMA5D2 specific, I should add.


There is no support for the Class D driver for any SAMA5D2 board in the 
source tree.





Re: write method for audio driver?

2023-08-27 Thread Gregory Nutt


On 8/27/2023 4:22 PM, Tim Hardisty wrote:

The classD driver DOES register itself as /dev/audio/pcm0, and works correctly 
as such with nx_player.


You are mistaken.  The Class D driver does not register itself as a 
character driver.  grep -r register_driver arch/arm/src/sama5/ proves 
that is true.


The logic that registers /dev/audio/pcm0 is in audio/audio.c.

audio/Kconfig defines the directory.  Nothing in the code base except 
files in audio/ knows about that directory or is capable of registering 
a driver there.


   config AUDIO_DEV_PATH
    string "Base path for Audio devices"
    default "/dev/audio"

   audio/audio.c:

   audio/audio.c:  FAR const char *devname = CONFIG_AUDIO_DEV_PATH;

Most certainly the class D lower half cannot be registered as a driver.  
That is totally impossible.  It is not a character driver; it is only 
the lower half of a driver.  It contains no usable driver logic; only 
the low level hardware interface.


You must be using audio_register() in your board start up logic. That is 
part of the configuration of the audio chain.  It does NOT produce a PCM 
driver.  The driver is the audio chain that starts in audio/audio.c.  
Class D is the terminus of the audio chain and cannot be accessed using 
/dev/audio/pcm0.  It has that name only because that is the name that 
you gave it when you called audio_register.  The entire audio chain 
supports PCM, but the driver is the audio.c driver.



The conversions are wav (or mp3, etc.) to pcm.

Don’t know if we’re talking at cross purposes or the classd driver works in 
a way it shouldn’t!!!


In the previous email I listed the only place in the code base where a 
driver is registered at /dev/audio/pcm0 for any SAMA5 board.  The only 
cases in the code base do PCM to WAV conversion. That is done by 
audio/pcm_decode.c.  You may have some other audio logic that uses the 
Class D to handle PCM directly, I don't know. If so, it is not in the 
source tree.  Certainly that driver is not sam_classd.c.


sam_classd.c is not a driver in the POSIX sense.  It is only a hardware 
lower half.  It could never possibly be registered as a driver.  It 
doesn't support the standard driver calls.  In the terminology of the 
audio subsystem, it is called an audio driver. It is the sink at the end 
of the audio chain managed by the logic in audio/.




Re: write method for audio driver?

2023-08-27 Thread Gregory Nutt
Also, I don't think the /dev/audio/pcm0 device you are talking about is 
what you think it is.  It is a character driver but not the Class D 
lower half.  So, yes it can be opened.


Caveat:  It has been ages since I worked with the audio subsystem so I 
might be completely wrong.


/dev/audio/pcm0  is the audio subsystem interface device.  The "pcm" 
indicates that it used the PCM software decoder (that will convert PCM 
file data to WAV).  It gets set up like:


   boards/arm/sama5/sama5d4-ek/src/sam_wm8904.c:

  pcm = pcm_decode_initialize(wm8904);
  snprintf(devname, 12, "pcm%d",  minor);
  ret = audio_register(devname, pcm);

And a character driver is registered by the audio subsystem in 
audio/audio.c:


   int audio_register(FAR const char *name, FAR struct
   audio_lowerhalf_s *dev)
   {
   ...

  audinfo("Registering %s\n", path);
  return register_driver(path, _audioops, 0666, upper);
   }

Where g_audioops is the character driver operations structure:

   static const struct file_operations g_audioops =
   {
  audio_open,  /* open */
  audio_close, /* close */
  audio_read,  /* read */
  audio_write, /* write */
  NULL,    /* seek */
  audio_ioctl, /* ioctl */
   };

So the registered pcm0 is the standard audio buffer chain configured for 
PCM file to WAV conversion and terminating with a wm8904 DAC.


I don't believe that there is any way to get the Class D audio_ops_s as 
a driver under /dev.  Nothing like that is supported.




Re: write method for audio driver?

2023-08-27 Thread Gregory Nutt
I suppose that the easiest thing of all would be to put the tones in 
small PCM files in a built-in ROMFS file system


On 8/27/2023 1:47 PM, Gregory Nutt wrote:

On 8/27/2023 5:41 AM, Tim Hardisty wrote:

Or am I completely missing the point and I just need to use the nxaudio 
system!!??


I think you have been missing point.  You have two options:

 1. As you suggest, extend nxaudio to be a source of tone data (just
    like audio file data) and use the audio subsystem to pass that
    along the audio chain as any other audio.  Or
 2. Create a character driver that wraps the audio_ops_s so that it can
    be directly accessed from applications.  You could use tone.c as a
    model to get started.  This would not be part of the OS audio
    subsystem but a kludge hanging off the side of it.  But simpler.

One thing that is certain is that you cannot open and use the audio 
device interface directly from the application.  Only via the audio 
subsystem or via some hacky custom logic.


That is equivalent to saying that you want to use the Ethernet driver 
directly without going though the network layer.  For the network 
layer, you would use the raw socket interface to accomplish the same 
thing without using the Ethernet driver directly.


We had the same kind of discussion about radios a few years back.  
Capable radios are treated as network devices and support full TCP/UDP 
communications.  Less capable radios use simpler character drivers.  
And, worse, some radios use both network and character drivers which 
ends up being quite a mess.

