On Tue, Dec 30, 2025 at 5:22 PM raiden00pl <[email protected]> wrote:

> Proof of concept for running NTFC on CI is available with this PR:
> https://github.com/apache/nuttx/pull/17728
>
> In this PR only `sim/citest` has been migrated to NTFC.
> I'll take care of the rest of `citest` configuration once this PR is
> accepted.
>
> Here is the test report from CI:
>
> https://github.com/szafonimateusz-mi/nuttx/actions/runs/20580444513/job/59106747416#step:11:154
>
> The main benefit of migrating to NTFC is the increased number of supported
> tests. By enabling additional applications in sim/citest, we can enable
> additional tests, because the test cases are automatically detected by the
> tool.
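The automatic detection works because test cases are ordinary pytest tests: any `test_*` function in the test tree is collected. A minimal sketch of what such a case could look like (the `Dut` class and `run_cmd` method are illustrative stand-ins, not the actual NTFC API):

```python
# Minimal sketch of an automatically detected test case. `Dut` and
# `run_cmd` are hypothetical names standing in for whatever object NTFC
# hands to a test; a real DUT would send the command to the NuttShell
# over sim/qemu/serial and capture the console output.

class Dut:
    """Stand-in for the device-under-test object."""

    def run_cmd(self, cmd: str) -> str:
        # Fake the output of the `hello` application for this sketch.
        return "Hello, World!!" if cmd == "hello" else ""

def test_hello():
    # pytest collects this automatically because of the test_ prefix,
    # which is why enabling an app in the defconfig is enough for its
    # test to be picked up.
    dut = Dut()
    assert "Hello, World!!" in dut.run_cmd("hello")
```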
>
> The main drawback? The NTFC sources and test cases are currently in my
> private repos;
> these should be Apache repositories.
>
>
Shall we ask the Apache infra team to create a new repo to host this new
automation test framework?


> Here is a Github Issue with the migration plan:
> https://github.com/apache/nuttx/issues/17717
>
> Wed, Nov 12, 2025 at 11:41 AM raiden00pl <[email protected]> wrote:
>
> > NTFC now supports communication with the DUT via the serial port, so it's
> > possible to use it for hardware testing. At this point, the DUT reset is
> > done using a system command provided in the YAML configuration. Not the
> > best solution, but sufficient for the moment.
> >
> > I've also added a test example for the nucleo-h743zi. By default the
> > st-flash tool is used for DUT reset, but this can be easily changed in
> > the configuration file to something else.
> >
> > Fri, Nov 7, 2025 at 1:35 PM raiden00pl <[email protected]> wrote:
> >
> >> > Right off the bat, I suggest that those not familiar with Pytest
> >> > check some examples and get a good grasp of fixtures, scopes and
> >> > parametrization.
> >>
> >> That's right, knowledge of Pytest is basically the only requirement for
> >> writing your own tests. But even without knowledge of Pytest's more
> >> advanced features, it's possible to write tests with just simple Python.
> >>
> >> > How could we approach device specific functionalities?
> >> > We have a DeviceQemu and DeviceSim, so we would need a generic
> >> > DeviceSerial to deal with UART: custom baud, stop bits, etc (not that
> >> > difficult). Then, maybe we need a vendor specific class so we can
> >> > operate properly. Take Espressif devices as example: we would need a
> >> > device or vendor specific class (DeviceEspressif) that naturally
> >> > inherits the DeviceSerial but would support entering download mode,
> >> > rebooting, parsing the initial boot logs, etc. This would allow the
> >> > Product class to operate properly when the target is from Espressif.
> >> > Same could be done for other vendors if needed, of course.
> >>
> >> This is where YAML configuration comes in. Basically, we can add any
> >> customization parameter there. I think that all real hardware
> >> with serial port communication will need similar functionality.
> >> It would be best to avoid dependence on vendor-specific parameters
> >> and try to generalize DUT as much as possible.
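The layering discussed above could be sketched like this. The class and method names are illustrative (NTFC's real classes are DeviceCommon, DeviceQemu and DeviceSim; DeviceSerial and DeviceEspressif are the proposed additions): generic serial handling lives in one class, vendor quirks in a thin subclass.

```python
# Hedged sketch of the proposed device-class layering; method bodies are
# placeholders, not real UART code.

class DeviceCommon:
    """Generic DUT interface that Product would operate on."""

    def run_cmd(self, cmd: str) -> str:
        raise NotImplementedError

class DeviceSerial(DeviceCommon):
    """Generic serial DUT: port settings come from the YAML config."""

    def __init__(self, port: str, baudrate: int = 115200):
        self.port = port
        self.baudrate = baudrate

    def run_cmd(self, cmd: str) -> str:
        # A real implementation would write to the UART and read back;
        # here we just echo what would be sent, for illustration.
        return f"[{self.port}@{self.baudrate}] {cmd}"

class DeviceEspressif(DeviceSerial):
    """Vendor-specific additions on top of the generic serial device."""

    def enter_download_mode(self) -> str:
        # Vendor-specific: toggle boot strapping pins, etc.
        return "download mode"
```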
> >>
> >> This week I added more qemu targets (armv7a, armv7r, armv8a, riscv-64).
> >> I also prepared the code and configuration files to enable the use of
> >> several DUTs in test cases in the future (e.g. several devices
> >> communicating with each other).
> >>
> >> Tue, Nov 4, 2025 at 3:43 AM Filipe Cavalcanti
> >> <[email protected]> wrote:
> >>
> >>> Thanks so much for sharing this with the community, I think it is a
> >>> massive contribution.
> >>>
> >>> I really liked the test cases; I think the coverage is good and it
> >>> makes it easy to add new tests.
> >>>
> >>> Right off the bat, I suggest that those not familiar with Pytest check
> >>> some examples and get a good grasp of fixtures, scopes and
> >>> parametrization.
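For readers new to pytest, the three features named above fit in a few lines. This is plain generic pytest, not NTFC-specific code; the fixture and test names are invented for illustration:

```python
# Quick illustration of fixtures, scopes and parametrization in pytest.
import pytest

@pytest.fixture(scope="module")
def board():
    # scope="module": set up once per test file (think: boot the DUT once
    # and reuse it for every test in the file).
    return {"name": "sim"}

@pytest.mark.parametrize("cmd,expected", [
    ("echo hi", "hi"),
    ("echo ok", "ok"),
])
def test_echo(board, cmd, expected):
    # Stand-in for sending `cmd` to the target and checking its output;
    # pytest runs this once per (cmd, expected) pair.
    assert cmd.split(" ", 1)[1] == expected
```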
> >>>
> >>> From what I was able to study of NTFC, there is a generic template
> >>> (DeviceCommon) which ends up as DeviceQemu and DeviceSim. Those are
> >>> made available as a product, which is actually the device under test
> >>> (DUT).
> >>>
> >>> How could we approach device specific functionalities?
> >>> We have a DeviceQemu and DeviceSim, so we would need a generic
> >>> DeviceSerial to deal with UART: custom baud, stop bits, etc (not that
> >>> difficult). Then, maybe we need a vendor specific class so we can
> >>> operate properly. Take Espressif devices as example: we would need a
> >>> device or vendor specific class (DeviceEspressif) that naturally
> >>> inherits the DeviceSerial but would support entering download mode,
> >>> rebooting, parsing the initial boot logs, etc. This would allow the
> >>> Product class to operate properly when the target is from Espressif.
> >>> Same could be done for other vendors if needed, of course.
> >>>
> >>> Filipe
> >>> ________________________________
> >>> From: raiden00pl <[email protected]>
> >>> Sent: Monday, November 3, 2025 9:41 AM
> >>> To: [email protected] <[email protected]>
> >>> Subject: Re: Automated Testing Framework for NuttX
> >>>
> >>> [External: This email originated outside Espressif]
> >>>
> >>> > I think it should be important to have some documentation explaining
> >>> > how to add new test cases, how to add a new board (actually there is
> >>> > no real board supported yet), what nuttx-ntfc is and its goal. Or
> >>> > better yet: what it can do and what it can't.
> >>>
> >>> That's right, the documentation will be gradually updated. Since I
> >>> prefer looking at code rather than documentation, I need to gather
> >>> some feedback on what is not clear and needs more details in the
> >>> docs :)
> >>>
> >>> > Also the "for Community" in the name passes an idea that someone just
> >>> > created it for internal use and decided to give a different version
> >>> > "for Community"; maybe it is better to define a new meaning for the
> >>> > C: "for Compatibility", "for Completeness", etc.
> >>>
> >>> The name can still be changed as long as the project is in my
> >>> repositories. We can even use NTF (NuttX Testing Framework), but this
> >>> name is too close to "NFT", which can be confusing.
> >>>
> >>> > While I'm writing this email, I saw these FAILED messages, but no
> >>> > indication of what caused the failure:
> >>>
> >>> By default, all tests are executed and a more detailed log is
> >>> presented at the end of the run. You can also look at the console log
> >>> in `results/<test_date>/<test_case_name>.txt`. With the `--exitonfail`
> >>> flag you can abort tests on the first failure.
> >>>
> >>> > external/nuttx-testing/arch/timer/test_arch_timer_integration.py::test_timerjitter_integration
> >>> > FAILED [  4%]
> >>> > external/nuttx-testing/driver/framebuffer/test_driver_framebuffer_integration.py::TestDriverFramebuffer::test_driver_framebuffer_black
> >>> > FAILED [  4%]
> >>> > external/nuttx-testing/driver/framebuffer/test_driver_framebuffer_integration.py::TestDriverFramebuffer::test_driver_framebuffer_white
> >>>
> >>> This is interesting because these tests pass on my host. You can see
> >>> what's wrong in the `results/` console log.
> >>>
> >>> > Is there some way to improve the test speed? It is running for more
> >>> > than 40min and still at 4%. Some tests like these that failed are
> >>> > taking more than 7min to run, and my laptop is not a slow machine:
> >>> > Dell XPS i7-11800H. So I think on an old PC it will be even slower.
> >>> > Maybe it could be something in my Ubuntu setup; did you notice
> >>> > something similar in your setup? How much time does it require to
> >>> > finish the tests on SIM? (I didn't test qemu, but I suppose it will
> >>> > be slower.)
> >>>
> >>> The length of test execution depends directly on the length of command
> >>> execution on NuttX. It's impossible to speed up cases where a command
> >>> executed on NuttX takes a long time. However, for tests executed on
> >>> the host (sim, qemu), we can try running tests in parallel on multiple
> >>> cores. This isn't currently supported and will be difficult to
> >>> implement, but it should be possible. Unfortunately, the default
> >>> pytest plugin for parallel test execution (pytest-xdist) won't work
> >>> for this.
> >>>
> >>> Ultimately, running all tests should be infrequent (possibly once a
> >>> day) because it's a costly process. For CI, test cases should be
> >>> selected depending on the area of change in the pull request. We can
> >>> use GitHub labels for this or add a function to select test cases
> >>> depending on the `git diff` passed to NTFC.
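The `git diff`-based selection could be sketched as below. The mapping table and paths are invented purely for illustration; NTFC does not necessarily ship such a table:

```python
# Hypothetical sketch: map files changed in a PR (from `git diff
# --name-only`) to test directories that should be run.
def select_testpaths(changed_files):
    # Invented mapping from source prefixes to test trees:
    mapping = {
        "arch/": "external/nuttx-testing/arch",
        "drivers/": "external/nuttx-testing/driver",
    }
    selected = set()
    for f in changed_files:
        for prefix, testpath in mapping.items():
            if f.startswith(prefix):
                selected.add(testpath)
    return selected
```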
> >>>
> >>> With the `--testpath` option you can choose exactly which tests to
> >>> run, e.g. `--testpath external/nuttx-testing/arch/atomic` will only
> >>> run arch/atomic test cases.
> >>>
> >>> For the attached SIM configuration, all test cases on my machine take
> >>> around 1h40m.
> >>>
> >>> > I noticed that during the test "python" is using 100% of the CPU
> >>> > (sometimes it drops to 99.7%).
> >>>
> >>> This is normal for some test cases on SIM. Some programs running on
> >>> NuttX take up almost 100% of the core load.
> >>>
> >>>
> >>>
> >>> Sun, Nov 2, 2025 at 9:33 PM Alan C. Assis <[email protected]> wrote:
> >>>
> >>> > Hi Mateusz,
> >>> > Impressive work! Kudos!!!
> >>> >
> >>> > I followed the steps and it compiled and tested (in fact, it is
> >>> > still running at 4% right now).
> >>> >
> >>> > The installation steps you wrote are easy to follow and work
> >>> > "out-of-the-box".
> >>> >
> >>> > I think it should be important to have some documentation explaining
> >>> > how to add new test cases, how to add a new board (actually there is
> >>> > no real board supported yet), what nuttx-ntfc is and its goal. Or
> >>> > better yet: what it can do and what it can't.
> >>> >
> >>> > Also the "for Community" in the name passes an idea that someone just
> >>> > created it for internal use and decided to give a different version
> >>> > "for Community"; maybe it is better to define a new meaning for the
> >>> > C: "for Compatibility", "for Completeness", etc.
> >>> >
> >>> > While I'm writing this email, I saw these FAILED messages, but no
> >>> > indication of what caused the failure:
> >>> >
> >>> > external/nuttx-testing/arch/space/test_space_integration.py::test_df
> >>> > PASSED [  4%]
> >>> > external/nuttx-testing/arch/timer/test_arch_timer_integration.py::test_timerjitter_integration
> >>> > FAILED [  4%]
> >>> > external/nuttx-testing/driver/framebuffer/test_driver_framebuffer_integration.py::TestDriverFramebuffer::test_driver_framebuffer_black
> >>> > FAILED [  4%]
> >>> > external/nuttx-testing/driver/framebuffer/test_driver_framebuffer_integration.py::TestDriverFramebuffer::test_driver_framebuffer_white
> >>> >
> >>> > Is there some way to improve the test speed? It is running for more
> >>> > than 40min and still at 4%. Some tests like these that failed are
> >>> > taking more than 7min to run, and my laptop is not a slow machine:
> >>> > Dell XPS i7-11800H. So I think on an old PC it will be even slower.
> >>> > Maybe it could be something in my Ubuntu setup; did you notice
> >>> > something similar in your setup? How much time does it require to
> >>> > finish the tests on SIM? (I didn't test qemu, but I suppose it will
> >>> > be slower.)
> >>> >
> >>> > I noticed that during the test "python" is using 100% of the CPU
> >>> > (sometimes it drops to 99.7%).
> >>> >
> >>> > These things are minor issues. I think you created something really
> >>> > powerful that will help improve NuttX quality to a level we never
> >>> > saw before.
> >>> >
> >>> > BR,
> >>> >
> >>> > Alan
> >>> >
> >>> > On Sun, Nov 2, 2025 at 2:51 PM raiden00pl <[email protected]>
> >>> wrote:
> >>> >
> >>> > > Hi,
> >>> > > here are the repositories if anyone would like to give them a try:
> >>> > >
> >>> > > - Test runner: https://github.com/szafonimateusz-mi/nuttx-ntfc
> >>> > > - Test cases: https://github.com/szafonimateusz-mi/nuttx-testing
> >>> > >
> >>> > > A quick guide on how to run the tests can be found here:
> >>> > > https://github.com/szafonimateusz-mi/nuttx-vtfc/blob/main/docs/quickstart.rst
> >>> > >
> >>> > > The easiest way to get started on a Linux host is by using the
> >>> > > simulator or qemu-intel64. I'll add examples for other QEMU
> >>> > > targets later.
> >>> > >
> >>> > > There's still a lot of work to be done, but it's initially working
> >>> > > and presents the general idea of the tool. Any feedback or ideas
> >>> > > are very welcome :)
> >>> > >
> >>> > > Thu, Oct 23, 2025 at 1:33 PM Filipe Cavalcanti
> >>> > > <[email protected]> wrote:
> >>> > >
> >>> > > > This seems very good.
> >>> > > >
> >>> > > > We also have a lot of testing internally at Espressif and we
> >>> > > > would be willing to share them. Many of them simply cover basic
> >>> > > > use of defconfigs and some cover more complex functionality such
> >>> > > > as MCUboot, flash encryption, file system, etc.
> >>> > > >
> >>> > > > Also, we use the pytest-embedded plugin for all of our tests,
> >>> > > > which allows us to communicate with serial devices and even QEMU.
> >>> > > > I have created the pytest-embedded-nuttx plugin that adds support
> >>> > > > for NuttX by parsing the NuttShell (either serial or QEMU).
> >>> > > >
> >>> > > > I think it is important we have the simulator and QEMU support,
> >>> > > > since those don't need hardware, but we should have a way of
> >>> > > > adding outside runners so we can test on actual devices.
> >>> > > >
> >>> > > > Looking forward to seeing the Python package and the test cases!
> >>> > > > ________________________________
> >>> > > > From: Nathan Hartman <[email protected]>
> >>> > > > Sent: Wednesday, October 22, 2025 10:55 PM
> >>> > > > To: [email protected] <[email protected]>
> >>> > > > Subject: Re: Automated Testing Framework for NuttX
> >>> > > >
> >>> > > > [External: This email originated outside Espressif]
> >>> > > >
> >>> > > > Anything we can do to improve the testing coverage of NuttX is a
> >>> > > > good thing. I am not familiar with pytest etc., but conceptually
> >>> > > > the description sounds good. It also sounds like the test cases
> >>> > > > can be extended over time.
> >>> > > >
> >>> > > > Creating additional repositories is easy. The community just
> >>> > > > needs to decide what repositories it needs, and they can be
> >>> > > > created.
> >>> > > >
> >>> > > > Agreed, we need a cool name for this! I haven't thought of a
> >>> > > > good one yet, but let's solicit ideas from the community!
> >>> > > >
> >>> > > > Cheers,
> >>> > > > Nathan
> >>> > > >
> >>> > > > On Wed, Oct 22, 2025 at 8:24 AM raiden00pl <[email protected]>
> >>> > > > wrote:
> >>> > > >
> >>> > > > > Hi everyone,
> >>> > > > >
> >>> > > > > Xiaomi would like to contribute to the community an automated
> >>> > > > > testing tool I recently wrote, along with the dedicated NuttX
> >>> > > > > test cases they use in their CI.
> >>> > > > >
> >>> > > > > The main idea is to provide a universal tool for automated
> >>> > > > > testing that can be used by the NuttX project and its users.
> >>> > > > > It could be used in upstream CI as well as in the NXDART
> >>> > > > > project. It's a similar concept to `nuttx/tools/ci/testrun`,
> >>> > > > > but offers more advanced functionality.
> >>> > > > >
> >>> > > > > The tool is a standalone Python module that runs pytest
> >>> > > > > underneath and extends its functionality with custom pytest
> >>> > > > > plugins. This way, we can completely separate the test cases
> >>> > > > > from the logic that executes them, which is not standard
> >>> > > > > pytest usage. This approach provides more control over what
> >>> > > > > the tool can do, but integrating with pytest requires some
> >>> > > > > tricks.
> >>> > > > >
> >>> > > > > Test cases are written as regular pytest tests. In short, they
> >>> > > > > execute commands on a NuttX target and check the returned
> >>> > > > > data. Currently, SIM and QEMU targets are supported. In the
> >>> > > > > future, I plan to add serial port support so that real
> >>> > > > > hardware testing can be done. Ideally, the tool would handle
> >>> > > > > the entire process of building a NuttX image, flashing it onto
> >>> > > > > hardware, and running tests (like Twister for Zephyr).
> >>> > > > >
> >>> > > > > The test cases are based on programs available in nuttx-apps
> >>> > > > > (some tools aren't upstream yet). Test cases from Xiaomi that
> >>> > > > > don't fit well with NuttX upstream will go into the OpenVela
> >>> > > > > project. Users and companies interested in NuttX will be able
> >>> > > > > to create their own test packages and use the same tool to run
> >>> > > > > and manage them. I think this aligns quite well with the idea
> >>> > > > > of distributed tests for NuttX (NXDART).
> >>> > > > >
> >>> > > > > The tool can generate reports and save them locally, including
> >>> > > > > test results and console logs for easier debugging. In the
> >>> > > > > future, we could add options to compare various OS metrics
> >>> > > > > between tests (image size, performance, etc.).
> >>> > > > >
> >>> > > > > The tool is already functional and can replace the current CI
> >>> > > > > tool `nuttx/tools/ci/testrun`. More features will be added
> >>> > > > > over time; I have a lot of ideas here.
> >>> > > > >
> >>> > > > > After this short introduction, I have a few questions for the
> >>> > > > > community:
> >>> > > > >
> >>> > > > > 1. Is the community interested in adopting this new testing
> >>> > > > >    tool and test cases for NuttX? Xiaomi has agreed to donate
> >>> > > > >    the code to the project.
> >>> > > > >
> >>> > > > > 2. Inside Xiaomi, the project is called VTFC (Vela Testing
> >>> > > > >    Framework for Community). The name VTFC is a bit cryptic
> >>> > > > >    and refers specifically to Vela, but Xiaomi is open to
> >>> > > > >    renaming it. Some alternative suggestions are NTFC (NuttX
> >>> > > > >    Testing Framework for Community) or NTF (NuttX Testing
> >>> > > > >    Framework). If anyone has better ideas, please let us know.
> >>> > > > >    As we all know, a cool project name is important :)
> >>> > > > >
> >>> > > > > 3. I think we'd need two separate repositories for this:
> >>> > > > >
> >>> > > > >    - one for the testing tool that runs the test cases and
> >>> > > > >      does the rest of the magic (VTFC),
> >>> > > > >    - and another for test cases dedicated to NuttX upstream.
> >>> > > > >
> >>> > > > >    For the second one, we could use
> >>> > > > >    https://github.com/apache/nuttx-testing, which is currently
> >>> > > > >    empty, but we'd still need one more repo.
> >>> > > > >
> >>> > > > >    Alternatively, we could use the https://github.com/Nuttx
> >>> > > > >    organization, which would make repository management easier
> >>> > > > >    (no Apache infra), but then the code will not officially be
> >>> > > > >    under Apache and I don't know what consequences that has.
> >>> > > > >
> >>> > > > >    Separation of the test tool and test cases from the kernel
> >>> > > > >    code is important because it allows for better QA of the
> >>> > > > >    Python code and automation. There will be quite a lot of
> >>> > > > >    Python coding involved, and mixing it with kernel code
> >>> > > > >    doesn't feel good.
> >>> > > > >
> >>> > > > > Let me know what you think. Any suggestions are welcome :)
> >>> > > > >
> >>> > > > > Regards
> >>> > > > >
> >>> > > >
> >>> > >
> >>> >
> >>>
> >>
>
