On Mon, Jul 15, 2013 at 1:37 PM, Rempel, Cynthia
<cynt6...@vandals.uidaho.edu> wrote:
3. We have some sort of linking and running criteria for rtems toolchains 
without rtems kernels, or we don't have rtems toolchains for rtems kernels
Can you explain this #3? I don't quite get it.
Users build with the RTEMS toolchains (e.g. xxx-rtems4.11-gcc), but they don't use the rtems
kernel for their "bare metal" embedded work... don't know why...

The reason I know of for non-rtems-kernel compiles and links is configure-type tests: the configure process may run a compile and link to check whether a symbol is in a library. It is not the typical use case for an RTEMS user. Having said this, u-boot and other boot monitors may use the RTEMS tool sets to build themselves; I have a recent u-boot for the ARM-based Zynq device building with a recent RSB RTEMS ARM tool set.
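A minimal sketch of what such a configure-style link test looks like (the compiler default and the symbol probed are illustrative assumptions; a real cross build would set CC to something like arm-rtems4.11-gcc):

```shell
# Hedged sketch, not the actual autoconf macro: compile and link a tiny
# program that references a symbol, to see whether the bare toolchain
# can resolve it -- no RTEMS kernel involved.
CC=${CC:-cc}   # illustrative default; a cross build would override this
cat > conftest.c <<'EOF'
extern int printf(const char *, ...);
int main(void) { printf(""); return 0; }
EOF
if $CC conftest.c -o conftest >/dev/null 2>&1; then
  link_ok=yes
else
  link_ok=no
fi
echo "symbol link test: $link_ok"
rm -f conftest.c conftest
```

This is exactly the kind of probe that can succeed or fail on a toolchain independently of whether an RTEMS kernel is ever linked in.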

but if we support users of these toolchains we should test the toolchains...

Yes we should. I do not think we are in a position to automate this; however, I do think someone needs to walk a tool chain build through the gcc testsuite. I am happy to help support someone doing this with the RSB tool chain for archs that support it. We just need to settle on the specific tool set configuration for 4.11. The RSB is now using newlib 14-Jul-2013 plus Sebastian's cxa_atexit and PPC SDATA patches. The pthread.h newlib header issue is currently unresolved.

FYI, I have the RSB automatically building autoconf and automake from source, so users on hosts without recent autoconf and automake packages should no longer see the annoying wrong-autoconf error message when RTEMS is built after the compiler has been built.

6. (Process) we have a build-bot script running that checks (and rejects) each 
rtems patch for compile / link errors (that checks every BSP)
7. (Process) we have a script that builds and tests the trunk of binutils, gcc, 
newlib, gdb which in turn builds all the rtems tests (referenced against the 
rtems trunk revision as of the time of the gcc release) and posts the results 
to gcc-testresults (as binutils, newlib, and gdb don't have a test results 
email).
These, #6 and #7, will have to wait, but they are not specifically
release-related. At the least, #6 won't be feasible until after we
release 4.11.
That sucks that we're stuck manually checking each patch...

FWIW, there will most likely be pressure to add features up until the release... in which case, 
what will probably happen is a "feature freeze" -- which will be unpopular and lead to 
"coding opportunity loss" -- and we'll be putting out a release that doesn't compile/link 
on some targets...

Yeah, it is not optimal, but this is what we have. Building rtems with buildbot using the current build system is not practical, and changing the build system for 4.11 is also not practical. The overriding need is for 4.11 to be released.

At one point I was able to build the toolsets using the RSB (before it
built the kernel) on gcc-20, and it successfully found the targets with the
errors. It might be as simple as taking such a script and using the number of
lines of compiler errors to determine whether there is a problem with the patch...
(It could be a git script.)

I suspect we will need to manage this manually for 4.11. If we have the computing resources available, maybe a branch-specific git script could be used; currently we do not have the resources. I should point out the m68k build of all BSPs is a massive disk user. I suspect an objcopy to a binary target-specific file is picking up some massive address offset, creating gigabyte images.

We also need to understand that building all BSPs results in over 1200 header files per BSP being preinstalled, and we now have over 140 BSPs, so the number of header files installed has reached a critical and silly level given the 1200 or so original files are sitting in the cloned git repo. This is a topic for post-4.11 and maybe RTEMS 5.0.
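The scale of the preinstall problem follows directly from the two approximate figures above:

```shell
# Back-of-the-envelope only, using the rough counts from the thread:
# ~1200 preinstalled headers per BSP, ~140 BSPs.
headers_per_bsp=1200
bsps=140
echo $((headers_per_bsp * bsps))   # on the order of 168000 header copies
```

Roughly 168,000 copies of files that already exist once in the cloned git repo.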

(Of course, we could avoid this problem by using a script to:
a. build rtems on all targets -- RSB is very useful for this,
b. put the output into a file,
c. filter the output file for compiler and linker errors into another file,
d. if the number of lines in the filtered file exceeds a specified number --
reject the patch.)
This of course won't catch all the errors, but likely many of them...
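Steps b through d of the proposal could be sketched roughly as below. This is only an illustration, not an actual gating script: the error patterns, the threshold, and the canned log standing in for real build output are all assumptions.

```shell
# Hedged sketch of the proposed gate: count compiler/linker error lines
# in a build log and reject the patch if the count exceeds a threshold.
check_log() {
  # $1 = build log file, $2 = maximum tolerated error lines
  count=$(grep -cE 'error:|undefined reference' "$1")
  if [ "$count" -gt "$2" ]; then
    echo "reject ($count error lines)"
    return 1
  fi
  echo "accept"
}

# Demonstration against a canned log standing in for real build output.
printf 'cc -c a.c\na.c:3: error: foo undeclared\nld: undefined reference to bar\n' > build.log
check_log build.log 0 || true   # prints "reject (2 error lines)"
rm -f build.log
```

A real version would feed it the captured output of an RSB/BSP build (step a) and, as noted above, could be hung off a git hook if the resources existed.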

But if hooking such a script into git isn't feasible, it's not... that sucks.

See above, and yeah I agree it sucks.

Chris
_______________________________________________
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
