On Tue, Jan 14, 2020 at 2:01 PM Gregory Nutt <spudan...@gmail.com> wrote:
>
> > Haitao is preparing the jenkins job to run all possible config, but the config number is huge; we need to donate several powerful servers to ensure the precheck can finish in half an hour.
>
> I am repeating myself, but it is worth remembering.
>
> There might not be any configuration in the repository that builds the code that is changed by a PR! The code could be totally broken and could fail the first time you compile it, but if there is no configuration in the repository that has the configuration settings needed to build all options for the changed code, you will never know it.
>
> It is for this reason that I have been arguing that we need a smarter test than just building a set of configurations. The smaller the set, the more likely that they will not build the affected code at all! Then the build test is completely useless.
>
> A very typical example is when someone develops a new driver for some "FooBar" hardware. It is enabled by CONFIG_DRIVER_FOOBAR. But there is no defconfig file under boards that includes CONFIG_DRIVER_FOOBAR=y. As a result, any kind of canned build test is a waste of time and will not prove or disprove the syntactic correctness of the file -- since it never builds it.
>
> I have suggested that we ask the contributor to provide a list of every configuration option that needs to be set/unset to verify all combinations of the changed code. Then we use those configuration options to select all relevant configurations.
>
> Perhaps we could grep the modified files to get those options?
>
> In the case that there is no configuration in the repository that selects those configuration options, we would need to ask the contributor to provide a test configuration.
>
> Anything that is random, hit or miss, would most likely be a waste of time.

I don't know how they do it now, but back in the day I remember that the FreeBSD project had a configuration that was not valid for running, but which built ALL options (including, I presume, options that should be mutually exclusive), to catch build errors. I think it was called something with LINT in the name. Maybe that's one of the avenues we could consider, in parallel with other build tests of valid configurations.

Speaking of LINT, another avenue might be static analysis, which finds all kinds of common errors. That is imperfect and comes with false negatives, but it is useful nonetheless.

Nathan
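
[Editorial sketch, not part of the original thread: one rough way to act on the "grep the modified files" idea quoted above is to collect the CONFIG_* symbols referenced by the files a PR touches and then report which defconfigs under boards/ enable any of them. Everything below -- the function names, the "git diff --name-only origin/master" invocation, and the boards/ layout assumption -- is illustrative only, not an existing NuttX CI tool.]

#!/usr/bin/env python3
# Sketch: flag whether any in-tree defconfig would actually compile the code a PR changes.
import re
import subprocess
from pathlib import Path

CONFIG_RE = re.compile(r"\bCONFIG_[A-Z0-9_]+")

def changed_files(base="origin/master"):
    """Files modified relative to the assumed base branch, limited to C sources/headers."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f.endswith((".c", ".h"))]

def referenced_options(files):
    """Every CONFIG_* symbol that appears in the changed files."""
    opts = set()
    for f in files:
        path = Path(f)
        if path.exists():  # skip files deleted by the PR
            opts.update(CONFIG_RE.findall(path.read_text(errors="ignore")))
    return opts

def defconfigs_enabling(opts, boards_dir="boards"):
    """Defconfigs that set at least one of the given options to y."""
    hits = []
    for defconfig in Path(boards_dir).rglob("defconfig"):
        text = defconfig.read_text(errors="ignore")
        enabled = set(re.findall(r"^(CONFIG_[A-Z0-9_]+)=y", text, re.M))
        if opts & enabled:
            hits.append(str(defconfig))
    return hits

if __name__ == "__main__":
    configs = defconfigs_enabling(referenced_options(changed_files()))
    if configs:
        print("\n".join(sorted(configs)))
    else:
        print("No defconfig enables the changed options; ask the contributor for a test configuration.")

This over-approximates (any reference to an option counts, whether or not it gates the interesting code paths), but it would at least flag the FooBar-driver case above, where no in-tree configuration builds the new code at all.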