On Wed, 27 Feb 2019 at 18:15, Vladimir Sitnikov <sitnikov.vladi...@gmail.com> wrote:
>
> sebb>I did not ask for an exhaustive list, but some clue as to what you
> sebb>have run so others can repeat the tests on different hosts.
>
> Well, I have already suggested to check if the project builds and/or if it
> loads into IDE, code navigation, autocomplete, etc, and other typical
> developers' tasks.
> It is by far the most significant improvement which Gradle brings.
>
> I could answer questions, however it should "just work".
> I do not intend to write an exhaustive documentation on how Gradle should be
> used.
Again, I am not asking for extensive explanations.
Just list a few Gradle commands that should be tried.

For example, in the Ant case, I would suggest people try:

ant clean
ant package
ant test
ant junit -Dtest.case=xxx
ant printable-docs
etc.

(A rough Gradle equivalent of that list is sketched further down.)

> Gradle should work transparently behind the scenes, so regular
> Google/StackOverflow should answer all the typical questions immediately.
>
> sebb>See below for further technical reasons
>
> I'm sure the questions you list do not qualify as technical justification for
> the veto, however I'm not going to explain that further at the moment.
> My point is:
> A) There's nothing to veto yet
> B) I would prefer spend time on implementing features rather than arguing
> with you
> C) As the build script is ready, you might happen to allow it to be
> committed. If that happens, we all save the time in the first place

Not sure what build script you are referring to.

> D) Only in case you (or someone else) happen to claim a veto, then we would
> have an interesting conversation.
>
> Of course I would appreciate comments (e.g. Graham Russell noted I add
> duplicate lines in .gitignore), however I do not see how I could apply
> opinions like "I think".
>
> sebb> So what is ready for testing?
>
> 1) Building the project

Does it create the jars, or just the classes?

> 2) Loading code to IDE

Which IDEs have you tried?

> 3) Running tests (certain tests fail still)

How many tests are run, and how many fail? Why do they fail?

> 4) Adding more tests, adding features. For instance, we need to refactor
> "batchtests". One can take /gradle branch and develop the test there to see
> how development feels like.

Are these 'more tests' new tests, or do you mean tests which have not been
transferred from the Ant build and need to be created?

> 5) Adding Gradle configuration. For instance, if someone is brave enough
> (s)he can try adding checkstyle or checksums or repeatable builds or maven
> poms or whatever.
> 6) Probably something else

I can think of at least:
Javadocs
Printable docs
Website docs
Staging
Release promotion
Generation of Maven artifacts
Documentation

> sebb> If a test fails
>
> Of course if a test is written in a cryptic way, then nothing could help.
> On the other hand, Gradle produces way better developer experience than the
> current codebase when it comes to testing/debugging/failure analysis.

That's not my experience, but I am used to Ant output.

> I'm sure you wouldn't ask those "is it because" if you had checked Travis
> output at least once:
> https://travis-ci.org/apache/jmeter/builds/499149535#L987

That means nothing to me.

> Running and debugging tests from IDE is much simpler with Gradle.
> For instance, current tests are full of assumptions like "certain commands
> must be executed before even trying to run the test".

That's true of some tests, but most will run independently.

> On the other hand, Gradle runs all the required commands automatically even
> if you checkout a brand-new project and ask it to execute a single specific
> test.

How does Gradle know what the required commands are? Don't you have to tell it?
i.e. Gradle itself does not automatically fix the problem, though it may well
be easier to specify the pre-requisites (see the sketch further down).

> So even if one faces a test failure, it would be much simpler to analyze /
> reproduce.

In which case it should not take long to fix all the current failures.
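For reference, assuming the /gradle branch follows the standard Gradle
conventions (a gradlew wrapper and the usual java plugin task names - I have
not verified this against the branch), the rough equivalents of the Ant list
above would presumably be:

./gradlew clean
./gradlew build                          (compiles, runs the tests, builds the jars)
./gradlew test
./gradlew test --tests 'SomeTestClass'   (run a single test class)
./gradlew javadoc
etc.

Whether things like printable-docs have counterparts yet is exactly the kind
of detail such a list should spell out.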
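On the question of how Gradle knows the required commands: the prerequisites
still have to be declared somewhere, either implicitly by the plugins or
explicitly in the build script; Gradle then runs them in the right order on
its own. A minimal sketch of what I understand that to look like (Kotlin DSL;
the task names are purely illustrative and not taken from the /gradle branch):

plugins {
    java   // provides the standard "test" task
}

// Hypothetical task that prepares the files some batch tests expect.
tasks.register("generateBatchData") {
    doLast {
        // ... create the expected input files here ...
    }
}

tasks.named("test") {
    // Declare the prerequisite; "gradlew test" (or running a single test
    // from the IDE) will then run generateBatchData first, even on a
    // fresh checkout.
    dependsOn("generateBatchData")
}

So the dependency information does have to be written down; the gain is that
once it is declared, a single command triggers the whole chain.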
> sebb>we moved from /jakarta/jmeter to /jmeter when we
> sebb>went TLP and that did not cause any major issues as I recall
>
> Then you really don't have points to claim "file movement is bad"

In that case, the relative locations of files within the project did not change.
Only the top level changed, and developers just had to update the workspace SVN URL.
The workspace layout did not change.

That is very different from the moves you are proposing.
There may well be assumptions in the code about the relative location of files.
Changes to these locations can cause tests to fail in various ways.
It won't always be obvious which failures are due to changes in file locations.

That is one reason why I say that the file moves are best done separately.

> Vladimir