I suppose that, in the period prior to a release, the assorted focus areas could run a subset of their tests where applicable and provide any feedback before the release.
Maybe the testing could also be influenced by what changed in the given release. I still think running the whole thing has value as an Integration / Regression / Quality Check / Acceptance test, but I guess that is coming more from a commercial perspective. Are there any other acceptance test tools, beyond unit tests, that could help at a higher level and automate more of this sort of testing?

Eric

On Mon, Jun 7, 2021 at 7:01 AM Neil C Smith <neilcsm...@apache.org> wrote:
> On Sun, 6 Jun 2021 at 21:34, Eric Bresie <ebre...@gmail.com> wrote:
> > So then no plans for LTS with NetCAT testing until tentatively Feb 2022?
>
> I think "tentatively" being the word! Based on the discussion prior
> to 12.3 that led to lazy consensus on pushing back any LTS, we agreed -
>
> """
> * NetCAT won't be run as originally scheduled after 12.3, but *might*
> happen at or after the 12.6 release.
> * February 2022 release *might* be NetBeans 13.0 (or 13!) and *might* be
> an LTS.
> """
>
> ie. there's a whole lot of things that were deliberately left as still
> to resolve!
>
> Personally I think we need to figure out a way to make NetCAT a more
> fluid, open and ongoing thing, or drop it.
>
> Now we've moved to voted betas, rather than just advertising them,
> maybe we can encourage anyone testing them to pick a NetCAT test of
> interest to try, and report issues?
>
> Best wishes,
>
> Neil

-- 
Eric Bresie
ebre...@gmail.com