Hi, all.

I have a feeling that this thread combines (IMHO) unrelated things; different things may lead to different conclusions.

If replying, please consider splitting your messages to cover just a single issue (+ change the subject). I specifically ask for a favour: let's discuss "priming" separately ;) since it touches my ongoing efforts in the project-support area (yes, I plan to do something with Maven priming). I will do that in my subsequent replies.

Issue #1: Failing when the CLI fails.
----------------------------------------

On 14. 08. 24 23:26, Michael Bien wrote:

  - if the build fails in CLI, it should also fail in your IDE

This is not exactly correct. We want the _build_ in the IDE to fail, and definitely the _run_ to fail, instead of some magic ...

... but I think (!) we would REALLY like to display at least some project structure or information EVEN THOUGH the build is terribly misconfigured, to the point that a mere "gradle tasks" or "mvn dependency:tree" would fail on a bad pom/build.gradle.

This is where the tool does 'extra magic' to recover from errors, to get at least some information, and possibly to process other parts of the project, unlike the CLI, which tends to fail fast on the first problem.
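
To make that concrete, a minimal sketch of the tolerant approach, using the plain maven-model library with non-strict parsing (an illustration only, not the actual NetBeans code):

    import java.io.FileReader;
    import java.io.Reader;
    import java.util.List;
    import org.apache.maven.model.Model;
    import org.apache.maven.model.io.xpp3.MavenXpp3Reader;

    public class TolerantScan {
        public static void main(String[] args) throws Exception {
            // Unlike the fail-fast CLI, inspect every module and keep
            // whatever can be recovered from the ones that still parse.
            for (String pom : List.of("pom.xml", "core/pom.xml", "app/pom.xml")) {
                try (Reader r = new FileReader(pom)) {
                    // strict = false tolerates unknown elements instead
                    // of rejecting the whole file
                    Model m = new MavenXpp3Reader().read(r, false);
                    System.out.println(pom + ": " + m.getArtifactId());
                } catch (Exception broken) {
                    // record the problem, but keep processing the rest of
                    // the project instead of failing the whole load
                    System.err.println(pom + ": unreadable (" + broken + ")");
                }
            }
        }
    }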

Issue #2: Forcing the user to go "our way"
--------------------------------------------

  Windows has process arg-length limitations (additional to the better known
  filepath limitations) [...] So instead of adding magic [...] a much better
  approach would be to detect the situation and tell the user that the build
  is broken. Since often all the user would have to do is to add
  '<longClasspath>true</longClasspath>' to the right plugin. (#7582, ...)
  [...]
  Older versions of the maven wrapper for example had a bug which broke
  execution when certain file paths were used. Instead of adding a workaround
  for this in NB (which would have to be maintained forever and could itself
  cause problems), all NB could do is to notify the user and then run
  'maven wrapper:wrapper' once - that's it!

While I agree this is the _best solution_ and the IDE should:
- warn the user
- offer to fix it for the user
the user may not be in a position to make such a change. He may not own the project, or some process may prevent him from changing it. Yes, the change may be made locally - and that is the right choice if one just wants to play with an ancient project on a local machine, but it is annoying if one has to work with a codebase where 'the right thing' cannot be made to happen.
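
For illustration, a rough sketch of the detect-and-warn approach (the 32767-character figure is Windows' documented CreateProcess command-line limit; the suggested <longClasspath> hint assumes something like exec-maven-plugin, so treat the message text as a placeholder):

    public class CommandLineCheck {
        // documented command-line length limit of Windows' CreateProcess
        static final int WINDOWS_CMDLINE_LIMIT = 32_767;

        static void warnIfTooLong(String commandLine) {
            boolean windows = System.getProperty("os.name", "")
                    .toLowerCase().contains("windows");
            if (windows && commandLine.length() > WINDOWS_CMDLINE_LIMIT) {
                // Tell the user what is wrong and how to fix it, instead of
                // silently applying magic; the warning should be suppressible
                // and must not stop the IDE from trying the build anyway.
                System.err.println("Command line exceeds the Windows limit; "
                        + "consider <longClasspath>true</longClasspath> "
                        + "on the plugin that launches the JVM.");
            }
        }
    }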

So if the user does not follow the good advice, the IDE should not _fail_ - and maybe he should even be allowed to suppress the warning.

The tool should not force a specific way of operation on the user; our tools should help the user even in an imperfect world.

Issue #3: CoS
--------------------------------------------
Simple counter-example: Eclipse and VS Code with the Microsoft Java extension pack have a far greater market share than NetBeans, yet Eclipse (and the VS Code extension pack that is derived from it) uses CoS by default. If a crowd jumps off a cliff, it's worth thinking one more time before following it over the edge...

But still, the concept survives such broad usage. So either the entire crowd using JDT-based tools is incompetent, or it is not the concept itself but rather our implementation that is bad. More probably, there are scenarios that do not fit our working style, but the 'market survey' done by our "competitors" shows that it still fits many.

With Maven and its very bad non-caching, non-incremental behaviour, CoS may be more important. If NetBeans' "run / compile" project action worked at the reactor level rather than on a single project, Maven's behaviour would quickly become a major PITA. But doing a reactor build with --also-make is The Right Thing(tm) after a change.
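
For the record, a sketch of what such a reactor-level action could run, assuming the org.apache.maven.shared:maven-invoker library (the module name and path are made up):

    import java.io.File;
    import java.util.List;
    import org.apache.maven.shared.invoker.DefaultInvocationRequest;
    import org.apache.maven.shared.invoker.DefaultInvoker;
    import org.apache.maven.shared.invoker.InvocationRequest;
    import org.apache.maven.shared.invoker.Invoker;

    public class ReactorBuild {
        public static void main(String[] args) throws Exception {
            InvocationRequest request = new DefaultInvocationRequest();
            request.setBaseDirectory(new File("/path/to/reactor")); // made-up path
            request.setGoals(List.of("compile"));
            // equivalent of "mvn -pl web-app -am compile": build the changed
            // module plus everything in the reactor it depends on
            request.setProjects(List.of("web-app"));
            request.setAlsoMake(true);

            Invoker invoker = new DefaultInvoker();
            invoker.execute(request);
        }
    }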

I am mostly indifferent to CoS these days. I used it frequently back in my "web application" days, when webapps ran exploded and the testing container simply recycled the classloader. I wholeheartedly agree that CoS will always be broken, no matter how good the implementation might be. It may rebuild .classes, but it may not build jars, and definitely not fatjars, docker images, ... so in the end, the final deliverable of the project will not be up to date. Whether that is an issue depends on the developer's exact work style and needs ... and we should not assume that our own current 'style of work' is the best one - or (even worse) the only allowed one.

I think the average developer is not a dumb Excel marketing guy; he can understand the risks and decide what is best for him in his specific environment and business needs.

Issue #4: Priming.
--------------------------------------------
  This leads to more magic during project load. If you open a maven project
  in NB and don't have a ~/.m2/ folder yet, you will notice that it
  automatically downloads and installs plugins and dependencies

I think there's a conceptual misunderstanding of what "priming" should achieve. Forget about the "priming build" - that's just an implementation, and not necessarily a good one.

Priming is here mostly to ensure that Code Completion can work, that the IDE can resolve symbols and underline errors (without actually compiling through the build system, right?), etc. In the case of Maven, priming also makes sure that a (non-local) parent POM gets to the machine. Without the parent POM, many properties (e.g. versions) are not defined and plugins are not bound to phases. As a result, the IDE's view of what the project consists of (dependencies) and what technologies are used (plugins invoked in the lifecycle) is completely broken.
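
A small sketch of that consequence, again with the plain maven-model library (illustrative only):

    import java.io.FileReader;
    import java.io.Reader;
    import org.apache.maven.model.Dependency;
    import org.apache.maven.model.Model;
    import org.apache.maven.model.io.xpp3.MavenXpp3Reader;

    public class RawModelGaps {
        public static void main(String[] args) throws Exception {
            try (Reader r = new FileReader("pom.xml")) {
                Model raw = new MavenXpp3Reader().read(r, false);
                for (Dependency d : raw.getDependencies()) {
                    // with the version managed by an unavailable parent,
                    // getVersion() is null or an uninterpolated "${...}"
                    // placeholder - useless for the IDE's project view
                    System.out.println(d.getManagementKey()
                            + " -> " + d.getVersion());
                }
            }
        }
    }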

The misconception is that priming HAS TO be done through the build tool. Priming is NOT for the build tool; priming is here for the IDE. Whatever the "priming" implementation does, it must produce results that the IDE can consume.

If the Maven execution produces "something" in a way the IDE's code is not prepared to consume (such as downloading artifacts to different locations) ... the IDE will not get it, even though the machine may be fully set up from Maven's perspective. There are pieces of code that *assume* a certain local repository layout (this is also true for the Gradle support). If the 'Maven outcome' misses these assumptions, the IDE will be broken despite a proper build tool execution. In these parts, we need to do "more IDE magic" by extracting more information from the Maven model, or use Maven's libraries to answer more queries. I sincerely doubt someone in Maven will implement this binding, as it is of no value for Maven.

For Maven, priming was historically implemented as a "Project Problem"-resolving action executing the "package" or "install" goals. But I think this is the wrong way: for example, we do not _need_ to install build plugins. The IDE will not use them -- it will just load the Maven configuration model. No build plugins (and their dependencies) -> less download -> less time. Doing a build (as it is done now) actually compiles the sources. But the sources are often broken, so the compilation fails silently. We use specific flags to "compile" as much of the reactor as possible ... but we do not _need_ source compilation for IDE operation. All that is needed is to "somehow" get the referenced dependencies to the local machine. With a Maven build, it is hard to tell whether 'priming' actually failed, or whether it was "the other stuff" during the install/package execution that failed.

The notable exception is the scenario where a plugin generates some resources which are then used in compilation. That seems less common than I would expect - but it has to be supported, even if it means going, for example, through the "dependency:go-offline" path.
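
A sketch of what such a "download only, compile nothing" priming could execute (maven-invoker again, path made up; whether dependency:go-offline alone is sufficient is exactly the question below):

    import java.io.File;
    import java.util.List;
    import org.apache.maven.shared.invoker.DefaultInvocationRequest;
    import org.apache.maven.shared.invoker.DefaultInvoker;
    import org.apache.maven.shared.invoker.InvocationRequest;
    import org.apache.maven.shared.invoker.InvocationResult;

    public class GoOfflinePriming {
        public static void main(String[] args) throws Exception {
            InvocationRequest request = new DefaultInvocationRequest();
            request.setBaseDirectory(new File("/path/to/checkout")); // made-up path
            // resolve dependencies and plugins into the local repository
            // without compiling anything - no sources are touched, so a
            // broken source tree cannot make priming "fail"
            request.setGoals(List.of("dependency:go-offline"));

            InvocationResult result = new DefaultInvoker().execute(request);
            if (result.getExitCode() != 0) {
                // here a failure really means "dependencies could not be
                // resolved", not "some unrelated build step broke"
                System.err.println("Priming failed, exit code " + result.getExitCode());
            }
        }
    }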

Even downloading dependencies may not be enough: as Maven relies on reactor project outcomes being "installed", the model building may fail on a never-compiled (and never-installed) part of the project. I don't remember exactly, but I tried to use go-offline about 1.5 years ago, and my attempt to ditch the current "priming build" failed on something like that. Surely solvable by 'more magic' during the (IDE!) project model build - I ran out of time in that attempt, and I need to retry.

Running a (regular) build by the user is not a good option if it is the only way: the sources may be broken, and the build will fail in the middle ... leaving the dependencies half-initialized (up to the failed module). Regular (Maven) builds do not use --fail-at-end.

I insist that the IDE must be able to open, and operate well on, a fresh checkout on a pristine machine - even though the [java] sources (!) are not compilable at that moment. And that means: do not fail when the CLI fails.

The Gradle support downloads its plugins (it cannot configure the project without the plugin beans, actually) - but it does not execute build tasks; it just uses the *build system* to download and build the configuration model - which is then dumped and read by the IDE.
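
For comparison, loading the configuration model through the public Gradle Tooling API looks roughly like this (a minimal sketch; the real NetBeans support layers its own model building on top of this):

    import java.io.File;
    import org.gradle.tooling.GradleConnector;
    import org.gradle.tooling.ProjectConnection;
    import org.gradle.tooling.model.GradleProject;

    public class GradleModelLoad {
        public static void main(String[] args) {
            ProjectConnection connection = GradleConnector.newConnector()
                    .forProjectDirectory(new File("/path/to/project")) // made-up path
                    .connect();
            try {
                // this runs the configuration phase (and downloads plugins,
                // because configuration needs them), but executes no tasks
                GradleProject project = connection.getModel(GradleProject.class);
                System.out.println("Loaded model of " + project.getName());
            } finally {
                connection.close();
            }
        }
    }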

To get more technical: even having 'an agent' in the build system's process or daemon (as the Gradle support does now) is not an 'easy and clean' solution. The agent has to vary between build system major versions (the Gradle agent now has conditional parts, and even version-specific reflective access, as the Gradle API changes rapidly and even incompatibly); that is comparable to a modularized "embedder". Maven's internal processing allows one to get a huge amount of positional information (Gradle cannot compare :) as Gradle does not retain much AST information internally); in a computed MavenProject, one can find the declaration origins in the merged configuration -- this is hard to reproduce, as it would mean replicating Maven's own logic. An agent would have to serve this information.
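
To show the kind of maintenance this implies, a purely hypothetical fragment - the reflected method name is made up, but the version-switching pattern is what such agents end up doing:

    import java.lang.reflect.Method;

    public class VersionAwareAgent {
        // Hypothetical: call an API that only exists in newer build system
        // versions; on older daemons, fall back to a legacy code path.
        static Object declaredDependencies(Object project, String gradleVersion) {
            try {
                int major = Integer.parseInt(gradleVersion.split("\\.")[0]);
                if (major >= 8) {
                    // made-up method name, for illustration only
                    Method m = project.getClass().getMethod("getDeclaredDependencies");
                    return m.invoke(project);
                }
                return legacyExtract(project);
            } catch (ReflectiveOperationException ex) {
                // every major release can break this - the agent has to be
                // maintained against each version, much like a modularized
                // "embedder"
                return null;
            }
        }

        static Object legacyExtract(Object project) {
            return null; // placeholder for the pre-8.x path
        }
    }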

Using dependency:go-offline by itself achieves only very little in terms of multiple-Maven-version support or reliability of the IDE's project model.

-Svata.

