Niclas Hedhman wrote:
> The idea is that we define a URL scheme along;
>
> pakkage:<java-package-name>?<query>
>
> where query is something along the lines of what can be written in
> the Import-Package of OSGi declarations, and that version is
> understood so that the "newest" matching version will be retrieved.
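For concreteness, a URI of that shape could be split into its parts with a few lines of Java. This is only a sketch of the proposal above: the `pakkage:` scheme and the query syntax are nothing but the suggestion quoted here, and the class name is made up.

```java
// Minimal sketch: splitting a "pakkage:" URI of the form
// pakkage:<java-package-name>?<query> into its parts.
// The scheme name and query syntax come from the proposal quoted
// above; nothing here is a finalized format.
public final class PakkageUri {
    public final String packageName;
    public final String query; // e.g. "version=2.0", may be null

    private PakkageUri(String packageName, String query) {
        this.packageName = packageName;
        this.query = query;
    }

    public static PakkageUri parse(String uri) {
        String prefix = "pakkage:";
        if (!uri.startsWith(prefix)) {
            throw new IllegalArgumentException("not a pakkage: URI: " + uri);
        }
        String rest = uri.substring(prefix.length());
        int q = rest.indexOf('?');
        return q < 0
            ? new PakkageUri(rest, null)
            : new PakkageUri(rest.substring(0, q), rest.substring(q + 1));
    }
}
```

(Note that java.net.URI treats such a URI as opaque, so its getQuery() would return null; that is why the sketch splits the string by hand.)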
URL? If anything, this would be a case for a URI, IMHO. A URL points to some
resource. A package, however, is not a resource; it's a more abstract thing.
An artifact would be a resource, but not a package. The problem here is that
an artifact might (and probably will) contain more packages than just the
one you referred to, and, in rare cases, a package might be spread across
several artifacts. Therefore, IMO, URLs/URIs are not the right tool to refer
to packages.

[...]
> just like we did in Transit before.
>
> This would allow us to construct a complete ClassLoader by providing
> the package names and versions needed, and not worry about where the
> packages are located.

Well, I've recently written a ClassLoader -- one that relied completely on a
VFS, which I also wrote. I can tell you one thing about that: a ClassLoader
gets literally overwhelmed with requests for classes and resources. This is
something that is really performance-critical. I first profiled and
optimized my VFS implementation until I couldn't find any hot spots there
anymore, and it was still too slow. Then I simply rewrote the ClassLoader:
instead of using (VFS-)resources directly, it now extends URLClassLoader and
uses resource.toURI().toURL() to provide the locations. Things are more than
twice as fast that way. The ClassLoader isn't the right place to do fancy
things, and because of URLClassLoader, you'll have to consider this in
URLStreamHandlers, too. The good thing about that experiment is that I
located and eliminated some performance leaks in my VFS implementation --
although in normal use, they would never have been actual hot spots.

> (The extreme is of course to create a new
> ClassLoader that fetches the classes upon findClass(), but since
> meta-data is missing here, it does not seem viable.)

See above: Don't do that.

> 1. Maven builds upon a 'project' which has one or more source and
> resource directories and only a single produced artifact.
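(Back to the ClassLoader story above for a moment: the rewrite described there can be sketched roughly like this. `VfsUrlClassLoader` is a made-up name, and java.io.File stands in for whatever resource type the VFS actually exposes -- the real code would call toURI().toURL() on the VFS object itself.)

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

// Sketch of the approach described above: instead of serving bytes from
// VFS resources inside findClass(), collect the resource locations once,
// convert them with toURI().toURL(), and let URLClassLoader do the
// heavily optimized loading work.
public final class VfsUrlClassLoader extends URLClassLoader {

    public VfsUrlClassLoader(List<File> resources, ClassLoader parent) {
        super(toUrls(resources), parent);
    }

    // File stands in for the VFS resource type; exposed here only so the
    // conversion can be exercised on its own.
    public static URL[] toUrls(List<File> resources) {
        URL[] urls = new URL[resources.size()];
        for (int i = 0; i < resources.size(); i++) {
            try {
                urls[i] = resources.get(i).toURI().toURL();
            } catch (MalformedURLException e) {
                throw new IllegalArgumentException(e);
            }
        }
        return urls;
    }
}
```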
> There are hacks to get around this, some better than others.

This one isn't bad, IMHO. I tend to rename 'project' to 'unit' or something
similar in Loom, just as, in the rules, there is already the CompilationUnit
for compiling Java sources. I think a build unit which produces a single
artifact as its result is a good thing. This doesn't mean that a "single
artifact" is actually "a single file" -- it might as well be a "single
plugin", which might be represented by a directory structure.

Let's look at another problem: I've torn the Bootstrapper out of Loom and
made it a separate project. I intend to use it for other things, too. I
think what I did there was the right approach (just as I think that HiveMind
is the right approach to assemble chunks of code into a whole application),
so I've made it a separate project. The result is now a much more powerful
bootstrapper for plugin-based applications built upon HiveMind. In short, it
adds a new ClassLoader, functionality to access resources other than
classes, and a virtualized detection of what functionality is present to
HiveMind (plus some other utilities). I called that thing "Hiveapp".

Why isn't all that in the Subversion repository? Well, currently it only
compiles and runs from within the IDE. The Loom build files are actually
up-to-date, except for one detail: they don't include Hiveapp. I'd have to
release it. But Hiveapp is currently being developed as the base of two
projects; it has to be a separate project and can't be included in the build
of Loom. Then again, it's being developed and not releasable. The solution
to that problem is outlined in the Wiki: Meta-Projects (or, as I tend to
call it now, Units and Projects, where "Project" still isn't the right
word). OK, I admit: I'm also simply too lazy to finally write those build
files for Hiveapp. With them, I could simply use the concept of integration
builds in Ivy without having to actually release it ... :)

> 3.
> Maven1 used to have preGoal and postGoal extension points which
> allowed for a kind of rule-based build system. However, that whole
> notion was abandoned in Maven2, and we now have a pre-defined
> lifecycle with phases that are rather hard to establish. I am
> convinced that rule-based build systems are far superior to any
> 'programmatic ones'.

ACK. My first experiments in Loom with using Drools for building were very
encouraging. I really think this is the right approach. However, I'm still
not sure how to provide ways to customize the build without writing plugins;
this is a problematic point, IMHO. Then again, the Perfect Build System
shouldn't require users to do that, it should just do The Right Thing ... ;)

> 4. Maven's model doesn't make any separation between what is the
> build model and what is the project model. All is lumped into a huge
> POM, containing everything from new lifecycle instances, to
> configuration for plugins, to the location of the sources and much
> more. Management becomes a nightmare for larger projects. I.e. only
> inheritance of the POM is supported, and very little notion of
> 'delegation' is present (dependencyManagement being a small
> exception).

Agreed. We'll have to do that better.

> 5. Maven ties the dependencies strongly by version numbers. One
> quickly end up with multiple versions of the same transitive
> dependencies, which are sometimes impossible to solve at build time,
> and hard to solve at runtime (since most people don't use OSGi). The
> industry must promote more care about versioning and compatibility,
> and Maven's good intentions seems to be counter-productive.
> Minimum for Maven to support are 'version ranges', so that I can
> depend on 2.0+ or the equivalent [2,3) notation in OSGi.

Ivy actually does this already. By making Ivy an integral part of a build
system, we could use what's already there. Ivy is extensible; the Ant tasks
are just some classes that use the API of Ivy.
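As a side note, the "[2,3)" range notation quoted above takes fairly little code to match. A simplified sketch -- it handles only dotted numeric versions, not full OSGi versions with qualifiers, and the class name is made up:

```java
// Simplified sketch of OSGi-style version range matching, e.g. "[2,3)"
// meaning 2.0.0 <= v < 3.0.0. Real OSGi versions also carry a textual
// qualifier; this only compares up to three numeric segments.
public final class VersionRange {
    private final int[] low, high;
    private final boolean lowInclusive, highInclusive;

    public VersionRange(String range) {
        lowInclusive = range.charAt(0) == '[';
        highInclusive = range.charAt(range.length() - 1) == ']';
        String[] parts = range.substring(1, range.length() - 1).split(",");
        low = parse(parts[0].trim());
        high = parse(parts[1].trim());
    }

    public boolean includes(String version) {
        int[] v = parse(version);
        int lo = compare(v, low), hi = compare(v, high);
        return (lowInclusive ? lo >= 0 : lo > 0)
            && (highInclusive ? hi <= 0 : hi < 0);
    }

    // Pad missing segments with zeros, so "2" == "2.0.0".
    private static int[] parse(String v) {
        String[] s = v.split("\\.");
        int[] out = new int[] {0, 0, 0};
        for (int i = 0; i < s.length && i < 3; i++) {
            out[i] = Integer.parseInt(s[i]);
        }
        return out;
    }

    private static int compare(int[] a, int[] b) {
        for (int i = 0; i < 3; i++) {
            if (a[i] != b[i]) return Integer.compare(a[i], b[i]);
        }
        return 0;
    }
}
```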
However, to make it easy and painless to use would require us to take the
code apart and put it back together the other way round. Ivy does a good job
of resolving dependencies, but it's IMO a nightmare to use. It's probably
better to do something of our own.

But you shouldn't forget one important thing: making dependencies
package-based, which I think is a good idea, introduces a new level of
complexity. You'll now have to relate packages to artifacts, that's one
thing. The other thing is package dependencies: e.g., importing and using
org.drools implies using about 25 more packages within the same artifact
(JAR file), even though you don't use those classes directly. But declaring
"I'm using org.drools.*" isn't correct either, because you might not use
decision tables or the Drools compiler. So, we've now got two levels of
dependencies: artifacts, which depend on other artifacts; packages, which
imply the use of other packages; and the artifacts that actually contain the
packages. Moving from JAR hell to package hell?

> So, IMHO it is time to resurrect Silk based on new insights since the
> experimental Silk in 2005.

Yes, it definitely is. I'm still working on that one ... ;)

BTW: Because of problems with the serialization of Drools packages (which
currently partially disables the package cache in Loom), I was in contact
with the Drools developers. They were very interested in this idea, and they
will support such a project ("we'd like to eat our own dog food, keep us
up-to-date" -- and they're not happy with M2 either). Michael even suggested
writing a paper which outlines what I'm working on and which identifies the
problems and their solutions. In his opinion, that project has a good chance
of getting support from JBoss -- all of JBoss is unhappy with the currently
existing build systems.

BTW 2: There was recently a thread in de.comp.lang.java; the subject was
something like "Ant and Maven -- is there really nothing better?".
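(Coming back to the two dependency levels described above -- packages implying other packages, and packages mapped to the artifacts that contain them -- as a data structure, they might look roughly like this. All class, package and artifact names here are made up for illustration.)

```java
import java.util.*;

// Sketch of the two dependency levels described above: a package may
// imply further packages (org.drools pulling in its sibling packages),
// and each package must then be mapped back to the artifact that
// contains it before a ClassLoader can be built.
public final class PackageResolver {
    private final Map<String, Set<String>> implies = new HashMap<>();
    private final Map<String, String> containingArtifact = new HashMap<>();

    public void addImplied(String pkg, String... impliedPkgs) {
        implies.computeIfAbsent(pkg, k -> new HashSet<>())
               .addAll(Arrays.asList(impliedPkgs));
    }

    public void mapToArtifact(String pkg, String artifact) {
        containingArtifact.put(pkg, artifact);
    }

    // Transitively expand the implied packages, then collect the
    // artifacts containing them.
    public Set<String> artifactsFor(String rootPkg) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>();
        todo.push(rootPkg);
        while (!todo.isEmpty()) {
            String pkg = todo.pop();
            if (seen.add(pkg)) {
                todo.addAll(implies.getOrDefault(pkg, Collections.emptySet()));
            }
        }
        Set<String> artifacts = new TreeSet<>();
        for (String pkg : seen) {
            String a = containingArtifact.get(pkg);
            if (a == null) {
                throw new IllegalStateException("no artifact known for " + pkg);
            }
            artifacts.add(a);
        }
        return artifacts;
    }
}
```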
I've discussed that problem in this thread, also with people like Jochen
Theodorou (lead developer of Groovy), who isn't happy with M2 either. => The
demand is there; people are not happy with what exists; you and I are not
the only ones. If we come up with a good concept, that thing will work:
there will be a community, there will be support, and the users will say
"thank you" to the developers.

I think Silk should actually be developed very openly. Look at Groovy and
how it changed over time until it got where it now is. That was some form of
Extreme Programming: always reacting to what users wanted, prototyping,
trying things and trashing them, not being deadlocked on some specification.
I think one of the problems of M2 is the process: discuss, discuss again,
and discuss one more time -- and then do it exactly as discussed.

You have your problems, I have mine, and other people have theirs. The goal
is to solve all of those problems. This is also the reason why I'm putting
so much energy into the plugin system itself, much more than into the actual
build functionality: plugins must be easy to write and easy to maintain,
because this is the foundation for being flexible, for being able to react
to demands that you never thought of when you started that thing. "Wiki
brought to coding" is IMHO the right approach, but it won't work without
marketing; people have to know that there's something in the pipeline, even
if it's just for the feedback we need to understand the problem to its full
extent.

I think the world doesn't need "just another build system". What the world
really needs is "The Build System". If we do it, think big, very big -- or
leave it.

Cheers,
Raffi

--
The difference between theory and practice is that in theory, there is no
difference, but in practice, there is.
[EMAIL PROTECTED] · Jabber: [EMAIL PROTECTED]
PGP Key 0x5FFDB5DB5D1FF5F4 · http://keyserver.pgp.com

_______________________________________________
general mailing list
general@lists.ops4j.org
http://lists.ops4j.org/mailman/listinfo/general