From: Sam Ruby [mailto:[EMAIL PROTECTED]]
>
> [[ I'm choosing to respond to a private e-mail publicly ]]

Hm.

> Geir Magnusson Jr. wrote:
> > I mean, I think it's a great tool and all, but the cross dependency
> > that gump seems to create in this nightly check is artificial, in
> > that there seems to be an implicit assumption that the current
> > CVS of project A is fair game to try to use at any moment in
> > project B that uses it.
>
> It would be nice if the next release of Turbine works not
> only with the previous release of Velocity, but also
> (as near as humanly possible) the next one.

Of course. But Turbine works with a specific version of Velocity and is tested with that.

> Example 2: Turbine depended on an interface that log4j removed. Once this
> was identified, log4j added the interface back and deprecated it, agreeing
> that it would be present (albeit deprecated) in the next release, and then
> removed in the following one.

I am not complaining about how gump monitors things - I actually think it's a good thing. Vigorous regression testing is good and will keep us honest and clean :)

I think that there are two things that are important:

1) Testing the project as delivered.
2) Testing to catch future potential problems.

I just worry that announcements of the 'future potential problems' could be misleading to a new user who is looking at Velocity to see if it's worth investing some time to experiment with. If they think it's 'broken' because of a gump message, then people would be getting the wrong impression.

Not everyone (including me) understands exactly what Gump is testing (does it do full dependency analysis? does Gump bootstrap itself with a fresh nightly snapshot of the tools it uses?), so if Gump could at least qualify the problem as either 'something is wrong now' or 'you might have a problem later because...', it would be even more valuable.

> Frequent testing of this nature also helps in other ways. I'm sure that
> jvz would have found and fixed the problem that was identified over the
> weekend himself, but having the symptoms and a test case made available
> within hours of the change - while it was still fresh in his memory -
> coupled with the knowledge that the test worked prior to the change should
> have made this easy to address.

Did you see what he did? He just removed the offending ant task, and was a little surprised: he stated it worked fine using the ant that we choose to include with Velocity.

And again - I'm not arguing that iterative testing is bad. I do it personally whenever I make a change to velocity: I download a fresh new source tree, build all the targets, build the javadoc, build the docs, etc.

What would be nice is a 'Gump server' somewhere - so I can give it an email address and tell it to gump the vel CVS in a way that I specify - give it a list of build targets or something - then it could test in an independent way and email me the results.

> > If we add a new feature to Velocity, Turbine isn't broken if turbine
> > includes a Velocity jar into its project, right? It is up to the
> > Turbiners to make the decision to get a new version of Vel, test it,
> > and include it in their project.
> >
> > In Velocity, we use JDOM, Xerces, et al: we prefer to simply take
> > a version or snapshot, test it, and include it. Then we are confident
> > that it will work for our users.
>
> Turbine builds upon multiple components.
> If Velocity only works against Xerces 1_2_1, and some other component
> that Turbine requires only works against Xerces 1_2_3, then composing
> a system with the four components is not possible.

Velocity doesn't depend on anything else for its *runtime* behavior - we use those bits for building and for generating documentation, i.e. as part of the packaging and delivery.

Turbine depends upon us for its runtime behavior [possible behavior - because you don't have to use Velocity in Turbine], but as you just noted above, specific versions may be important. It would be interesting to have real dependency analysis: vel could declare what it depends upon for runtime, turbine would too, and then gump could help analyze that. (I put a rough sketch of what I mean at the bottom of this mail.)

> Cocoon has faced this problem (specifically with xerces) on a number of
> occasions.
>
> > I know you don't like binaries in CVS, but I disagree for exactly this
> > reason - there is no other way to rely on software components if you
> > can't control what version you use.
>
> If you are a top level component, you arguably can have considerable say
> into what versions your customers use. If your component is designed to be
> embeddable, then you must be prepared to give up much of that control.

No - your customers choose a version, test with it, and include it. It's up to you as a low-level component to be as backwards compatible as possible - and that is something to take up with your user community (customers) when you want to make a change, if you want to keep them :)

> Many of my objections to checking in binaries are addressed by my obtaining
> a cable modem and by gump. Actually, to be more precise, what I object to
> is mixing editable files with derived files, and the amount of my objection
> is inversely proportional to the "distance" between the two.

These are entirely different things, though. I don't mind derived stuff either, as in the case of documentation. It's nice that it's all there when a user downloads the package and wants to look at it.

> A cvs repository which contained only jars which were certified to
> work together would be a peachy idea. Because the cvs "HEAD" revision
> of most open source projects is fluid, one doesn't quite get this level
> of confidence with jars mixed with source.

But until you get that, we have to trust each project to produce a working distribution, snapshot, milestone or release. I assume it is in each project's interest to make life as easy as possible for users. Given the space that Velocity competes in (with JSP, WebMacro, et al as alternatives), it's important to us that a user can just get it and go with little pain.

> And jakarta-alexandria's practice of checking in generated code was
> simply an accident waiting to happen. Jakarta-site's practice of
> checking in generated html is close to this extreme, but thankfully
> there the consequences of failure aren't as dire.

We check in generated HTML for our site docs as well, and haven't had a problem yet AFAIK. The nice part is that you know exactly what the public site will look like, because you can fully test it on a local machine before checking in. There is also the degree of control that cvs provides: with a cvs update on the server, I know exactly what the HTML will be.

> In any case, because many projects check in binaries, the path of "working
> against the previous version" is well tested. I'm just trying to
> give the "next release" equal time.
>
> My reason for providing the link to "shit happens" is that I don't get
> upset about individual failures. As long as more good comes out of this
> than bad, we all benefit.

We don't get upset either - it's good that they are caught. It would be interesting to make the gumpization process smarter, so that it tests what we declare we want to deliver (making gump's detection of an aberration that much more valuable) separately from the "it's possible there will be a problem in the future..." messages.

> And I am quite willing to be patient. It takes time to overcome the
> objection "but this is the way I have always done it".

That isn't what I am saying. I hope that's clear. Directed change is good.

geir
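
P.S. - on the runtime dependency idea above: this is not Gump's actual descriptor format, and every element, attribute and project name below is made up, but something roughly like this is what I'm picturing. Each project would declare which dependencies it needs at build time versus at runtime, and whether they are optional:

  <project name="velocity">
    <!-- needed only to build and to generate docs, not at runtime -->
    <depend project="jdom" scope="build"/>
    <depend project="xml-xerces" scope="build"/>
  </project>

  <project name="turbine">
    <!-- needed at runtime, but only if you choose the Velocity service -->
    <depend project="velocity" scope="runtime" optional="true"/>
  </project>

Gump (or anything else) could then walk the runtime declarations and tell us whether a given set of versions can actually be composed, instead of reporting every build-time breakage the same way.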
