Daniel John Debrunner wrote:
Jeremy Boynes wrote:


Daniel John Debrunner wrote:


Hopefully I made it clear in my other e-mail, but the issue is not how
the code is laid out, but do we want to require Derby contributors &
committers to download multiple JDKs to build Derby, or a single one?
And how do I make progress on J2ME if developers cannot download a J2ME
environment?


I still see this as an effect of the one-jar-fits-all model that requires everything to be built all the time.


I think we are getting closer to understanding each other, I still
disagree that it's a packaging issue, but it does arise from the
requirement that a single compile environment produce all the code for
all the platforms.


:-)


Under this model, I could do all my development and testing using 1.4
and the JDBC 3.0 modules, and you would do all your development and
testing using J2ME and the JSR 169 modules.


So this would be a change from the current Derby model. I'm new to open
source, so I'm unclear how this would work in practice. If a contributor
or committer working only in JDK 1.4 submits or commits a patch that
breaks the build under JDK 1.3 and/or J2ME, what happens?

Would the patch be vetoed until it compiles in all the environments? If
so, then there is an implicit requirement for all developers to have all
the compile environments, and the process is more manual and thus more
error prone.

Or is the build in the other environments simply broken until someone
who cares about that platform fixes it? That is bad for quality, as
other undetected problems can be added to that environment while the
build is failing.

That situation cannot arise today because of the single compile model.


Yes it is a change in the model.

It deals with the issue that many developers will not have access to all platforms or environments. Often they are working from a single machine (right now, all I have access to is a laptop), and that machine may not even run all environments.

Often the only way a change can be tested is for it to be committed and then built and run by some volunteer on a platform they own; a few core (tier 1) platforms may be available on a continuous basis (à la Gump or Continuum), but others may not be available until someone decides to download and test a beta release.

One factor here is that the time "until someone who cares about that platform fixes it" is usually very, very short: broken builds don't linger because it's damned inconvenient and also rather embarrassing. No veto is needed; it's cultural. Usually the fix is done by the original developer.

Is quality affected? I don't believe so, because everyone sees the problem and any defect is corrected quickly.



We take care in shared code
not to use features that are not available in the other platform. We
both make progress, and rapidly because we can concentrate on the
feature we're working on rather than the cross-platform build.
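To make the "take care in shared code" point concrete, here is a rough sketch (the class and method names are hypothetical, not actual Derby code) of the kind of discipline involved: shared code restricts itself to APIs present in every target environment, so for instance String.split() (added in JDK 1.4) is avoided in favour of StringTokenizer and Vector, which exist back to JDK 1.1 and in J2ME/CDC.

```java
import java.util.StringTokenizer;
import java.util.Vector;

// Hypothetical helper illustrating code shared across JDK 1.3, JDK 1.4
// and J2ME/CDC targets. String.split() only exists from JDK 1.4 onward,
// so shared code sticks to StringTokenizer and Vector, which are
// available in all of the environments being targeted.
public class SharedUtil {

    public static String[] splitOnComma(String s) {
        Vector parts = new Vector();
        StringTokenizer st = new StringTokenizer(s, ",");
        while (st.hasMoreTokens()) {
            parts.addElement(st.nextToken());
        }
        String[] result = new String[parts.size()];
        parts.copyInto(result);
        return result;
    }

    public static void main(String[] args) {
        String[] r = splitOnComma("a,b,c");
        System.out.println(r.length);
    }
}
```

Whether either developer follows this discipline would only be verified when the other environment's build runs, which is exactly the trade-off being discussed.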


Once you downloaded the initial requirements to build Derby, how much
time have you spent worrying about the cross-platform build, which
occurs automatically? Beyond the initial download I believe it is not a
time drain.


Once it's all up and running, things are fine. However, there is a fair bit of complexity in the build to deal with compiling specific bits for specific JVMs; as we do more and more to deal with J2ME issues or to leverage bits of 1.4 and 1.5, this is only going to get worse. Eventually the everything-in-one-build model may not work at all - for example, is there even a J2ME or JDK 1.5 JDK available for OS X (a platform used by a lot of open source developers)?
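The kind of per-JVM wiring that makes the build complex can be sketched roughly in Ant (the target, property, and path names here are purely illustrative, not Derby's actual build.xml): each environment-specific module only compiles when the matching JDK has been configured, which is why a full build needs every JDK present.

```xml
<!-- Illustrative sketch only, not Derby's real build file: each
     JVM-specific module compiles against its own bootclasspath and
     is skipped when the corresponding JDK is not configured. -->
<target name="compile-jdbc3" if="jdk14.home">
  <javac srcdir="java/engine" includes="**/jdbc3/**" destdir="classes"
         bootclasspath="${jdk14.home}/jre/lib/rt.jar"/>
</target>

<target name="compile-jsr169" if="j2me.home">
  <javac srcdir="java/engine" includes="**/jsr169/**" destdir="classes"
         bootclasspath="${j2me.home}/lib/btclasses.zip"/>
</target>
```

Under the split-module model, each of these would instead live in its own module and only be built by the developers who target that environment.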


These types of issues affect all sizable open source projects, and breaking them down into separately developable modules is very common practice - HTTPD, Tomcat/Jakarta Commons, Geronimo, Ant, and Maven are examples at Apache alone.

--
Jeremy