On Dec 16, 2006, at 4:49 PM, David Jencks wrote:
Why do you want to rebuild released jars? I certainly think the automated system should be rebuilding all the non-released code we know about, but I don't understand the point of ever rebuilding released code. Is this because you think the jar in the remote repo will change? I would think saving the expected hashcode and comparing with the actual hashcode would be more reliable.

Two reasons... one, to always be 100% sure that the codebase reflects the released binary; and two, to ensure that the codebase is always buildable.

This moves the trust from the released binary on some webserver somewhere back to the source code repository.


I don't really see rebuilding from source as a defense against the remote repo changing. Everyone else is going to be using the remote repo, so even if we have a more correct locally built version everyone else will be screwed.

I don't see it that way at all... by building from source, if anything does happen to the remote artifacts, then it will be quickly apparent what happened... and more importantly, the automated builds will keep working. But as mentioned above, there is more to building from source than simply defending against artifacts being altered or removed.


I would think using an svn based repo or keeping our own audit trail (such as the hashes for every released artifact we use) would be more reliable. If some released artifact changes, I think no automated recovery is possible: someone has to figure out why and figure out what to do about it, since maven allegedly guarantees that it will never happen.

Sure it will happen... it has happened and will continue to happen. Just because Maven's repo has a policy against artifact removal or tampering does not mean that someone can't hack into the system and change it... or a catastrophe might occur and lose a bunch of data. More to the point, other groups run their own repositories, and there is no way to ensure that they follow the same set of policies as the Maven folks do... and even with those policies in place, it is still possible that artifacts may change. So it's not really a trustworthy repository of artifacts that we can rely upon to have the right bits to build past releases from.
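For what it's worth, the hash audit trail David suggests is easy to automate. Here is a minimal sketch (the class name `ArtifactAudit` and its command-line interface are my own invention, not anything we have in place) that computes the same SHA-1 digest Maven publishes alongside each artifact as a `.sha1` file, and compares it against a recorded value:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ArtifactAudit {

    // Compute the hex SHA-1 digest of an artifact file -- the same
    // digest Maven publishes next to each artifact as a .sha1 file.
    static String sha1(Path artifact) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        try (InputStream in = Files.newInputStream(artifact)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Usage: java ArtifactAudit <artifact> <expected-sha1>
    public static void main(String[] args) throws Exception {
        Path artifact = Path.of(args[0]);
        String expected = args[1]; // recorded hash from our audit trail
        String actual = sha1(artifact);
        if (!actual.equals(expected)) {
            throw new IllegalStateException("Artifact changed! expected "
                    + expected + " but got " + actual);
        }
        System.out.println("OK: " + artifact + " matches recorded hash");
    }
}
```

Run nightly against every released artifact we depend on, this would flag tampering immediately... but as I said, detecting a change is not the same as keeping the builds running.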

So, to mitigate some of that risk, I set up our components to build from source... so that we can always have automated builds running. Even in interim periods like we just saw, where artifacts had not yet propagated to central, we could still have ensured that the build was functional. But to do that effectively means that some practices for component integration need to be followed.


maybe I'm just being stupid.... but I'm not getting it yet.

No, you are not stupid...

I think my goals are simply a lot different than many others' regarding these builds. I want to build a system that can be used to run builds 100% of the time, regardless of what dependencies have propagated into what repo; builds that can be easily distributed across nodes, using the SNAPSHOT artifact outputs of one build as inputs to another; builds where you can easily run a set of tests afterwards and see the entire chain of changes which were pulled in, to correlate test passes/failures back to actual changes.

I am close to getting this done... recent work has created some extra problems for me to solve, though. But you will be able to run TCK tests on a specific CTS build, which was generated from a specific server build, which was using a specific openejb2 build, etc.

Once the specs that have been released make it to central, then I can fix up the AH configuration and get back on track; we just lost a few days.

BUT... when the time does come that a spec needs to be fixed, or a new spec is introduced, then expect the automated builds to start failing with missing dependencies if there is no SNAPSHOT deployed... and if there is a SNAPSHOT deployed, then expect that at times builds may be using different artifacts for those modules... there is no way for me to determine where they came from or what specific code they contain.
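The ambiguity comes from how Maven resolves versions. A pinned release coordinate always resolves to the same artifact (assuming the repo is never altered), while a SNAPSHOT resolves to whatever was most recently deployed. The coordinates below are hypothetical, just to illustrate:

```xml
<!-- Pinned release: always resolves to the same artifact,
     assuming the repository is never altered. -->
<dependency>
  <groupId>org.apache.geronimo.specs</groupId>
  <artifactId>geronimo-ejb_3.0_spec</artifactId>
  <version>1.0</version>
</dependency>

<!-- SNAPSHOT: resolves to the most recently deployed build, so two
     runs of the same build can pull different bits, with no record
     of which source revision produced them. -->
<dependency>
  <groupId>org.apache.geronimo.specs</groupId>
  <artifactId>geronimo-ejb_3.0_spec</artifactId>
  <version>1.1-SNAPSHOT</version>
</dependency>
```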

And specs will change... this does happen... new ones do get added, bugs are found, license muck might need to change again, blah, blah, blah.

I was trying to avoid any automated build breakage when we need to make any of those changes to our specs... but I guess I will just have to deal with it later, when it does happen and shit starts breaking. Or maybe I will just leave it broken and let someone else fix it... though my gut tells me it's gonna be me.

--jason
