That's very similar to our present process: for big changes, branch off, develop, and merge back.

I did that with the policy providers and other security work, then merged; the way I merged brought across all the change logs and commits made while creating that code.

For small changes, commits are made directly to trunk, provided a test run is performed first and passes.

We got into trouble when some very small changes to trunk were made that triggered unrelated test failures.

After realising the problems were related to race conditions and inadequate synchronization, I branched off with the qa_refactor branch.

That work is now almost complete and all tests are passing (I noticed four tests failed due to the Jenkins test environment; I'm just going to ignore those).

The plan shortly will be to incorporate other changes in trunk into qa_refactor before replacing trunk (qa_refactor was branched off a known stable build of trunk).
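The sync-then-replace step can be sketched with git (the SCM this thread is debating; svn's merge works similarly). Everything below is a minimal local sketch: the repository, file names, and commit messages are made up for illustration, with only the branch names taken from the mail:

```shell
# Bring trunk's newer commits into a long-lived qa_refactor branch
# before it replaces trunk. All content here is illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git checkout -qb trunk
git config user.email dev@example.org && git config user.name Dev

echo "stable build" > base.txt
git add . && git commit -qm "known stable build of trunk"

git branch qa_refactor            # branch off the known stable point

echo "small fix" > trunk_fix.txt  # trunk moves on with small commits
git add . && git commit -qm "small fix on trunk"

git checkout -q qa_refactor       # refactoring happens on the branch
echo "refactored" > qa.txt
git add . && git commit -qm "concurrency refactor"

# Incorporate trunk's changes before qa_refactor replaces trunk:
git merge -q -m "sync trunk into qa_refactor" trunk
ls   # qa_refactor now carries both the refactor and trunk's small fix
```

The merge direction matters: merging trunk into the long-lived branch first means conflicts are resolved (and re-tested) on the branch, not on trunk.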

Basically, the code review stage isn't happening as well as we'd like it to.

A development branch might be more suitable: I noticed that when Sim and I were working on trunk together, we were reviewing each other's code. So perhaps a unified / shared development branch might be more appropriate than the current trunk model.

In hindsight, it's lucky the small trunk commits caused those breakages; I was ready to release a codebase with concurrency bugs likely to show up in production.

It's important for people to remember that much of the Jini code was written prior to 2004, before the revised Java Memory Model (JSR 133) was ratified.

There are still well-designed features in the Jini API.

Regards,

Peter.

On 30/04/2013 7:24 AM, Dennis Reedy wrote:
If we don't have a process wrt RTC (review-then-commit), then it doesn't really matter what tools or 
SCMs you bring into the fold; it won't help. On most projects I've been on we 
typically create a branch for feature development or per Jira issue that 
represents a story or epic. When you consider your work done (all tests pass, 
code is committed or pushed in your branch, etc.), you are ready for peer review 
and inspection.

This can be done with svn, git or hg. It's easier with git or hg, but svn does 
not stop you from doing this at all. But without having this sort of process it 
doesn't matter what SCM you use or what tools you bring into play.

HTH

Dennis

On Apr 29, 2013, at 5:07 PM, Peter wrote:

There's a tool called Gerrit; it uses git. Reviewers can view two code 
versions side by side from a web page, and it also allows reviewer comments.
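For background, a Gerrit review is created by pushing a commit to a special ref on the Gerrit server. A minimal sketch follows; the remote URL, project name, and issue id are hypothetical, and the push itself is left commented out because it needs a real Gerrit instance:

```shell
# A change destined for Gerrit review is just an ordinary local commit.
# All names below (repo, file, RIVER-9999) are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.org && git config user.name Dev

echo fix > Fix.java
git add . && git commit -qm "RIVER-9999: illustrative fix for review"

# Pushing to refs/for/<target-branch> opens a review rather than
# updating the branch (hypothetical server and project):
#   git remote add gerrit ssh://dev@gerrit.example.org:29418/river
#   git push gerrit HEAD:refs/for/trunk
git log --oneline -1   # the change that would go up for review
```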

Is anyone familiar with these code review tools?

The issue doesn't appear to be the code; it appears that the current 
development framework doesn't allow review to be documented.

You can't expect to see only minor changes in actively developed software after 
3 years.

That's the other issue: the long release cycle. The best way to fix that is to 
volunteer some assistance.

I haven't worked to deadlines; I've spent a lot of time carefully reviewing my 
own code because of the lack of peer review. I hadn't expected that to appear 
as thrashing for solutions; rather, I viewed it as refinement or refactoring.

What sort of time frame are you suggesting for a 2.2.1 branch for changes to be 
incorporated in a measured fashion?

What resources are needed to perform this work?

How many releases are you proposing in this time frame?

Keep in mind there are a lot of very important race condition / visibility fixes 
that need to make it to release sooner rather than later.

Peter.


----- Original message -----
Hi Jeff:

Interesting point.  I don't think the foundation is against Git.  There
are now incubator podlings using it, and infra@ is beginning to support
it (http://git-wip-us.apache.org).  I think the issue, at least for
River, is just the friction of switching.  There are projects switching
over (Apache Flex, for instance), and as more of them switch, their
experiences will be helpful.

It's something to look into.  Perhaps as we split off modules from the
core, there's possibility of changes.

My original point, however, stands.  The way we've been doing things,
saying for sure what's in the trunk is an exercise in forensic
analysis.  Which is a problem because there are real commercial users of
the codebase, and we'd like to have more of them.  When a code librarian
or architectural review board at an enterprise evaluates whether
"jsk-platform.jar" version 2.3.0 goes into the list of approved jars,
they need to know precisely what the changes are.

Part of our problem is that we've gone so long without a release.  That
should not happen again.

But the other thing is that the "commit-then-review" model assumed that
changes to existing code would be minor in nature, and supported by the
test suite.  In that scenario, review is then trivial.  What we've had
is broad-ranging changes to both the code (including critical subsystems
like the DynamicPolicyProvider) and the test suites, plus API changes to
the Starter package, plus claims of huge throughput increases, all of
which makes review somewhat more challenging.  As I said before, it's
not that I question the outcome.  It's that I have trouble thoroughly
evaluating that outcome on this kind of scale.

So, the two-pronged approach I'm trying to promote here is to (1) come
up with a plan to vet code additions and changes that will eventually go
into a release stream after 2.2.1, then (2) see what we can do with the
process so that we don't need to repeat (1).

Cheers,

Greg.


On Sat, 2013-04-27 at 14:29, Jeff Ramsdale wrote:
I disagree that scm choice is orthogonal to workflow and policy because
some scms and related tooling support a given workflow much better than
others. It's the combination of a given scm and workflow that should be
evaluated versus others. I have a great deal of respect for the Apache
Foundation and its history, but this battle has already been fought and won
on GitHub. It is beginning to be a professional liability not to have git
experience, and with good reason.

Several years ago I had some epic debates on this topic with a Ruby hacker
friend of mine. I was on the svn side. At best I held my own in the
arguments, but now I use git exclusively and wouldn't go back. And by that
I mean I would seriously question taking a job at a shop that didn't use
git (or Mercurial) because of what it says about their resistance to
change. I know I'm not the only one that thinks this way as I've seen
plenty of evidence the Apache Foundation's reputation is suffering as a
result of its position with respect to git and the corresponding
pull-request-based development model.

-j


On Sat, Apr 27, 2013 at 12:30 AM, Greg Trasuk <tras...@stratuscom.com> wrote:

I think I might have been the one shooting you down ;-).  To be clear,
we can think about git (although there still seems to be some skepticism
within Apache), but speaking (as always) just for myself, I think the
question of scm choice is orthogonal to workflow and policy.  We should
work out workflow and policy first, then choose tools.    I also get
nervous about big changes.

Cheers,

Greg.

On Sat, 2013-04-27 at 00:51, Jeff Ramsdale wrote:
I sorta got shot down a little while ago for suggesting git be considered, 
but that's exactly the way branches and pull requests in git are handled. I 
understand it can be done with other scms; it's just easier with git.
Non-bindingly, I agree with you.

-j


On Fri, Apr 26, 2013 at 6:41 PM, Greg Trasuk <tras...@stratuscom.com> wrote:

Hi all:

Peter brings up an excellent point here, something that I've found
troubling in this release process.  It is exceedingly difficult to
identify what changes have been made to the code, and why, or to trace
changes to JIRA issues.  By extension it's very hard to identify which
revisions have been or should be applied to a branch, and what bugs have
been fixed in a branch.

Just throwing this out for discussion - I'm coming around to the view
that we should adopt a policy of review-then-commit for changes to the
trunk and any release branches.

I'm thinking that when someone does a bug fix or some work on a feature,
we should:

- do that work on a branch (call this the 'working' branch for this
discussion)
- package that work as either a patch or a list of revisions to merge
- put that package into a JIRA issue (which may already exist if we're
doing a bug fix).
- identify which branches the patch should be applied to
- review, discuss, and vote on the patch in the JIRA issue
- then apply the patch to those branches, referencing the JIRA issue in
the commit messages
- ideally, terminate the working branch.  Future work proceeds on a new
branch from the trunk.
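The steps above can be sketched with git (svn supports the same flow with branches and patch files). The repository, file names, and the JIRA id RIVER-9999 are all made up for illustration:

```shell
# Sketch of the proposed flow: work on a 'working' branch, package the
# work as a patch for a JIRA issue, then apply it to trunk citing that
# issue, and finally terminate the working branch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git checkout -qb trunk
git config user.email dev@example.org && git config user.name Dev
echo base > base.txt && git add . && git commit -qm "baseline trunk"

git checkout -qb working            # 1. do the work on a working branch
echo fix > bugfix.txt && git add . && git commit -qm "fix lookup bug"

# 2-3. package the work as a patch; this file would be attached to the
# (hypothetical) JIRA issue for review, discussion, and a vote:
git diff trunk > ../RIVER-9999.patch

git checkout -q trunk               # 5-6. after review, apply to trunk,
git apply ../RIVER-9999.patch       #      referencing the JIRA issue in
git add . && git commit -qm "RIVER-9999: fix lookup bug"  # the message

git branch -qD working              # 7. terminate the working branch
git log --oneline                   # trunk history now cites the issue
```

Because trunk's history then names the JIRA issue on every commit, generating an accurate release notice becomes a log query rather than forensic analysis.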

That way, it would be easy to trace changes in the trunk and release
branches, and hence to generate an accurate release notice.

Also, I'd suggest that the "FIX_VERSION" field in JIRA belongs to the
release manager for a given release - he or she will update the field in
the requisite issues as part of the release process.

Your thoughts?  Anyone familiar with any other projects' practices?

Greg Trasuk.


On Fri, 2013-04-26 at 16:58, Peter Firmstone wrote:
Checked RIVER-404; the jtreg test certs still fail.

It's been fixed in trunk; we need to remove it from the release notes
for 2.2.1.

I don't think RIVER-403 or RIVER-417 made it into 2.2.1 either.

I think these issues were marked as completed as part of a previous
release process that was left unfinished.

RIVER-403 really needs to be included though.

Regards,

Peter.

   *** Start test: Sat Apr 27 06:46:43 EST 2013
         [jtreg] Test 1: TestRMI$TestClientSubjectAfterSwitchConstraints:
         [jtreg] FAIL: Unexpected exception: java.lang.IllegalStateException: no object exported via this exporter
         [jtreg]        at net.jini.jeri.BasicJeriExporter.unexport(BasicJeriExporter.java:661)
         [jtreg]        at TestRMI$TestClientSubjectAfterSwitchConstraints.run(TestRMI.java:182)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:185)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:130)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:143)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:99)
         [jtreg]        at TestRMI.main(TestRMI.java:53)
         [jtreg]        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         [jtreg]        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         [jtreg]        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         [jtreg]        at java.lang.reflect.Method.invoke(Method.java:585)
         [jtreg]        at com.sun.javatest.regtest.MainWrapper$MainThread.run(MainWrapper.java:94)
         [jtreg]        at java.lang.Thread.run(Thread.java:595)
         [jtreg]
         [jtreg] Test 2: TestRMI$TestUnexportInServerImpl: force=true
         [jtreg] FAIL: Unexpected exception: java.rmi.server.ExportException: listen failed; nested exception is:
         [jtreg]        net.jini.io.UnsupportedConstraintException: Problem with certificates: CN=serverDSA, C=US
         [jtreg] java.security.cert.CertificateExpiredException: NotAfter: Mon Sep 05 00:24:12 EST 2011
         [jtreg]        at com.sun.jini.jeri.internal.runtime.BasicExportTable.export(BasicExportTable.java:84)
         [jtreg]        at net.jini.jeri.BasicJeriExporter.export(BasicJeriExporter.java:603)
         [jtreg]        at TestRMI$TestUnexportInServerImpl.run(TestRMI.java:240)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:185)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:130)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:134)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:143)
         [jtreg]        at UnitTestUtilities.test(UnitTestUtilities.java:99)
         [jtreg]        at TestRMI.main(TestRMI.java:53)
         [jtreg]        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         [jtreg]        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         [jtreg]        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         [jtreg]        at java.lang.reflect.Method.invoke(Method.java:585)
         [jtreg]        at com.sun.javatest.regtest.MainWrapper$MainThread.run(MainWrapper.java:94)
         [jtreg]        at java.lang.Thread.run(Thread.java:595)
         [jtreg] Caused by: net.jini.io.UnsupportedConstraintException: Problem with certificates: CN=serverDSA, C=US
         [jtreg] java.security.cert.CertificateExpiredException: NotAfter: Mon Sep 05 00:24:12 EST 2011
         [jtreg]        at net.jini.jeri.ssl.SslServerEndpointImpl$SslListenEndpoint.checkCredentials(SslServerEndpointImpl.java:726)
         [jtreg]        at net.jini.jeri.ssl.SslServerEndpointImpl$SslListenEndpoint.listen(SslServerEndpointImpl.java:658)
         [jtreg]        at com.sun.jini.jeri.internal.runtime.Binding$2.run(Binding.java:92)
         [jtreg]        at java.security.AccessController.doPrivileged(Native Method)
         [jtreg]        at net.jini.security.Security$5.run(Security.java:543)
         [jtreg]        at java.security.AccessController.doPrivileged(Native Method)
         [jtreg]        at net.jini.security.Security.doPrivileged(Security.java:540)
         [jtreg]        at com.sun.jini.jeri.internal.runtime.Binding.activate(Binding.java:89)
         [jtreg]        at com.sun.jini.jeri.internal.runtime.BasicExportTable.getBinding(BasicExportTable.java:221)
         [jtreg]        at com.sun.jini.jeri.internal.runtime.BasicExportTable.access$100(BasicExportTable.java:46)
         [jtreg]        at com.sun.jini.jeri.internal.runtime.BasicExportTable$LC.addListenEndpoint(BasicExportTable.java:247)
         [jtreg]        at net.jini.jeri.ssl.SslServerEndpointImpl.enumerateListenEndpoints(SslServerEndpointImpl.java:568)
         [jtreg]        at net.jini.jeri.ssl.HttpsServerEndpoint.enumerateListenEndpoints(HttpsServerEndpoint.java:835)
         [jtreg]        at com.sun.jini.jeri.internal.runtime.BasicExportTable.export(BasicExportTable.java:81)
         [jtreg]        ... 14 more
         [jtreg] Caused by: java.security.cert.CertificateExpiredException: NotAfter: Mon Sep 05 00:24:12 EST 2011
         [jtreg]        at sun.security.x509.CertificateValidity.valid(CertificateValidity.java:256)
         [jtreg]        at sun.security.x509.X509CertImpl.checkValidity(X509CertImpl.java:570)
         [jtreg]        at sun.security.x509.X509CertImpl.checkValidity(X509CertImpl.java:543)
         [jtreg]        at net.jini.jeri.ssl.Utilities.checkValidity(Utilities.java:774)
         [jtreg]        at net.jini.jeri.ssl.SslServerEndpointImpl$SslListenEndpoint.checkCredentials(SslServerEndpointImpl.java:698)
         [jtreg]        ... 27 more

Greg Trasuk wrote:
Hi all:

This build https://builds.apache.org/job/river-2.2-qa-jdk7/4/ says it
failed, but if you look closely at the console output, all the testing
passed.  There was a configuration error (mine) in the archiving results
post-build step.  There's another build scheduled which should show a
complete "pass" result.  Unfortunately, as we know, the test run is
about 16 hours.  And we're not the only project with long test runs, so
there's a substantial wait time before the run gets onto an executor
machine (just as an aside, I'm looking into setting up a Jenkins
instance of my own to run test builds in the future.  Perhaps others
should think of doing the same thing).  In any case, given the minimal
changes from 2.2, I'm now comfortable going forward with a release.

I'm currently building the 2.2.1 release candidate and am thinking of
calling for a vote shortly.  Should I go straight to the vote, or do
people want to review and independently test the 2.2 branch first?

Not that I'm calling a vote yet, but as I'm typing, I see the release
candidate has finished uploading to
http://people.apache.org/~gtrasuk/river/, so if you want to have a look
at it, go ahead, and let me know if there's any issues.  I will mention
that if you read the RAT reports, the "qa_jtreg" report names 224 files
without license headers.  These are ".policy" and other configuration
files with little creative content.  Further, they were in the previous
release as approved by the Incubator, so I think we're on safe ground
here, although they probably should have license headers added in the
trunk at some point.

Also, as a point of order, normally a vote would run for 72 hours.
Given that the weekend is coming up, I'm inclined to extend that to 96
hours.  Opinions?

Cheers,

Greg Trasuk.






