On Wed, Jan 23, 2013 at 6:44 PM, Rob Godfrey <rob.j.godf...@gmail.com> wrote:
> Firstly I think it would be helpful if you made clear the requirements you
> consider to be essential, nice to have, unimportant and/or detrimental.
> On 23 January 2013 20:17, Rafael Schloming <r...@alum.mit.edu> wrote:
> > On Wed, Jan 23, 2013 at 8:01 AM, Keith W <keith.w...@gmail.com> wrote:
> > > Essential
> > >
> > > 3. To change proton-api, all that is required is to edit a Java file.
> > > - Developer productivity
> > >
> > This seems to be kind of a leading requirement, so to speak, or at least
> > it's phrased a little bit oddly. That said, I would never argue with it
> > for most of the Java files; however, in the case of the API files I don't
> > see how you're ever going to be able to stop after just editing the API.
> > Because we have two implementations, we're fundamentally stuck with
> > manually syncing the implementations themselves whenever a change to the
> > interface occurs. By comparison, the highly automatable task of syncing
> > the API files themselves seems quite small. I'm imagining most changes
> > would look something like this: say we want to add a getter to the
> > Message interface, we would need to:
> I think it's worth considering two different cases
> 1) The API change is purely on the Java side... there is no corresponding
> change to the C API. This may be to add some sort of convenience method,
> or simply a refactoring.
> In this case the developer making the change needs only to work in Java;
> there will be two implementations of the interface to change (in two
> different source locations), but it is all rather trivial.
Is this actually possible? Wouldn't you at least need to build/run the C so
that you know there is actually no impact on the C impl? Even if you're
just calling it differently it could tickle a bug.
> 2) The API change affects both C and Java.
> In this case either a single developer has to commit to making the change
> in both the C and the Java, or the API change has to have been discussed
> before work commences and Java and C developers will need to work
> together. If there is a single developer or developers working very
> closely together then I would suggest that the steps would in fact be:
> 1. edit the Message interface / edit the message.h file
> 2. write and/or modify a test (and Python binding if necessary)
> 3. edit the JNI binding to use the SWIG generated API
> 4. edit the C / Pure Java
> 5. run the tests against the C / Java
> (6. modify other bindings if necessary)
> repeat steps 4 and 5 until they pass.
> In the case where the C and Java developers are separated by time/distance
> then the build / tests on one side will be broken until the implementation
> catches up. For the sake of politeness it is probably better to ensure
> that at all points the checked in code compiles even if the tests do not
> pass. For cases where the changes to the API are additions then it should
> be relatively easy to make the changes in such a way as to simply have any
> tests relating to the new API be skipped. For cases where the C leads the
> Java, the Java implementation can simply throw
> UnsupportedOperationException or some such. Where the Java leads the C we
> can throw said exception from the JNI binding code and leave the .h file
> unchanged until the C developer is ready to do the work.
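To make the stubbing concrete, a Java-leads-C stub could be as simple as
the following sketch (the class and method names here are hypothetical, not
the actual proton-j API):

```java
// Hypothetical JNI-backed implementation stub; the class and method names
// are illustrative, not the actual proton-j API.
public class JNIMessage {
    // The Java API has moved ahead of the C implementation: stub the new
    // accessor out until the corresponding pn_message_* call exists in
    // message.h, at which point this body becomes a real JNI call.
    public String getUserId() {
        throw new UnsupportedOperationException(
            "getUserId is not yet implemented by the C library");
    }
}
```

The C-leads-Java case would be symmetric: the pure-Java implementation
carries the same kind of stub until it catches up.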
> Only for cases where there is modification to existing APIs does it seem
> that there may be occasions where we could not have a consistent build
> across components, and I would strongly recommend that any change where the
> Java and C are being worked on in such a fashion should take place on a
> branch, with a merge to trunk only occurring when all tests are passing
> against all implementations.
> > 1. edit the Message interface
> > 2. write and/or possibly modify a test
> > 3. edit the java Message implementation
> > 4. run the tests against java, if they don't pass go to step 2
> > 5. now that the java impl passes the tests, run the tests against the C
> > impl
> > 6. if the sync check fails on the C build, run the sync script
> > 7. edit the message.h file
> > 8. edit the message.c implementation
> > 9. edit the adapter layer between the C API and the Java interfaces
> > 10. run the tests against the C, if they don't pass go to step 8
> > 11. run the tests against both, just to be sure
> > 12. check in
> > Given the above workflow, it seems like even with a relatively small
> > change like adding a getter, the scripted portion of the syncing effort
> > is going to be vanishingly small compared to the manual process of
> > syncing the implementations. Perhaps I'm just envisioning a different
> > workflow than you, or maybe I'm missing some important scenarios. Could
> > you describe the workflow(s) you envision and how the sync process would
> > impact your productivity?
> I differ strongly in my opinion here. Every time I need to drop out of my
> development environment to run some ad-hoc script then there is overhead...
Don't you end up dropping out of your development environment anyway in
order to build the C code?
> Moreover if we are using svn to do this I presume we would be having to
> check in any change before the sync could be made. This means that every
> edit to a file now has to be followed by commit and sync (which would
> obviously be an insane process). Those of us behind corporate firewalls
> and proxies experience very degraded response times when updating from the
> Apache repository.
I wasn't suggesting syncing via the repository, I was suggesting syncing
the directories in the local checkout, i.e. effectively have a script that
just does 'cp -r ../proton-j/<blah...>/src bindings/<blah...>/src', but
obviously with some logic in there to exclude .svn and the like and to work
regardless of what directory you're in. You could even, as you suggest
below, make the sync happen automatically from the build.
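To be concrete, the kind of logic I have in mind is small enough to sketch
in a few lines of python, which would also sidestep the cross-environment
concerns since there's no shell involved. The paths below are purely
illustrative placeholders, not the actual tree layout:

```python
# Hypothetical sync helper: mirror the shared Java API source into the
# C tree's binding directory, skipping .svn metadata. All paths here are
# illustrative placeholders, not the actual proton tree layout.
import os
import shutil

def sync_tree(src, dst):
    """Replace dst with an exact copy of src, excluding .svn directories."""
    if os.path.exists(dst):
        shutil.rmtree(dst)
    shutil.copytree(src, dst, ignore=shutil.ignore_patterns(".svn"))

# Intended invocation: resolve paths relative to the script itself so it
# works regardless of the directory it is run from, e.g. (hypothetical):
#   root = os.path.dirname(os.path.abspath(__file__))
#   sync_tree(os.path.join(root, "..", "proton-j", "proton-api", "src"),
#             os.path.join(root, "bindings", "java", "src"))
```

Hooking something like this into the build would keep the copy up to date
without any manual step.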
> Frankly I have doubts that any such sync script could be produced that
> would work across all the different environments that developers may work
> in. Mostly though I remain totally unconvinced that there is a compelling
> reason to do this. Rather than having a "sync" script to keep two copies in
> svn... why not just test if in a svn directory and ../proton-j is present,
> then copy ../proton-j/proton-api. Then in the release process just copy
> the proton-j/proton-api source into the relevant place in the tarball?
That's really not that different from what I'm proposing, modulo whether
you do the copy automatically or not; the only real difference is whether
you check in the copy, or whether it just sits in your checkout during
development and the release script does the copying. My issue with the
release script doing the copying is that it makes the mapping from release
tarball back to svn significantly less transparent. I know as a user of
other open source projects I sometimes find myself wanting to find the svn
source for a file included in a given release; I usually do this via google
and view the thing via viewvc. It would be quite confusing to be
browsing the release branch and have it be that different from what is in
the source tarball. I'd also worry about someone accidentally checking in a
copy, something that happens fairly frequently with other files that are
perpetually locally modified. As you can imagine, that would make kind of a
mess.
> > > 4. To switch to a particular SVN revision, simple SVN commands are run
> > > (e.g. svn switch or svn update)
> > > - Developer productivity
> > >
> > > 5. proton-c can be built, excluding its JNI binding, without requiring
> > > non-standard tools*
> > > 6. proton-c can be built, excluding its JNI binding, from a standalone
> > > checkout of the proton-c directory
> > > - Developer productivity / tool familiarity
> > >
> > > Neutral
> > >
> > > 1. A "tarball" source release of proton-c can be built by a user
> > > without an external dependency on any other part of proton, e.g.
> > > proton-api.
> > > 2. The aforementioned proton-c tarball release can be produced by
> > > performing a simple "svn export" of proton-c.
> > > - If I were building proton-c for my platform for tarball, I would
> > > also want to run the tests to be sure proton-c functions correctly.
> > > For this reason I question the usefulness of a proton-c tarball. I
> > > would want a tarball that included the whole tree including the tests.
> > >
> > The proton-c tarball does include the tests directory. The tests
> > directory is just pure python code, so once you've installed proton-c
> > onto your system, you can run any of the proton tests just like you
> > would run any normal python script. As I mentioned in another post, the
> > inclusion of tests under both proton-c and proton-j is the one deviation
> > in directory structure from a pure svn export, and even this much is
> > kind of a pain, as there is no way for the README to describe things
> > properly without being broken in either the svn tree or in the release
> > artifact.
> So, if we want to keep the source and the svn the same, wouldn't it make
> more sense for the release tarballs to actually just be strict subsets of
> the proton tree? That is, the proton-j tarball would be
> +-- proton-j
> +-- tests
> and the proton-c tarball would be
> +-- proton-c
> +-- tests
> If we wanted to avoid a copy of the java API in the build then we could
> then actually just reference ../proton-j and include that subset of
> proton-j in the release source tarball?
I'm not sure that helps with the README situation. You now effectively
have 3 views of the same tree, and it would be difficult to imagine writing
a single README that would make sense in every case. You'd pretty much have
to structure each subdirectory as a standalone tree, and each would need
its own README. As is, the C tarball basically works on the assumption
that the tests directory is a standalone tree which it simply includes to
eliminate the extra step of downloading it, so it's similar in that respect
but avoids the extra depth.