On 24 January 2013 15:49, Rafael Schloming <r...@alum.mit.edu> wrote:
> On Wed, Jan 23, 2013 at 6:44 PM, Rob Godfrey <rob.j.godf...@gmail.com>wrote:
>
>> Firstly I think it would be helpful if you made clear the requirements you
>> consider to be essential, nice to have,  unimportant and/or detrimental.
>>
>> On 23 January 2013 20:17, Rafael Schloming <r...@alum.mit.edu> wrote:
>>
>> > On Wed, Jan 23, 2013 at 8:01 AM, Keith W <keith.w...@gmail.com> wrote:
>> >

[snip]

>> > Given the above workflow, it seems like even with a relatively small
>> change
>> > like adding a getter, the scripted portion of the syncing effort is going
>> > to be vanishingly small compared to the manual process of syncing the
>> > implementations. Perhaps I'm just envisioning a different workflow than
>> > you, or maybe I'm missing some important scenarios. Could you describe
>> what
>> > workflow(s) you envision and how the sync process would impact your
>> > productivity?
>> >
>> >
>> I differ strongly in my opinion here. Every time I need to drop out of my
>> development environment to run some ad-hoc script then there is overhead...
>>
>
> Don't you end up dropping out of your development environment anyways in
> order to build the C code?
>

No - presuming I have built the C at some point in the past and there
is no change to the SWIG part, then I don't need to drop out of my
environment at all to do a build / test.

>
>> Moreover if we are using svn to do this I presume we would be having to
>> check in any change before the sync could be made. This means that every
>> edit to a file now has to be followed by commit and sync (which would
>> obviously be an insane process).  Those of us behind corporate firewalls
>> and proxies experience very degraded response times when updating from the
>> Apache repository.
>>
>
> I wasn't suggesting syncing via the repository, I was suggesting syncing
> the directories in the local checkout, i.e. effectively have a script that
> just does 'cp -r ../proton-j/<blah...>/src bindings/<blah...>/src', but
> obviously with some logic in there to exclude .svn and the like and to work
> regardless of what directory you're in. You could even as you suggest below
> make the sync happen automatically from the build.
>

And which runs on Windows too? :-)
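
[Editorially, a minimal POSIX-shell sketch of the kind of sync step being
proposed; the `sync_api` name and the paths are illustrative, not part of
any actual proposal in the thread:]

```shell
# sync_api SRC DST: copy the shared Java API source tree from SRC into
# DST, excluding svn bookkeeping. Hypothetical sketch only; the real
# layout and script, if one were adopted, could differ.
sync_api() {
    src="$1"; dst="$2"
    mkdir -p "$dst"
    # A tar pipe copies the tree while --exclude drops the .svn
    # directories that a bare `cp -r` would carry across.
    (cd "$src" && tar cf - --exclude='.svn' .) | (cd "$dst" && tar xf -)
}

# e.g. from a proton-c checkout:
#   sync_api ../proton-j/proton-api/src bindings/java/src
```

[Which, of course, only runs where a POSIX shell and tar are available,
hence the Windows question.]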

Really this is a non-starter for me.  I haven't seen anything to
change my mind: having the same source in two places in the tree is
insane.  It is far more sensible IMHO simply to accept the fact that
both the C and the Java builds depend on a shared definition of the
API.

The same issue actually exists right now in that both depend on the
Python API ... it's just that that API is not formally defined
anywhere.

>
>>
>> Frankly I have doubts that any such sync script could be produced that
>> would work across all the different environments that developers may work
>> in.  Mostly though I remain totally unconvinced that there is a compelling
>> reason to do this.  Rather than having a "sync" script to keep two copies
>> in svn... why not just test whether we are in an svn checkout and
>> ../proton-j is present, and if so copy ../proton-j/proton-api?  Then in
>> the release process just copy the proton-j/proton-api source into the
>> relevant place in the tarball?
>>
>
> That's really not that different from what I'm proposing, modulo whether
> you do the copy automatically or not, the only real difference is whether
> you check in the copy or whether it just sits in your checkout during
> development and the release script does the copying. My issue with the
> release script doing the copying is that it makes the mapping from release
> tarball back to svn significantly less transparent. I know as a user of
> other open source projects I sometimes find myself wanting to find the svn
> source for a file included in a given release; I usually do this via Google
> and look at the file via viewvc. It would be quite confusing to be
> browsing the release branch and have it be that different from what is in
> the source tarball. I'd also worry about someone accidentally checking in a
> copy, something that happens fairly frequently with other files that are
> perpetually locally modified. As you can imagine that would make kind of a
> mess.

So instead of doing a copy, let's just have the source code in one
place in the repo, and have the source tarballs include that directory
(both the C and the Java tarballs), the same as they already have to
do with the tests.  Moreover, let's not fake a new directory
structure; let's just tar up from a higher level but exclude the bits
that we don't want for the C / Java tarballs respectively.
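
[A hypothetical shell sketch of the "tar up from a higher level with
excludes" approach; the `cut_tarball` name and the exclude lists are
illustrative, and `--exclude` assumes a GNU or bsd tar:]

```shell
# Hypothetical sketch of cutting both release tarballs from the one
# checked-in tree: each tarball is a strict subset of the proton tree,
# with the same paths as svn, and only the unwanted half excluded.
cut_tarball() {
    variant="$1"                  # "c" or "java"
    case "$variant" in
        c)    unwanted="proton-j" ;;
        java) unwanted="proton-c" ;;
        *)    echo "usage: cut_tarball c|java" >&2; return 1 ;;
    esac
    tar cf "proton-$variant.tar" \
        --exclude='.svn' \
        --exclude="proton/$unwanted" \
        proton
}
```

[Either tarball then unpacks to paths identical to the svn tree, which
speaks to the release-to-svn traceability concern raised above.]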

>
>
>
>>
>>
>>
>> >
>> > > 4. To switch to a particular SVN revision, simple SVN commands are run
>> > > (e.g. svn switch or svn update)
>> > > - Developer productivity
>> > >
>> > > 5. proton-c can be built, excluding its JNI binding, without requiring
>> > > non-standard tools*
>> > > 6. proton-c can be built, excluding its JNI binding, from a standalone
>> > > checkout of the proton-c directory
>> > > - Developer productivity / tool familiarity
>> > >
>> > > Neutral
>> > >
>> > > 1. A "tarball" source release of proton-c can be built by a user
>> > > without an external dependency on any other part of proton, e.g.
>> > > proton-api.
>> > > 2. The aforementioned proton-c tarball release can be produced by
>> > > performing a simple "svn export" of proton-c.
>> > > - If I were building proton-c for my platform for tarball, I would
>> > > also want to run the tests to be sure proton-c functions correctly.
>> > > For this reason I question the usefulness of a proton-c tarball.  I
>> > > would want a tarball that included the whole tree including the tests.
>> > >
>> >
>> > The proton-c tarball does include the tests directory. The tests
>> directory
>> > is just pure python code, so once you've installed proton-c onto your
>> > system, you can run any of the proton tests just like you would run any
>> > normal python script. As I mentioned in another post, the inclusion of
>> > tests under both proton-c and proton-j is the one deviation in directory
>> > structure from a pure svn export, and even this much is kind of a pain as
>> > there is no way for the README to actually describe things properly
>> without
>> > being broken in either the svn tree or in the release artifact.
>> >
>> >
>> So, if we want to keep the source and the svn the same, wouldn't it make
>> more sense for the release tarballs to actually just be strict subsets of
>> the proton tree? That is, the proton-j tarball would be
>>
>> proton
>>   |
>>   +-- proton-j
>>   |
>>   +-- tests
>>
>> and the proton-c tarball would be
>>
>> proton
>>   |
>>   +-- proton-c
>>   |
>>   +-- tests
>>
>>
>> If we wanted to avoid a copy of the java API in the build then we could
>> then actually just reference ../proton-j and include that subset of
>> proton-j in the release source tarball?
>>
>
>  I'm not sure that helps with the README situation. You now effectively
> have 3 views of the same tree, and it would be difficult to imagine writing
> a single README that would make sense in every case. You'd pretty much have
> to structure each sub directory as a standalone tree and each would need
> their own READMEs.

Why?  You build from the top level.  The only thing that is different
is that you'd probably have a README_JAVA and a README_C, and you'd
exclude the one you didn't want.

-- Rob

> As is, the C tarball basically works on the assumption
> that the tests directory is a standalone tree which it simply includes to
> eliminate the extra step of downloading it, so it's similar in that respect
> but avoids the extra depth.
>
> --Rafael
