On 24 January 2013 14:43, Rafael Schloming <r...@alum.mit.edu> wrote:
> On Wed, Jan 23, 2013 at 6:10 PM, Rob Godfrey <rob.j.godf...@gmail.com> wrote:
>
>> On 23 January 2013 19:09, Rafael Schloming <r...@alum.mit.edu> wrote:
>>
>> > I've added another wiki page that documents the proton release steps as
>> > best I can remember. I'll update it more during the 0.4 release:
>> > https://cwiki.apache.org/confluence/display/qpid/Proton+Release+Steps
>> >
>> > I think it's important to understand the overall release and testing
>> > process as it is a significant and perhaps underrepresented factor
>> against
>> > which to measure any proposals. I believe the build system requirements
>> > documented below are inherently incomplete as they don't recognize the
>> fact
>> > that the C build system is not just a developer productivity tool, it is
>> > also the installer for our end users. And before anyone says our end
>> users
>> > will just use yum or equivalents, all those packaging tools *also* depend
>> > on our build system both directly, and because we can't even supply a
>> > release for packagers to consume without a reasonable amount of direct
>> > install testing. To a good extent a standard looking C source tarball is
>> > pretty much the equivalent of a jar or jar + pom file in the Java world,
>> > it's really the only platform independent means of distribution we have.
>> >
>> >
>> It would be helpful if you could enumerate requirements which you believe
>> to be missing and add them to the existing wiki page.  I don't think anyone
>> is suggesting that the make install step should be broken in the source
>> tarball, so it's a little unclear to me what problem you are trying to
>> highlight above.
>>
>
> I believe it was suggested at one point that we not have a C source tarball
> but just export the entire tree as a single source tarball. This strictly
> speaking would not break the make install step, however it would have a
> serious impact on our ability to leverage others to test the C impl. Anyone
> downloading this would need to understand a great deal about the dual
> nature of proton and how it is structured just in order to know that they
> can ignore half the tree. Compare that with a standard C source tarball
> where I can hand it off to someone who knows nothing about proton and
> simply tell them to do a make install and then run one test script. Given
> the latter structure to our release artifacts there are *significantly*
> more resources we have access to in order to perform the testing necessary
> to do a quality release.
>

I'm not sure requiring "the ability to read a README file" is really
going to have a huge impact on our ability to leverage others. I'm also
not sure how widespread CMake use is (it's certainly less familiar to me
than autotools); certainly I expect people unfamiliar with CMake will
have to read the README anyway.



> I'll take a stab at distilling some requirements out of the above scenario
> and sticking them onto the wiki page, but I actually think the scenario
> itself is more important than the requirements. There's no disagreement
> that it would be nice to have a very standard looking C source tarball with
> minimal dependencies and so forth that can be used in the above manner,
> it's simply the relative priority of the requirement when it conflicts with
> developer convenience that is a source of contention.
>
>
>>
>> > It's also probably worth noting that perhaps the biggest issue with
>> > system tests in Java is not so much imposing maven on proton-c
>> > developers, but the fact that Java may not be available on all the
>> > platforms that proton-c needs to be tested on. My primary concern here
>> > would be iOS. I'm not an expert, but my brief googling seems to suggest
>> > there would be significant issues.
>> >
>> >
>> So, I think we probably need to consider what sort of tests are required,
>> and which languages it is appropriate to write any particular type of test
>> in.  For me tests in Java have some advantages over Python tests. Firstly
>> they allow interop tests between the two implementations within the same
>> process
>
>
> Can you elaborate on the benefits of this? It seems to me when it comes to
> interop testing that, to the extent you can get away with it, over the wire
> tests would be preferred. For example you could run proton-c on iOS via
> Java tests running on another system, to say nothing of testing against non
> proton implementations which would necessarily need to be over-the-wire.
>

The same arguments could be made about our existing tests. However,
being able to run in-process makes things much easier to automate - no
need to randomly assign ports, communicate the randomly assigned ports
to the other process, and so on.
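
Just to illustrate the sort of plumbing I mean, a rough Python sketch of
handing a free port to a separate server process might look like this
(the script name is made up, and note the window between releasing the
port and the server binding it):

    import socket, subprocess

    # ask the OS for an ephemeral port, then release it for the server to use
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()

    # hand the port to the other process on its command line;
    # "test_server.py" is purely illustrative
    server = subprocess.Popen(["python", "test_server.py", str(port)])

None of that is needed when both ends live in the same process.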

>
>> and secondly they will also be able to be used against any future
>> pure JavaScript Proton implementation (something we have planned to do but
>> not yet embarked upon).
>
>
> This is also true of python tests. In fact the whole point of a python test
> suite is that you can run python, java, and javascript all within the same
> JVM using purely Java tooling.

Except then we would have two levels of API mangling between the test
and the implementation.

Back when the tests were written against the non-idiomatic Python API
this wasn't so bad, as there was very little code actually in the
Python "binding".  Now there is a Python object model, and then a Java
object model.  Keith has already pointed out at least one instance
where the Python "binding" is doing things in its object
initialization that are not replicated in other bindings. In general
we are now in the situation where it's very hard to work out whether a
bug is in the implementation or the binding.

Frankly my experience of the current Python tests has been very poor.
Partly this is because of the nature of the tests, which tend to be
monolithic and unclear about what they are testing and why the expected
outcomes are "correct"; partly it is because attempting to debug
Python through Jython to Java turns out to be unpleasant and slow.
Moreover the test framework we have for Python is bespoke and has no
other tooling support, which makes integration hard.

Now obviously the answer to not liking the tests is to write some of
my own.  That is what we are planning to do, but we are planning to do
so in Java.  These tests will be able to be run against the C
implementation through the JNI binding, so rather than making these
tests "Java only", anyone who has access to a JVM will be able to run
them against any implementation which has a binding to the Java API.

>
>  A third issue for me is that when we start to
>> attempt more granular testing of things such as error handling, I will want
>> to ensure that the user experience is identical between the pure Java and
>> JNI binding implementations of the Java Proton API... if the tests are
>> being run through a second translation into the Python API then this is not
>> easily verifiable.
>>
>>
> Why is this not verifiable? The proton API in both python and Java have
> largely identical object models, and both languages support nearly
> identical exception models.
>
> I would expect that testing for specific
> exceptions from the Python API would quite directly force you to translate
> those exceptions with high granularity from the Java API, and then you
> could run the negative test suite against both pure Java and Java/JNI.
>
>
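
For what it's worth, the sort of negative test being described would
presumably look something like this (the class and exception names here
are illustrative, not the actual proton Python API):

    import unittest

    # "Transport" and "TransportException" are stand-ins for whatever the
    # real Python API exposes; only the shape of the test matters here
    from proton import Transport, TransportException

    class NegativeTest(unittest.TestCase):
        def test_garbage_header_rejected(self):
            transport = Transport()
            # feed something that is not a valid AMQP protocol header and
            # check that it surfaces as the expected exception type
            self.assertRaises(TransportException, transport.push, "NOT-AMQP")

Run once against proton-j and once against proton-c, the same suite would
at least tell us whether the two map a given error onto the same Python
exception.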
>> As a final aside, on the standard development environment many of us have
>> to work with, the installed version of Python is too old to support the
>> current Python client (lack of UUID, etc).
>>
>
> It's true there are a few pieces we depend on that aren't available in
> Python 2.4, however the qpid python client has a compat layer that provides
> these pieces if they are not there and makes the experience on 2.4 and
> newer Pythons identical. Pulling this into proton would be quite trivial.
>
>
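
For reference, the shim being described is small; a rough sketch (not
the actual qpid compat code) would be something like:

    try:
        from uuid import uuid4                  # present from Python 2.5 on
    except ImportError:
        import random

        def uuid4():
            # crude Python 2.4 fallback: 128 random bits in the usual
            # 8-4-4-4-12 layout; not a compliant RFC 4122 UUID
            h = "%032x" % random.getrandbits(128)
            return "%s-%s-%s-%s-%s" % (h[:8], h[8:12], h[12:16],
                                       h[16:20], h[20:])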
>>
>> Personally I think the more tests we have the better, and it's more
>> important to encourage people to write tests than to force the use of a
>> particular language to write them in.  I'd also suggest that we should be
>> writing at least some tests for each of the idiomatic bindings.
>>
>
> I never said we shouldn't write tests, however as you say above it's
> important to understand what kind of tests need to be written where.
> Writing engine/protocol tests in every binding has very limited utility,
> since exercising the engine from one binding will run the exact same code
> as exercising it from another; however, writing tests for a binding that
> verify that types are decoded properly is quite useful, as those tests
> would actually be exercising the binding code itself, which is unique to
> the binding language.
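
A put/get round trip through the binding is the kind of test meant here;
roughly (the function names are the C ones exposed by the swig wrapper,
quoted from memory, so treat them as approximate):

    from cproton import (pn_data, pn_data_free, pn_data_put_int,
                         pn_data_rewind, pn_data_next, pn_data_get_int)

    def test_int_round_trip():
        # exercises the binding's own marshalling of a typed value
        data = pn_data(16)
        try:
            pn_data_put_int(data, 42)
            pn_data_rewind(data)
            assert pn_data_next(data)
            assert pn_data_get_int(data) == 42
        finally:
            pn_data_free(data)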

Actually I think we need some way to verify that for each binding
every call to the API presented through the binding actually works.
If a change was made to the .h file tomorrow with no corresponding
change to the PHP / Ruby / Perl binding... how would we know that
those bindings had been broken?
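
Even something as crude as checking that a wrapper still exposes the
entry points we expect would catch that kind of breakage; a sketch for
the Python case (the list of names is illustrative - a real one would be
generated from the headers, and each binding would need its own
equivalent):

    import cproton

    # a handful of calls every binding is expected to expose
    EXPECTED = ["pn_message", "pn_message_free", "pn_data", "pn_data_free"]

    def test_binding_exposes_expected_calls():
        missing = [name for name in EXPECTED if not hasattr(cproton, name)]
        assert not missing, "binding is missing: %s" % ", ".join(missing)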

> As you point out, checking that exceptions are mapped
> into a specific binding in a particular way is also binding specific in the
> general case, however I think in the specific case of the Java binding a
> test suite written in python could easily be run against both the Java and
> JNI implementations and provide us with a high degree of confidence that
> they behave identically. The only case where behaviour could differ is if
> the python binding were to merge two error conditions into one, but this
> would presumably just be a bug as the python binding itself would be losing
> relevant information also.
>
> In general I'm quite interested in what you think fundamentally can't be
> tested from python vs what you think could be but is cumbersome or simply
> blocked for reasons X, Y, and Z. I think a clear statement of this would
> help identify areas we could improve (e.g. python 2.4 support) and also
> help inform us what kind of Java tests should be encouraged.
>

I'm not saying that things can't be tested from Python (with enough
thunking layers in between).  I think the same is also true of Ruby.
Possibly we could even write all our tests in C and then write a sort
of reverse JNI mapping where instead we wrap the Java code within a C
library.  I'm just stating that I think there is value in adding tests
in Java that can be run against both the pure Java and C
implementations, and that in doing so we will improve the test
coverage and better define the API.

My personal experience of the current setup has been (very) poor.
Attempting to implement proton-j by reference to the tests was a
deeply frustrating and depressing experience, and the move from the
straight SWIG binding to the idiomatic Python API made the experience
much worse.  We are now attempting to test the Java implementation
through a Python binding to Java, which we do not purport to support,
using a Python API that is barely documented.  Testing using Java will
be testing against a Java-C binding we do support and a Java
implementation we do support.  Given the lack of JVMs on some
platforms, clearly testing only in Java would not be sufficient...
however I don't believe anyone is suggesting that we remove or
deprecate the Python tests.

-- Rob
