Simon,
  thanks, that gives me some ideas of where to look.  I think I'll begin
by looking at A, B and C, with one eye on D and G.

Kelvin

2009/5/28 Simon Laws <[email protected]>:
> On Thu, May 28, 2009 at 3:59 PM, kelvin goodson <[email protected]> 
> wrote:
>> In running the OASIS tests in the 2.x otests directory, I've been
>> having trouble ensuring what seems like a "successful" outcome is
>> really successful when the expected outcome is an exception being
>> thrown.  The OASIS test infrastructure requires only that tests which
>> should fail do fail; a very weak postcondition that can be satisfied
>> in all kinds of circumstances, including a badly set up Eclipse project.
>> There are plans afoot for the test infrastructure to permit an
>> implementation to enforce tighter postconditions, and I was hoping to
>> be able to begin tabulating more precisely the nature of the
>> exceptions that Tuscany will throw for each test case that expects
>> failure. I was also hoping that there was some generic postcondition I
>> could enforce, such as all encountered Exceptions being derived from a
>> given Tuscany Exception parent class, but that's not the case.  I was
>> wondering if there have been discussions concerning exceptions,
>> exception handling etc. that might help me understand what Tuscany
>> should be throwing in any given circumstance.
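>>
>> (By way of illustration, the kind of generic postcondition I had in
>> mind would look something like the sketch below.  The TuscanyException
>> parent class is hypothetical - no such common root exists today, which
>> is exactly the problem:)
>>
>>     import static org.junit.Assert.assertTrue;
>>     import static org.junit.Assert.fail;
>>
>>     public class NegativeTestPostcondition {
>>
>>         // Hypothetical common parent for all Tuscany failures.
>>         static class TuscanyException extends RuntimeException {
>>             TuscanyException(String message) { super(message); }
>>         }
>>
>>         // Stand-in for deploying one OASIS negative test contribution.
>>         static void deployContribution() {
>>             throw new TuscanyException("invalid composite");
>>         }
>>
>>         @org.junit.Test
>>         public void expectedFailureIsATuscanyFailure() {
>>             try {
>>                 deployContribution();
>>                 fail("expected the contribution to be rejected");
>>             } catch (Exception e) {
>>                 // The tighter postcondition: any failure must surface
>>                 // as a Tuscany-defined exception, not an arbitrary one.
>>                 assertTrue("unexpected exception type: " + e.getClass(),
>>                            e instanceof TuscanyException);
>>             }
>>         }
>>     }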
>>
>> Kelvin
>>
>
> Hi Kelvin
>
> Good questions. The code at present isn't very clean with regard to
> how exceptional conditions are handled.
>
> There are two types of error that can occur when you read a
> contribution into the runtime:
>
> 1/ A user error - The user has got something wrong in the contribution
> and could reasonably be expected to fix it and try again.
> 2/ A system error - Some systematic error (out of memory?) has
> occurred that will likely require operator assistance.
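>
> (As a sketch only, that split could be captured with two exception
> roots - these class names don't exist in the codebase today:)
>
>     // ContributionUserException.java - user errors: the contribution is
>     // at fault; checked, so the caller must handle it and can tell the
>     // user what to fix before retrying.
>     public class ContributionUserException extends Exception {
>         public ContributionUserException(String message) {
>             super(message);
>         }
>     }
>
>     // ContributionSystemException.java - system errors: the runtime or
>     // environment is at fault (out of memory, broken classpath, ...);
>     // unchecked, since the caller usually can't recover and an operator
>     // has to step in.
>     public class ContributionSystemException extends RuntimeException {
>         public ContributionSystemException(String message, Throwable cause) {
>             super(message, cause);
>         }
>     }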
>
> I don't think that we have been very consistent to date, so this would
> be a good opportunity to tidy this up in 2.x before going on to assign
> errors to negative test cases.
>
> The approach in 1.x throughout the read/resolve/build phases was to
> register user errors with the monitor so that they can be analyzed
> later and reported to the user. There are some issues currently, in no
> particular order
>
> A/ often the messages don't have enough contextual information. We
> should be reporting
>
>  Contribution Name/Path to failing artifact/the error and what might
> be done to fix it
>
>     where path to failing artifact could be composite name/component
> name/service name etc.
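>
> (Sketch of what such an error record might carry - the class and field
> names below are made up for illustration:)
>
>     // One reported problem with full context attached.
>     public class ReportedProblem {
>         final String contributionName; // e.g. the contribution jar/URI
>         final String artifactPath;     // composite/component/service etc.
>         final String message;          // the error and how to fix it
>
>         public ReportedProblem(String contributionName,
>                                String artifactPath,
>                                String message) {
>             this.contributionName = contributionName;
>             this.artifactPath = artifactPath;
>             this.message = message;
>         }
>
>         @Override
>         public String toString() {
>             return contributionName + " / " + artifactPath
>                    + ": " + message;
>         }
>     }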
>
> B/ we tried an approach where processing continues after the first
> error is found in an attemp to report as many problems in a composite
> as possible. Not all of the code takes accoun of this and so results
> can be unpredictable. We should probably stop at the first SEVERE
> error rather than trying to continue
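>
> (Something like the fail-fast check below at the end of each phase,
> where Monitor and Problem are simplified stand-ins for the real types:)
>
>     import java.util.ArrayList;
>     import java.util.List;
>
>     class Monitor {
>         enum Severity { WARNING, SEVERE }
>
>         static class Problem {
>             final Severity severity;
>             final String description;
>             Problem(Severity severity, String description) {
>                 this.severity = severity;
>                 this.description = description;
>             }
>         }
>
>         private final List<Problem> problems = new ArrayList<Problem>();
>
>         void problem(Problem p) { problems.add(p); }
>
>         // Called after each read/resolve/build phase: carry on past
>         // warnings, but stop at the first phase that logged a SEVERE
>         // problem instead of ploughing on with a broken model.
>         void failOnFirstSevere() {
>             for (Problem p : problems) {
>                 if (p.severity == Severity.SEVERE) {
>                     throw new IllegalStateException(p.description);
>                 }
>             }
>         }
>     }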
>
> C/  There is a mix of exception and monitor calls in the various
> processing classes that need rationalizing as we are fixing A and B
>
> D/ We need to define a generic user exception to report the contents
> of the monitor
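>
> (For example, something along these lines - the name and shape are
> just a suggestion:)
>
>     import java.util.ArrayList;
>     import java.util.Collections;
>     import java.util.List;
>
>     // One generic user-facing exception that carries everything the
>     // monitor recorded, so a single throw reports all the problems.
>     public class ValidationException extends Exception {
>         private final List<String> problems;
>
>         public ValidationException(List<String> problems) {
>             super(problems.size()
>                   + " problem(s) found while processing the contribution");
>             this.problems = Collections.unmodifiableList(
>                 new ArrayList<String>(problems));
>         }
>
>         public List<String> getProblems() {
>             return problems;
>         }
>     }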
>
> E/ We need to define/identify which system exception(s) can be reported.
>
> F/ We could do with deconstructing the 1.x  itest/validation and, if
> possible, put the tests with the modules that report the error.
>
> G/ There a monitor utilities for reporting errors and warnings
> repeated all over the place that could do with being consolidated in
> one convenient place.
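>
> (i.e. replace the copies with one static helper, roughly like this -
> the Monitor interface below is a minimal stand-in, not the real one:)
>
>     interface Monitor {
>         void problem(String severity, String messageId, Object... params);
>     }
>
>     final class MonitorHelper {
>         private MonitorHelper() {}
>
>         // The null check mirrors the boilerplate currently repeated in
>         // each processor, where the monitor is optional.
>         static void error(Monitor monitor, String messageId,
>                           Object... params) {
>             if (monitor != null) {
>                 monitor.problem("ERROR", messageId, params);
>             }
>         }
>
>         static void warning(Monitor monitor, String messageId,
>                             Object... params) {
>             if (monitor != null) {
>                 monitor.problem("WARNING", messageId, params);
>             }
>         }
>     }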
>
> Simon
>
