@Dan: a little of both. It's been a while since I was into Jini; I've been off on
the OSGi front for too long and wasn't very impressed with it, so I'm coming
home to Jini/River and want to get going on it.

I think if we can get everything tuned up for Maven, we can handle both our
benchmarking needs and day-to-day unit testing.
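For instance, a parent POM along these lines could tie the modules together
(a rough sketch only; the module split and names such as river-benchmarks are
hypothetical):

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>org.apache.river</groupId>
      <artifactId>river-parent</artifactId>
      <version>2.2.0-SNAPSHOT</version>
      <packaging>pom</packaging>
      <modules>
        <!-- hypothetical split: real module boundaries would need discussion -->
        <module>river-platform</module>    <!-- net.jini.* public API -->
        <module>outrigger</module>         <!-- JavaSpaces implementation -->
        <module>qa-harness</module>        <!-- existing test harness as a library -->
        <module>river-benchmarks</module>  <!-- performance benchmarks, run on demand -->
      </modules>
    </project>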

jason



On Mon, Feb 28, 2011 at 9:41 PM, Dan Creswell <[email protected]> wrote:

> Can you explain what we would do with Maven?
>
> I can't tell if you're attempting to encourage work on another item or
> trying to solve the benchmark challenge.
>
>
> On 28 February 2011 07:31, Jason Pratt <[email protected]> wrote:
>
>> Any chance of getting Maven into this?
>>
>> jason
>>
>>
>>
>> On Mon, Feb 28, 2011 at 8:29 PM, Patricia Shanahan <[email protected]> wrote:
>>
>> > On 2/27/2011 11:19 PM, Peter Firmstone wrote:
>> >
>> >> Patricia Shanahan wrote:
>> >>
>> >>> On 2/23/2011 6:26 PM, Peter Firmstone wrote:
>> >>>
>> >>>> Patricia Shanahan wrote:
>> >>>>
>> >>>>> On 2/22/2011 12:16 AM, Peter Firmstone wrote:
>> >>>>>
>> >>>>>> Patricia Shanahan wrote:
>> >>>>>>
>> >>>>>>> I want to get going on some performance tuning, but believe it is
>> >>>>>>> best
>> >>>>>>> guided and controlled by well-organized benchmarks. To that end, I
>> >>>>>>> propose adding a place for benchmarks to the River structure.
>> >>>>>>>
>> >>>>>> ...
>> >>>
>> >>>> Each module would be considered a separate build, and its tests would
>> >>>> include both unit and integration tests. Other modules yours depends
>> >>>> on are treated like libraries; you wouldn't be concerned with their
>> >>>> implementation. The existing qa harness would become a separate
>> >>>> library module (similar to how junit or jtreg is a separate
>> >>>> component), and Jini Platform tests would be separated from
>> >>>> implementation tests. (We also need to consider how best to utilise
>> >>>> the discovery and join test kit.)
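For illustration, a module's POM could pull the qa harness in as an ordinary
test-scoped library, just as it would junit; the coordinates here are invented:

    <dependencies>
      <!-- hypothetical coordinates for the qa harness packaged as a library -->
      <dependency>
        <groupId>org.apache.river</groupId>
        <artifactId>qa-harness</artifactId>
        <version>1.0</version>
        <!-- test scope: needed at test time only, like junit or jtreg -->
        <scope>test</scope>
      </dependency>
    </dependencies>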
>> >>>>
>> >>>> If the changes are localised to your module, meaning the public API
>> >>>> doesn't change, the module behaves like an independent build: you
>> >>>> could simply duplicate the original module, modify it, then run
>> >>>> performance tests against both the new and original modules. The new
>> >>>> module would carry a different version number to reflect the
>> >>>> experimental changes.
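In Maven terms this might look like two artifacts differing only in version,
with the benchmarks run once against each (version numbers invented for
illustration):

    <!-- baseline module -->
    <dependency>
      <groupId>org.apache.river</groupId>
      <artifactId>outrigger</artifactId>
      <version>2.2.0</version>
    </dependency>

    <!-- experimental duplicate: same public API, modified implementation -->
    <dependency>
      <groupId>org.apache.river</groupId>
      <artifactId>outrigger</artifactId>
      <version>2.2.1-EXPERIMENTAL</version>
    </dependency>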
>> >>>>
>> >>>> If the changes involve changing the module's public API in a
>> >>>> non-backward-compatible manner, and this API is not part of net.jini.*
>> >>>> (which must remain backward compatible), then dependent modules can be
>> >>>> migrated to the new module over time. Proxies that utilise the new
>> >>>> module will need to use preferred class loading to ensure the correct
>> >>>> classes are loaded, and the module version number would be incremented
>> >>>> to show it is not backward compatible.
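Preferred class loading is declared in a META-INF/PREFERRED.LIST file inside
the download jar; a minimal sketch might look like the following (the class
name here is made up):

    PreferredResources-Version: 1.0

    Name: org/apache/river/outrigger/SpaceProxy.class
    Preferred: true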
>> >>>>
>> >>>
>> >>> How would you propose handling a case like outrigger.FastList?
>> >>>
>> >>> It is package-access only, so changing its interface to the rest of
>> >>> outrigger did not affect any public API, although several classes
>> >>> needed to be changed to handle the interface change.
>> >>>
>> >>> Patricia
>> >>>
>> >>>
>> >> Well, each module can have its own release schedule. In this case you'd
>> >> develop the Outrigger module, integration-test it, increment its
>> >> version, and release it. Since the public API hasn't changed, users can
>> >> update their Outrigger servers and add the new proxy jar archive to
>> >> their codebase servers, allowing Outrigger to evolve at a faster pace
>> >> than River's Jini Platform. This makes our release process much
>> >> quicker, with more frequent updates.
>> >>
>> >> It's a lot of work to release the whole platform before users can
>> >> download and test the new Outrigger, so having incremental module
>> >> releases makes a lot of sense.
>> >>
>> >> Each module can function in the same environment, coexisting with
>> >> previous module versions: since Outrigger is only an implementation of
>> >> JavaSpace, its implementation classes will exist in separate
>> >> ClassLoaders even within the same JVM.
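A minimal sketch of that isolation (the jar paths and class name are made up):

    import java.net.URL;
    import java.net.URLClassLoader;

    public class IsolationDemo {
        public static void main(String[] args) throws Exception {
            // Two versions of the same implementation jar, side by side.
            URLClassLoader v1 = new URLClassLoader(
                    new URL[] { new URL("file:lib/outrigger-2.2.jar") });
            URLClassLoader v2 = new URLClassLoader(
                    new URL[] { new URL("file:lib/outrigger-2.3.jar") });

            // Each loader defines its own copy of the class, so both
            // versions coexist in one JVM without clashing.
            Class<?> a = Class.forName("com.example.SpaceImpl", true, v1);
            Class<?> b = Class.forName("com.example.SpaceImpl", true, v2);

            System.out.println(a == b); // false: distinct runtime types
        }
    }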
>> >>
>> >
>> > The issue I'm trying to understand is not the release of production
>> > code, but how to organize the benchmarks that need to be run in order to
>> > decide whether to make a change. Depending on the benchmark result, the
>> > change may not happen at all.
>> >
>> > Patricia
>> >
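One possible organization, as a rough sketch: a benchmark module whose tests
time a common workload against both the original and the experimental version
of the code, with the result deciding whether the change goes ahead. All names
here are invented:

    // Times the same workload against two candidate implementations.
    public class FastListBenchmark {

        interface Candidate {        // common facade over both versions
            void run(int operations);
        }

        static long elapsedMillis(Candidate c, int ops) {
            c.run(ops);              // warm-up pass for the JIT
            long start = System.nanoTime();
            c.run(ops);
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) {
            // In practice these would drive the old and new FastList;
            // empty stubs stand in for them here.
            Candidate original = ops -> { };
            Candidate modified = ops -> { };
            System.out.println("original: " + elapsedMillis(original, 1_000_000) + " ms");
            System.out.println("modified: " + elapsedMillis(modified, 1_000_000) + " ms");
        }
    }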
>>
>
>
