core2 checkin reminder

2006-07-01 Thread Jim Marino
Just a reminder (since it was buried in a previous email  
thread)...When doing checkins on the sandbox core2 implementation,  
please run:


$ mvn -Psourcecheck

This will execute PMD and Checkstyle as part of the build.

Thanks,
Jim 





PMD problem?, was: core2 checkin reminder

2006-07-01 Thread Jeremy Boynes

On 6/30/06, Jim Marino [EMAIL PROTECTED] wrote:

Just a reminder (since it was buried in a previous email
thread)...When doing checkins on the sandbox core2 implementation,
please run:

$ mvn -Psourcecheck

This will execute PMD and Checkstyle as part of the build.



I ran into the same thing last night but was too tired to post to the thread.

I think the code that is causing PMD to barf is:
    protected static File findLoadLocation() {
        String location ...
        File locfile = new File(location);
        return locfile.getParentFile();
    }

which I don't have an issue with and which I think PMD is incorrectly
complaining about given the rule in question.

Is this a bug in PMD? If so, should we comment out the rule or change our code?
I have to admit I'm not very keen on changing code to work around
defects in a code checking tool.

--
Jeremy




Re: Using scenarios, was: Proposed approach for M2

2006-07-01 Thread Jim Marino


On Jul 1, 2006, at 12:07 AM, Jeremy Boynes wrote:


Jean-Sebastien Delfino wrote:
 1. Use scenarios to drive the M2 work
 Start a community discussion on the end to end scenarios that we want to support in M2.

 I'm thinking about concrete end to end scenarios that define the end user experience and the overall story going from development, build, package, deploy to runtime and administration of an SCA application.

snip

 Here are a few ideas of scenarios to initiate the discussion:
 - a scenario building your average web app with SCA
 - a scenario showing how to aggregate multiple services into new ones
 - mediation and routing/exchange scenarios
 - an application using publish/subscribe
 - building a whole system of services on a network
 - integration with an AJAX UI
 - what else? any thoughts?


On 6/30/06, Kevin Williams [EMAIL PROTECTED] wrote:

Sebastien,

This sounds great to me. You may have intended this, but I think that the scenarios should be implemented as we go, resulting in new unit tests, samples or sample apps by the time we are ready to release M2.

Also, I propose a scenario that involves data access and the transfer of a data graph between modules. A source module would get the graph using the DAS and pass it to a worker module. The graph would be modified by the worker and sent back to the source module with change history intact, to be synchronized with the database.

An inter-op scenario would be nice too.



One of the things that came out at the BOF at ApacheConEU was that we
are not doing a good job of communicating what SCA is all about. I
think having a bunch of scenarios like this will help us do that.

Another thing that came out was that it would help if we broke the
distribution down into smaller pieces - for example, making SCA, SDO
and DAS available as individual releases rather than bundling them all
together, which gave users the false impression that they were all
tightly coupled.

I think we need a lot more information on each scenario though - at
least to the level of detail Kevin provided. For example,

 - a scenario building your average web app with SCA

I'm not sure what "your average web app" is - are we talking JSP
taglibs, working with a framework such as WebWork, or the integration
with something like Spring? Are we talking about just accessing
services, or both producing and consuming? Are we talking about
accessing a remote service or wiring local application components with
SCA? Are we talking portable web app with Tuscany bundled, or how it
works in an SCA-enabled container?

I'd like to suggest we capture these on the wiki in enough detail that
a user new to the project would be able to understand what we are
talking about. The scenarios can then become illustrative samples of
what SCA is about and how it can be realized with Tuscany.

I don't want the scenarios to become the be-all and end-all though. We
tried that with M1 and IMO it failed miserably. We scrambled to
implement features and ended up with a brittle codebase that cracked
when we needed to make significant changes. Testing focused on seeing
if a scenario worked and we ended up with poor coverage across the
codebase.

Instead I think we need to define additional, finer-grained scenarios
that cover the components of the system. For example, different ones
for SCA, SDO and DAS, and, digging deeper, different ones for
web-services, Spring, static SDOs, non-relational DAS and so on.
I think this is really important and it's a problem IMO not just with  
Tuscany but also how the specs are presented. A lot of the feedback  
I've been getting is that it appears all of these technologies are  
tightly coupled, i.e. if one wants to use SCA then they must buy into  
a bunch of other things, such as SDO or DAS. This could also be said  
in reverse: if one wants to use SDO or DAS, then they need to use an  
SCA runtime. Obviously this is not the case (the SCA specs do not  
require SDO and vice versa) and I believe we really need to push this  
a la carte approach. For example, if I want to use the core as an  
embedded runtime and bootstrap it directly with no extensions, I  
should be able to do it. Even SCA itself is a la carte. For example, I  
may want to use assembly with Spring and no policy.


Basically, I'd like us from the SCA side to focus on scenarios which  
demonstrate SCA. We should have integration scenarios with other  
technologies such as SDO, DAS, JPA, JAXB, JSF, etc. but it would be  
nice if we could segment them so people can first get familiar with  
SCA and then choose the direction they want to go with ancillary  
technologies.


My other, more practical consideration is that splitting things up makes it easier for new people to come on board and select an area to work in.



Basically, there are a lot of different scenarios for SDO on its own;
we don't need to matrix them all into SCA, just pick a few key ones
that help illustrate SCA concepts.



At the 

Re: PMD problem?, was: core2 checkin reminder

2006-07-01 Thread Jim Marino

I'd say change PMD.


On Jul 1, 2006, at 1:04 AM, Jeremy Boynes wrote:


On 6/30/06, Jim Marino [EMAIL PROTECTED] wrote:

Just a reminder (since it was buried in a previous email
thread)...When doing checkins on the sandbox core2 implementation,
please run:

$ mvn -Psourcecheck

This will execute PMD and Checkstyle as part of the build.



I ran into the same thing last night but was too tired to post to  
the thread.


I think the code that is causing PMD to barf is:
    protected static File findLoadLocation() {
        String location ...
        File locfile = new File(location);
        return locfile.getParentFile();
    }

which I don't have an issue with and which I think PMD is incorrectly
complaining about given the rule in question.

Is this a bug in PMD? If so, should we comment out the rule or  
change our code?

I have to admit I'm not very keen on changing code to work around
defects in a code checking tool.

--
Jeremy








Re: Support for callbacks

2006-07-01 Thread Jim Marino

Hi Ignacio,

Let's try IRC, perhaps Monday's chat? Other comments inline...

On Jun 30, 2006, at 1:30 PM, Ignacio Silva-Lepe wrote:


Apologies Jeremy, didn't mean to exclude people, just trying
to expedite the discussion.

The first basic issue I see is how to incorporate callbacks as defined in the CI spec in particular, and bi-directional interfaces in general, into the Tuscany architecture. Depending on how closely a RuntimeWire is supposed to correspond to an SCA wire, it seems like one way to incorporate a callback is to extend InboundWire to include an OutboundInvocationChain, and OutboundWire to include an InboundInvocationChain. That is, a wire would include a 'reverse' pair of invocation chain ends to account for a callback. With that in place, it seems feasible to re-use WireInvocationHandler and TargetInvoker in a similar fashion to actually perform the callback invocation. Are there any subtle (or not so subtle) gotchas in this that I am overlooking?
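
A minimal sketch of that 'reverse pair' idea, just to make the proposal concrete; the callback accessors and the placeholder chain types below are illustrative, not the actual Tuscany SPI:

import java.lang.reflect.Method;
import java.util.Map;

// Placeholder declarations so the sketch stands on its own; the real chain
// types live in the Tuscany spi module.
interface OutboundInvocationChain {}
interface InboundInvocationChain {}

// Hypothetical additions to the wire contracts: each wire also carries a
// 'reverse' set of chains, keyed by callback method, used only for callbacks.
interface InboundWire {
    // forward (request) chains omitted; this is the callback side only
    Map<Method, OutboundInvocationChain> getCallbackInvocationChains();
}

interface OutboundWire {
    Map<Method, InboundInvocationChain> getCallbackInvocationChains();
}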


I was thinking there would be a couple of things: a system transport service and a conversational scope container. The system transport service would listen for callbacks. That service would dispatch and invoke a component, which in turn would ask its scope container for the component implementation instance to dispatch to.


Not sure if I follow. Is the system transport service intended as an alternative for a reverse invocation chain pair?

Yes. The system service would be a transport listener which would pick up the callback invocation off of a wire. The callback invocation would be sent from the proxy injected into the target as described below. Do you think it would help if we outlined several scenarios, e.g. a stateless callback done in the same composite, a stateless callback done across remote boundaries, a stateful callback done in the same composite, and a stateful callback done across remote boundaries? I was thinking we could sketch out what happens and then map it down to the core.


If I understand correctly, would a system service transport use a low-level communication mechanism, like a socket for instance? This does not seem like an appropriate approach for a local scenario,

Right, for the local scenario, I was thinking the callback instance would be put on the thread-local context and the proxy would access it from there, as opposed to going out over a socket and back in through a listener. Basically, it would be an optimization of the remote case. I think we can further optimize things depending on scopes, e.g. if the callback scope is module, we could possibly avoid thread-local storage and have the proxy hold on to an instance directly.

but I am really guessing about how such a listener would pick up a callback invocation if it is not via the architected RuntimeWire/InvocationChain mechanism. On the other hand, you do say the listener would pick up the callback invocation off of a wire, which I'm not sure I follow either.

In the remote case, the target proxy would perform the invocation over a particular transport, which the listener would be listening on. The callback invocation would then be handled like any component invocation.
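
To make the local-case optimization concrete, here is a rough sketch (the class names are invented for illustration; this is not the actual core code): the forward invocation stashes the client's callback instance in a thread-local slot, and the callback proxy injected into the target dispatches to whatever it finds there, while a remote variant would instead push the invocation out over a transport to the listener service.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;

// Thread-local slot holding the callback target for the current request.
final class CallbackContext {
    private static final ThreadLocal<Object> CURRENT = new ThreadLocal<Object>();

    static void set(Object callback) { CURRENT.set(callback); }
    static Object get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

// Handler behind the callback proxy injected into the target: locally it just
// invokes the instance stashed on the thread by the forward invocation.
final class LocalCallbackHandler implements InvocationHandler {
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        return method.invoke(CallbackContext.get(), args);
    }
}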


Admittedly, using a 'reverse' pair of invocation chains does not seem like a very orthodox approach, but given that the SCA architecture does not define separate reference and service elements for a callback (i.e., these seem to be bundled into the forward reference and service only in reverse), it looks like it is up to the Tuscany architecture to supply a sensible design. As an alternative, a separate RuntimeWire instance could be introduced for a callback, with corresponding outbound and inbound ends, but this would not correspond that closely to its SCA counterpart.

I think part of the problem may be that wires in SCA are bidirectional while in Java a reference pointer is unidirectional. We could look to try and model this with the approach you are proposing since it may be closer to the bidirectional nature of wires. Maybe on IRC we can come up with the scenarios and then outline the two approaches (we can post the transcript)? If we have difficulty on IRC due to the complexity of the topic, we may have to do a call and we could post to the list a summary of what was discussed.


My question about gotchas had more to do with trying to use a WireInvocationHandler (e.g., JDKOutboundInvocationHandler) as the object called by the callback proxy injected into the target.

I'd like to talk about this more since I'm not sure I'm getting everything (email is difficult).

At first glance, this seems feasible, even if we are performing an outbound invocation on an InboundWire and the corresponding inbound invocation of the client happens from an OutboundWire. Is this reversal the reason why a transport listener is a better approach in your 

Which codebase?, was: Proposed approach for M2

2006-07-01 Thread Jeremy Boynes

Oh look, there's an elephant in the sandbox.

On 6/30/06, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

2. Stage the assembly of our M2 runtime.
I propose that we start a fresh stream for M2 and  build the runtime through
baby steps, in parallel with the scenario work.


When I tried to make substantial changes to M1, I ran into a bunch of
problems due to the fragility of its code - just look at
AbstractCompositeContext for example (or if you want to hang on to
breakfast, don't). To avoid those it was easier to start a fresh
stream and pull in pieces from M1, refactoring along the way to reduce
the fragility. Jim and then others joined in along the way leading to
what we have now. The thought of starting over yet again is not very
appealing.

Instead, I suggest we go ahead with what was suggested on IRC three
weeks ago and move the code we have now out of the sandbox and into a
branch. Or perhaps, given there has been no development on the trunk
recently (since mid-May), go ahead and just replace trunk itself.

That gives us something to start from making it easier to support the
high-level scenarios that we come up with. We can incrementally
improve on that code based on what we find using and working with it,
starting by taking a look at the suggestions that you made.

--
Jeremy




Re: PMD problem?, was: core2 checkin reminder

2006-07-01 Thread Jeremy Boynes

On 7/1/06, Jim Marino [EMAIL PROTECTED] wrote:

I'd say change PMD.



Do you mean comment out the rule or fix the bug?
The latter is a better solution but I'm hoping you mean the first :-)

--
Jeremy




Metadata model (and relationship to STP, SDO and DAS)

2006-07-01 Thread Jeremy Boynes

Jean-Sebastien Delfino wrote:
- A simpler metadata model (the recursive model is much simpler than
the 0.9 spec, this is a great opportunity for us to simplify our
implementation)

I was chatting with Oisin at ApacheConEU about the model that they
were using in the STP project. They currently have a model of the 0.9
spec and he was wondering if they could leverage the changes that we
have made to support the 1.0 model.

Having a simple POJO-based model was appealing to both of us due to
flexibility that it provided and the accessibility that it provided to
developers. You didn't need special tooling or codegen to work with it
(although tooling could be used if you had it) which made it open to
the community.

We recognized that the two projects had some different requirements.
In a runtime situation you don't want to be digging around a model on
every request (as the model changes very infrequently compared to the
number of requests you process). However, a tooling environment is all
about manipulating the model so you want UI components to be notified
for every change that is made.

Our thought was that if we captured metadata about the relationships
in the model, the tooling runtime would be able to monitor them and
notify components as needed.  There was a lot of work done in JSR-220
for this and we were wondering about reusing their relationship
annotations.
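
For example (a sketch only; the model class names here are invented, and whether the JSR-220 annotations fit the real assembly model would need to be verified), the relationship metadata might look like this:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Illustrative POJO model classes; the JSR-220 relationship annotations make
// the containment explicit so tooling can watch it and a JPA provider can
// persist the model directly.
@Entity
class Composite {
    @Id
    private String name;

    @OneToMany(mappedBy = "composite", cascade = CascadeType.ALL)
    private List<ComponentDefinition> components = new ArrayList<ComponentDefinition>();

    public List<ComponentDefinition> getComponents() { return components; }
}

@Entity
class ComponentDefinition {
    @Id
    private String name;

    @ManyToOne
    private Composite composite;
}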

This would give us a couple of interesting benefits. Firstly, the
model would be directly persistable to a database using any JPA
implementation. Secondly, the same model (POJOs) could be used in the
tool environment by generating an ECore representation from the
embedded metadata (I'm not sure quite what that means, I'm channeling
Oisin here). Thirdly, the same model could be used to create SDO type
definitions from Java classes allowing us to jump-start that work.

--
Jeremy




REMINDER: Tuscany weekly IRC Chat on Monday, July 3rd at 15:30GMT

2006-07-01 Thread ant elder

Hi folks!

This is a reminder that the weekly Tuscany developer chat will be
occurring on Monday, July 3, at 15:30 GMT, 16:30 BST, 08:30am PDT,
11:30am EDT, 21:00 Bangalore

The chat takes place on the freenode IRC network, (use server
irc.freenode.net), on channel #tuscany, and is scheduled to last one
hour, though it may run longer.  Please join us!

If you need an IRC client for Windows, check out http://www.mirc.com,
and http://www.mirc.com/links.html has some links to clients for other
OS's.

Thanks,

  ...ant


SPI modularity

2006-07-01 Thread Jeremy Boynes

Jean-Sebastien Delfino wrote:
- Modularity, building our runtime in a more modular way, with more but
simpler modules, clean SPI modules with only interfaces, and decoupling the
core and the Java component implementation type / container.
- Simpler SPIs, covering all aspects of the cycle (development, build,
deployment + install, runtime, admin etc.)

There's several things here conflated together - I propose we tackle
them separately.

If you define runtime as a running SCA environment, I think we have a
good start on a modular approach. There are bound to be tweaks but I
think we are going in the right direction.

We have a basic core with a well-defined SPI into which we can add
extensions without needing any change to the core. We also have ways
in which extensions can co-operate, providing new extension points
themselves into which other things can plug. spi and core2 combined
are about 12000 lines of code and the binary is just over 300K in size
- this is not very big really.

Having said that, I agree that we can restructure the SPI module to
make things a little clearer. I think some of the confusion now comes
because it contains both runtime and deployment interfaces. We agreed
a long time ago that we wanted the runtime to be self-contained and
independent of the deployment mechanism used. Bearing that in mind, we
should be able to separate the runtime part of the SPI from the
deployment part.

Taking a quick swag at the top-level packages in the SPI I would
propose we create two new ones, deployment and runtime, and move
things around as follows:

runtime:
   bootstrap, component, event, host, monitor, policy, services, wire

deployment:
   annotation, builder, deployer, loader, model

Assuming this works and all dependencies point from deployment to
runtime we could then split the SPI module into two. I'd like to
re-evaluate before doing that though as I think a typical usage would
always have both runtime and deployment code and would typically
always need both modules. Making people deal with two things just
seems like unneeded complexity.

Finally, I think we need to be clear about what is an SPI and what is
an API. To me, an SPI is something used by a system to expand its
functionality, it's a view from the inside looking out; an API is used
by something else to manipulate a system, it's a view from the outside
looking in.

With that in mind I think some of the SPIs you mention above are
really APIs. Things like deployment, install, admin (and I'd add
management, monitoring) are interfaces the runtime would provide to
allow outside entities to manipulate it.

I'd propose that this may be a better way to slice up the current spi
module. For example, we could consider splitting out the bootstrap and
deployer packages into a new api module. The intention would be to
provide interfaces used by external actors without exposing them to
the mechanisms used to extend the runtime.

--
Jeremy




Re: REMINDER: Tuscany weekly IRC Chat on Monday, July 3rd at 15:30GMT

2006-07-01 Thread ant elder

OK, it turns out the 3rd and 4th are holidays in the US and Canada, so likely a lot of folks won't be able to make this chat. Let's go ahead anyway, and the first topic could be whether we should schedule another chat for later in the week.

  ...ant

On 7/1/06, ant elder [EMAIL PROTECTED]  wrote:


Hi folks!

This is a reminder that the weekly Tuscany developer chat will be
occurring on Monday, July 3, at 15:30 GMT, 16:30 BST, 08:30am PDT,
11:30am EDT, 21:00 Bangalore

The chat takes place on the freenode IRC network, (use server
irc.freenode.net), on channel #tuscany, and is scheduled to last one
hour, though it may run longer.  Please join us!

If you need an IRC client for Windows, check out http://www.mirc.com,
and http://www.mirc.com/links.html has some links to clients for other
OS's.

Thanks,

   ...ant




Re: PMD problem?, was: core2 checkin reminder

2006-07-01 Thread Jim Marino

Yea sorry, comment out the rule.

Jim

On Jul 1, 2006, at 1:17 AM, Jeremy Boynes wrote:


On 7/1/06, Jim Marino [EMAIL PROTECTED] wrote:

I'd say change PMD.



Do you mean comment out the rule or fix the bug?
The latter is a better solution but I'm hoping you mean the first :-)

--
Jeremy








Re: SPI modularity

2006-07-01 Thread Jim Marino


On Jul 1, 2006, at 2:42 AM, Jeremy Boynes wrote:


Jean-Sebastien Delfino wrote:
- Modularity, building our runtime in a more modular way, with more but simpler modules, clean SPI modules with only interfaces, and decoupling the core and the Java component implementation type / container.
- Simpler SPIs, covering all aspects of the cycle (development, build,
deployment + install, runtime, admin etc.)

There's several things here conflated together - I propose we tackle
them separately.

If you define runtime as a running SCA environment, I think we have a
good start on a modular approach. There are bound to be tweaks but I
think we are going in the right direction.

We have a basic core with a well-defined SPI into which we can add
extensions without needing any change to the core. We also have ways
in which extensions can co-operate, providing new extension points
themselves into which other things can plug. spi and core2 combined
are about 12000 lines of code and the binary is just over 300K in size
- this is not very big really.

I believe a good portion of that 300K is related to the Geronimo  
WorkManager dependencies. Since WorkManager can be implemented as a  
thin facade over the JDK 5 concurrency libraries, we should look at  
implementing a simple one and eliminating the dependencies. Once we  
do that, I think we will only need StAX, which is slated to become  
part of the JDK anyway.
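
Something along these lines is probably all we need (a simplified sketch only; the real commonj/Geronimo WorkManager contract is richer, with work listeners and scheduling options):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal WorkManager-style facade over the JDK 5 concurrency utilities.
interface SimpleWorkManager {
    void scheduleWork(Runnable work);
    void shutdown();
}

final class JdkWorkManager implements SimpleWorkManager {
    private final ExecutorService executor;

    JdkWorkManager(int poolSize) {
        this.executor = Executors.newFixedThreadPool(poolSize);
    }

    public void scheduleWork(Runnable work) {
        // Hand the unit of work to the shared thread pool.
        executor.execute(work);
    }

    public void shutdown() {
        executor.shutdown();
    }
}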



Having said that, I agree that we can restructure the SPI module to
make things a little clearer. I think some of the confusion now comes
because it contains both runtime and deployment interfaces. We agreed
a long time ago that we wanted the runtime to be self-contained and
independent of the deployment mechanism used. Bearing that in mind, we
should be able to separate the runtime part of the SPI from the
deployment part.

Taking a quick swag at the top-level packages in the SPI I would
propose we create two new ones, deployment and runtime, and move
things around as follows:

runtime:
   bootstrap, component, event, host, monitor, policy, services, wire

deployment:
   annotation, builder, deployer, loader, model

What does annotation do? On a related note, I think we also have a  
POJO extension model for things like annotation processors. I don't  
think this can be easily factored into an SPI below core since it  
will drag in a whole bunch of things (either in the basic SPI package  
or a special POJO SPI package). Any ideas would be  
great...otherwise we may just say that type of extension is a low-level one done off core.



Assuming this works and all dependencies point from deployment to
runtime we could then split the SPI module into two. I'd like to
re-evaluate before doing that though as I think a typical usage would
always have both runtime and deployment code and would typically
always need both modules.
I don't see that separating these into two projects is worth the added  
complexity. As long as we have a clear packaging structure, IMO it is  
much easier to deal with one extension jar.



Making people deal with two things just
seems like unneeded complexity.

Finally, I think we need to be clear about what is an SPI and what is
an API. To me, an SPI is something used by a system to expand its
functionality, it's a view from the inside looking out; an API is used
by something else to manipulate a system, it's a view from the outside
looking in.

With that in mind I think some of the SPIs you mention above are
really APIs. Things like deployment, install, admin (and I'd add
management, monitoring) are interfaces the runtime would provide to
allow outside entities to manipulate it.

This is a good point. I think it also entails a different design approach. For example, an SPI is intended for experienced, systems-level programmers and therefore can sacrifice a bit of simplicity for power and flexibility.



I'd propose that this may be a better way to slice up the current spi
module. For example, we could consider splitting out the bootstrap and
deployer packages into a new api module. The intention would be to
provide interfaces used by external actors without exposing them to
the mechanisms used to extend the runtime.

--
Jeremy








Re: Which codebase?, was: Proposed approach for M2

2006-07-01 Thread Jim Marino


On Jul 1, 2006, at 1:17 AM, Jeremy Boynes wrote:


Oh look, there's an elephant in the sandbox.

On 6/30/06, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

2. Stage the assembly of our M2 runtime.
I propose that we start a fresh stream for M2 and build the runtime through baby steps, in parallel with the scenario work.


When I tried to make substantial changes to M1, I ran into a bunch of
problems due to the fragility of its code - just look at
AbstractCompositeContext for example (or if you want to hang on to
breakfast, don't). To avoid those it was easier to start a fresh
stream and pull in pieces from M1, refactoring along the way to reduce
the fragility. Jim and then others joined in along the way leading to
what we have now. The thought of starting over yet again is not very
appealing.

Probably not surprisingly I agree, as the thought of starting over is  
not very appealing, especially in a piecemeal fashion. I feel we are  
at serious risk of losing momentum if we start from scratch,  
particularly since a number of things are already underway using  
core2 (e.g. Spring integration, the deployer, data transformation,  
the Celtix binding, support for conversations, OSGi integration,  
Groovy support).


Also, I'm not clear on how we would merge M1 and core2. When we  
created core2, we pulled in pieces of M1 where appropriate. While  
some things could be brought over (e.g. parts of the invocation  
chain, annotation processing, some of the loaders, autowire, etc.) a  
lot just couldn't. For example, we pretty much re-wrote the scope  
containers and wound up with a much cleaner design.  Starting over  
like this seems to me to be at least two months of work assuming  
people that have contributed to the core in the past and are familiar  
with it are willing to sign up for this. In contrast, I'd like to be  
in a position sometime this month where we are comfortable having a  
modular core release that people can build extensions with.  
Sebastien, perhaps you could outline which parts of M1 would be  
merged with core2 so I can understand this better?


I'd much prefer we do what Jeremy suggested below and improve core2  
directly. In terms of a codebase that others can get up to speed  
with, it seems to me the best way to do this is not by rewriting it  
piecemeal, but by having a lot of extensions and documentation that  
people can refer to and gradually work their way into core. I believe  
the core is no different than other middleware such as Spring,  
Geronimo, JBoss, Hibernate, etc. in that the nature of the problem  
itself is complex and that complexity is proportionally reflected in  
the architecture but not through to the end-user programming model. I  
also believe that this complexity is best dealt with through  
modularity and layers people can start with and gradually work into  
the core (assuming they even want to).



Instead, I suggest we go ahead with what was suggested on IRC three
weeks ago and move the code we have now out of the sandbox and into a
branch. Or perhaps, given there has been no development on the trunk
recently (since mid-May), go ahead and just replace trunk itself.

That gives us something to start from making it easier to support the
high-level scenarios that we come up with. We can incrementally
improve on that code based on what we find using and working with it,
starting by taking a look at the suggestions that you made.



--
Jeremy








Re: SPI modularity

2006-07-01 Thread Jeremy Boynes

On 7/1/06, Jim Marino [EMAIL PROTECTED] wrote:

I believe a good portion of that 300K is related to the Geronimo
WorkManager dependencies. Since WorkManager can be implemented as a
thin facade over the JDK 5 concurrency libraries, we should look at
implementing a simple one and eliminating the dependencies. Once we
do that, I think we will only need StAX, which is slated to become
part of the JDK anyway.



The 300K was the size of the two Tuscany jars - I didn't factor in the
size of dependencies.

I agree we should have another look at using Geronimo's
implementation. I know it was my suggestion to use it but it pulls in
quite a bit given what it does vs. what's in the JRE.



What does annotation do? On a related note, I think we also have a
POJO extension model for things like annotation processors. I don't
think this can be easily factored into an SPI below core since it
will drag in a whole bunch of things (either in the basic SPI package
or a special POJO SPI package). Any ideas would be
great...otherwise we may just say that type of extension is a low-
level one done off core.



I was thinking the annotation processing stuff would end up there and
that would be part of deployment. Thinking about it more, we would want the annotations used by components to decorate themselves to be in some API package.

--
Jeremy




[PATCH] Need to upgrade maven surefire plugin to version 2.2

2006-07-01 Thread Raymond Feng



Hi,

I hit a bug in the surefire plugin: http://jira.codehaus.org/browse/MSUREFIRE-81.
The issue was initially reported by Dan. :-)

Here's the patch for the pom.xml.

Please review and apply.

Thanks,
Raymond
Index: pom.xml
===================================================================
--- pom.xml (revision 418508)
+++ pom.xml (working copy)
@@ -141,7 +141,7 @@
             <plugin>
                 <groupId>org.apache.maven.plugins</groupId>
                 <artifactId>maven-surefire-plugin</artifactId>
-                <version>2.1.3</version>
+                <version>2.2</version>
                 <configuration>
                     <includes>
                         <include>**/*TestCase.java</include>


Re: [PATCH] Need to upgrade maven surefire plugin to version 2.2

2006-07-01 Thread Jeremy Boynes

Thanks Raymond, applied in r418512
--
Jeremy

On 7/1/06, Raymond Feng [EMAIL PROTECTED] wrote:



Hi,

I hit a bug in the surefire plugin:
http://jira.codehaus.org/browse/MSUREFIRE-81. The issue was
initially reported by Dan. :-)

Here's the patch for the pom.xml.

Please review and apply.

Thanks,
Raymond








Re: Brainstorm not to expose INTERNAL Properties

2006-07-01 Thread Fuhwei Lwo
Option 1 would only work if the SDO implementation never needs to expose any internal EMF properties. I am not sure whether there is a possibility that we need to map the internal EMF properties to something so that SDO users can control some EMF-like behavior.

What kind of internal EMF properties are we talking about here? Can you share them? Thanks.

Best regards,

Fuhwei Lwo

Yang ZHONG [EMAIL PROTECTED] wrote:  The current SDO implementation is based on EMF, and EMF may produce INTERNAL Properties beyond those the user designates.
It may be nice for SDO not to expose the INTERNAL Properties.

Here are some thoughts JUST as a START; it's much more important that YOU brainstorm, please.
Frank has provided options such as:
1. SDOXSDEcoreBuilder builds user Properties at the beginning of the getEStructuralFeatures() List and INTERNAL Properties at the end of getEStructuralFeatures(). SDO can simply hide the high-index INTERNAL Properties without too much performance sacrifice.
2. SDO maintains a separate List from getEStructuralFeatures().
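
A sketch of how option 1 could be surfaced on the SDO side (names here are illustrative, not the actual implementation): if user-declared Properties always come first in the underlying feature list, the visible list is just a sub-list and the high-index INTERNAL Properties never show up.

import java.util.Collections;
import java.util.List;

final class PropertyListHelper {

    // allFeatures: the full EMF feature list, user Properties first.
    // userCount:   how many leading entries were declared by the user.
    static <T> List<T> userVisibleProperties(List<T> allFeatures, int userCount) {
        // Sub-list view, so no copying and no index translation is needed.
        return Collections.unmodifiableList(allFeatures.subList(0, userCount));
    }
}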

I agree with either of the two options, and feel there may be another option: extending getEAllStructuralFeatures() to change the property List in effect.

I really hope to see your comments on any of these options, or even more options from you.

Whichever option we pick, an interesting thing to consider is the order of INTERNAL Properties from SUPER typeS and user Properties from the subtype. I really hope to see a good solution to that concern so that List decoration (index translation) can be avoided.

Thanks in advance.

-- 

Yang ZHONG



Re: SDO Samples

2006-07-01 Thread Fuhwei Lwo
Frank,

As a user learning SDO by example, I prefer one file per example, so I am not forced to understand the example files' directory structure and dependencies.

Regards,

Fuhwei Lwo
  
Frank Budinsky [EMAIL PROTECTED] wrote:  Robbie,

Looks pretty good to me. I wonder if someone from the SCA team can comment 
on consistency of approach with other Tuscany and Apache samples. What do 
others think about using things like SdoSampleConstants? What about the 
shouldUseDataGraph() call to query the user to choose from two ways to run 
it?

For this:

// TODO: do you need to do this ?
employees.add(newEmployee);

The answer is no. You would only need to add the newEmployee if you 
created it by calling DataFactory.create(). Since you created it by 
calling create() on the parent object, it's already attached, so this call 
to add will be a NOOP.
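
Roughly what the two cases look like with the SDO API (the type URI, type names and property names below are placeholders, not the ones from the actual sample):

import commonj.sdo.DataObject;
import commonj.sdo.helper.DataFactory;
import java.util.List;

public class CreateEmployeeExample {
    public static void main(String[] args) {
        DataObject department = DataFactory.INSTANCE.create("http://example.com/hr", "Department");

        // Created via the parent: the new object is already attached to the
        // containment list, so an extra add() would be a no-op.
        DataObject attached = department.createDataObject("employees");
        attached.setString("name", "John Jones");

        // Created via the DataFactory: detached until explicitly added.
        DataObject detached = DataFactory.INSTANCE.create("http://example.com/hr", "Employee");
        detached.setString("name", "Mary Smith");
        List employees = department.getList("employees");
        employees.add(detached);
    }
}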

Frank.

Robbie J Minshall  wrote on 06/29/2006 02:48:15 PM:

 
I am working on some samples for the SDO specification. Any thoughts or comments on the following would be appreciated.

The first point of contact with SDO may or may not be the specification or an introductory paper; regardless of the first point of contact, I would hope that the samples are complete and usable enough that they can be used in close conjunction with the spec, SDO papers, or on their own. With this in mind it is very, very important that the generated documentation (I think the project site is a good candidate here) include a very consumable tutorial as well as a good outline of the sample packaging and usage, so that the user can either use the samples on their own or in reference to the paper or specification in their hand.

Currently the draft samples have a package that includes the code snippets from throughout the specification, so that the user can read each section and run or modify the very simple code snippet as it appears in the 2.0 specification (these are essentially primitives). The next package includes working samples from the Examples section of the 2.0 specification. The code in these sections is as close to the code in the specific example as possible, with differences highlighted (the sample you reviewed is one example of this). The third package includes working samples from other sources, such as papers, with the intention that a user could read the paper and then modify and execute the working sample appropriately.

The current draft samples can simply be executed as a standalone Java application from the command line or from within Eclipse.

The following is a sample of the style the full examples are written in. I am going to concentrate comments on this single example, then complete the others in the same manner. If people have comments or suggestions please let me know:

thanks,
Robbie Minshall. [attachment sample.zip deleted by Frank Budinsky/Toronto/IBM]
