Re: Classloading code in core contribution processing

2008-02-25 Thread Rajini Sivaram
On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

  Jean-Sebastien Delfino wrote:
  Great to see a *test* case for cycles, but my question was: Do you
  have a *use* case for cycles and partial packages right now or can
 it  be fixed later?

  Rajini Sivaram wrote:
  No, I don't have a use case, at least not an SCA one. But there are plenty
  of them in OSGi - e.g. Tuscany modules cannot run in OSGi without support
  for split packages. Of course you can fix it later.

 I'm not arguing for or against fixing it now or later, I'm trying to get
 the real use case to make a decision based on concrete grounds. Can you
 point me to your OSGi use cases, or help me understand Tuscany modules
 cannot run in OSGi without support for split packages?


 Tuscany node and domain code are split into three modules each, for API, SPI
and implementation - e.g. tuscany-node-api, tuscany-node and tuscany-node-impl.
The API module defines a set of classes in org.apache.tuscany.sca.node and
the SPI module extends this package with more classes. So the package
org.apache.tuscany.sca.node is split across tuscany-node-api and
tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
entries for Tuscany modules, we would get three OSGi bundles corresponding
to the node modules. And the API and SPI bundles would have to specify that
they use split packages. It would obviously have been better if API and SPI
used different packages, but the point I am trying to make is that splitting
packages across modules is not as crazy as it sounds, and split packages do
appear in code written by experienced programmers.
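
As a hedged illustration (these manifest fragments are hypothetical, not
taken from the actual Tuscany build), the usual OSGi convention for split
packages is a mandatory matching attribute on each partial export, plus
Require-Bundle on the consumer, since Import-Package can only be wired to a
single exporter:

tuscany-node-api manifest:
    Export-Package: org.apache.tuscany.sca.node; split="api"; mandatory:="split"

tuscany-node (SPI) manifest:
    Export-Package: org.apache.tuscany.sca.node; split="spi"; mandatory:="split"

consumer manifest:
    Require-Bundle: tuscany-node-api, tuscany-node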

IMO, supporting overlapping package imports/exports is more important with
SCA contributions than with OSGi bundles, since SCA contributions can specify
wildcards in import.java/export.java. E.g. if you look at packaging
tuscany-contribution and tuscany-contribution-impl, where
tuscany-contribution-impl depends on tuscany-contribution, there is no clear
naming convention to separate the two modules using a single import/export
statement pair. So if I could use wildcards, the simplest option that would
avoid separate import/export statements for each subpackage (as required in
OSGi) would be to export org.apache.tuscany.sca.contribution* from
tuscany-contribution and import org.apache.tuscany.sca.contribution* in
tuscany-contribution-impl. The sub-packages themselves are not shared but
the import/export namespaces are. We need to avoid cycles in these cases.
Again, there is a way to avoid sharing package spaces, but it is simpler to
share, and there is nothing in the SCA spec which stops you sharing packages
across contributions.
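
To make that concrete, here is a sketch of the two contribution metadata
files; the wildcard syntax is hypothetical, since (as discussed later in
this thread) it is not clear whether Tuscany actually supports wildcards:

tuscany-contribution's META-INF/sca-contribution.xml:
    <contribution xmlns="http://www.osoa.org/xmlns/sca/1.0">
        <export.java package="org.apache.tuscany.sca.contribution*"/>
    </contribution>

tuscany-contribution-impl's META-INF/sca-contribution.xml:
    <contribution xmlns="http://www.osoa.org/xmlns/sca/1.0">
        <import.java package="org.apache.tuscany.sca.contribution*"/>
    </contribution>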

I don't think the current model resolver code which recursively searches
exporting contributions for artifacts is correct anyway - even for artifacts
other than classes. IMO, when an exporting contribution is searched for an
artifact, it should only search the exporting contribution itself, not its
imports. And that would avoid cycles in classloading. I would still prefer
not to intertwine classloading and model resolution, because that would
unnecessarily make classloading stack traces, which are complex anyway,
even more complex than they need to be. But at least if it works all the
time, rather than running into stack overflows, I might not have to look at
those stack traces.
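
A minimal sketch of the non-recursive lookup I mean (the Contribution and
resolver method shapes here are hypothetical, not the real Tuscany SPI):

    Object resolve(Contribution c, String artifactName) {
        // First look in the contribution itself
        Object artifact = c.resolveLocally(artifactName);
        if (artifact != null) {
            return artifact;
        }
        // Then, for each matching import, search only the exporting
        // contribution itself, NOT its imports - resolution never
        // recurses, so cycles cannot cause stack overflows
        for (Contribution exporter : c.matchingExporters(artifactName)) {
            artifact = exporter.resolveLocally(artifactName);
            if (artifact != null) {
                return artifact;
            }
        }
        return null;
    }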



and this will convince me to help fix it now :) Thanks.


It is not broken now - you have to break it first and then fix it :-).


 --
 Jean-Sebastien





-- 
Thank you...

Regards,

Rajini


Re: Classloading code in core contribution processing

2008-02-25 Thread Simon Laws
Hi Rajini

just back in from vacation and catching up. I've put some comments in line
but the text seems to be circling around a few hot issues:

- How closely class loading should be related to model resolution, i.e.
options 1 and 2 from previously in this thread
- Support for split namespaces/shared packages
- Recursive searching of contributions
- Handling non-existent resources, e.g. by spotting cycles in
imports/exports.

These are of course related but it may be easier if we address them
independently.

Simon




  Tuscany node and domain code are split into three modules each for API,
 SPI
 and Implementation eg. tuscany-node-api, tuscany-node and
 tuscany-node-impl.
 The API module defines a set of classes in org.apache.tuscany.sca.node and
 the SPI module extends this package with more classes. So the package
 org.apache.tuscany.sca.node is split across tuscany-node-api and
 tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
 entries for Tuscany modules, we would get three OSGi bundles corresponding
 to the node modules. And the API and SPI bundles have to specify that they
 use split-packages. It would obviously have been better if API and SPI
 used
 different packages, but the point I am trying to make is that splitting
 packages across modules is not as crazy as it sounds, and split packages
 do
 appear in code written by experienced programmers.


The split packages across the various node/domain modules were not by design.
The code moved around and that was the result. We could go ahead and fix
this. Are there any other explicit examples of split packages that you
happen to know about?


 IMO, supporting overlapping package import/exports is more important with
 SCA contributions than with OSGi bundles since SCA contributions can
 specify
 wildcards in import.java/export.java. eg. If you look at packaging
 tuscany-contribution and tuscany-contribution-impl where
 tuscany-contribution-impl depends on tuscany-contribution, there is no
 clear
 naming convention to separate the two modules using a single import/export
 statement pair. So if I could use wildcards, the simplest option that
 would
 avoid separate import/export statements for each subpackage (as required
 in
 OSGi) would be to export org.apache.tuscany.sca.contribution* from
 tuscany-contribution and import org.apache.tuscany.sca.contribution* in
 tuscany-contribution-impl. The sub-packages themselves are not shared but
 the import/export namespaces are. We need to avoid cycles in these cases.
 Again, there is a way to avoid sharing package spaces, but it is simpler
 to
 share, and there is nothing in the SCA spec which stops you sharing
 packages
 across contributions.


I'm not sure if you are suggesting that we implement a wildcard mechanism or
that we impose some restrictions, for example, to mandate that import.java
should use fully qualified package names (as it says in line 2929 of the
assembly spec). Are wildcards already supported?

The assembly spec seems to recognize that artifacts from the same namespace
may appear in several places (line 2946) but it is suggesting that these
multiple namespace references are included explicitly as distinct import
declarations.



 I dont think the current model resolver code which recursively searches
 exporting contributions for artifacts is correct anyway - even for
 artifacts
 other than classes. IMO, when an exporting contribution is searched for an
 artifact, it should only search the exporting contribution itself, not its
 imports. And that would avoid cycles in classloading. I would still prefer
 not to intertwine classloading and model resolution because that would
 unnecessarily make classloading stack traces which are complex anyway,
 even
 more complex that it needs to be. But at least if it works all the time,
 rather than run into stack overflows, I might not have to look at those
 stack traces


Looking at the assembly spec there is not much discussion of recursive
inclusion. I did find line 3022, which describes the behaviour w.r.t.
indirect dependent contributions, which to me implies that contributions
providing exports should be recursively searched.




 and this will convince me to help fix it now :) Thanks.


 It is not broken now - you have to break it first and then fix it :-).


  --
  Jean-Sebastien
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 


 --
 Thank you...

 Regards,

 Rajini



Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Simon Laws
On Mon, Feb 25, 2008 at 1:12 AM, Venkata Krishnan [EMAIL PROTECTED]
wrote:

 Hi,

  I have been working on modifying the existing bigbank demo to include
  security (things that have been tried and are working in the
  secure-bigbank demo).

  All seemed fine, until I tried the modified bigbank demo from a
  distribution.  One of the things we do now is aggregating the various
  definitions.xml files in META-INF/services, since we now allow various
  modules and contributions to have their own definitions.xml if need be.

  In a distro all of these definitions.xml files are aggregated into a
 single file using the shade transformer.  I end up with a definitions.xml
 that has multiple <sca:definitions> elements but no root.  Also there seem
 to be multiple XML declarations - <?xml version="1.0" encoding="ASCII"?>.
 All of this creates trouble for the XMLStreamReader.  At the moment I am
 thinking of the following:

 1) In the Definitions Document Processor, prepend and append the xml
 with dummy elements so that there is a root element

 2) Either strip all the duplicate XML declarations when doing step (1)
 or go and manually delete them in all the definitions.xml files in our
 modules
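
 A rough sketch of (1) and (2) together, assuming the aggregated file is
 small enough to read into a string first (readFileAsString is a
 hypothetical helper, and error handling is omitted):

     // Strip the duplicate XML declarations and wrap the concatenated
     // content in a dummy root so XMLStreamReader sees a single document
     String aggregated = readFileAsString("META-INF/services/definitions.xml");
     String cleaned = aggregated.replaceAll("<\\?xml[^?]*\\?>", "");
     XMLStreamReader reader = XMLInputFactory.newInstance()
         .createXMLStreamReader(
             new StringReader("<dummyRoot>" + cleaned + "</dummyRoot>"));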

 Though most of it has been tried and works, I feel it's like some 'trick
 code' and could give us trouble with maintainability.  Does anybody have a
 better idea to deal with this?

 Thanks.

 - Venkat


Hi Venkat

Can I just clarify that you are saying that you are having problems because
of the way that the shader plugin is aggregating the definitions.xml files
that now appear in various extension modules, e.g. binding-ws-axis2,
policy-logging etc., and that this is not specifically related to the bigbank
demo or to the way that Tuscany subsequently aggregates the contents it
finds in definitions.xml files?

Does definitions.xml have to appear in META-INF/services? Could we, for
example, further qualify the definitions.xml file by putting it in a
directory that represents the name of the extension module to which it
refers? Or does that make it difficult to pick them up generically?
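
On the generic pick-up: on an unshaded classpath, same-named files can
already be enumerated one jar at a time, which is why this only bites in
the shaded distro; a sketch (error handling omitted):

    // Each jar contributes its own URL here; the shade plugin is what
    // collapses them into one concatenated entry
    Enumeration<URL> urls = getClass().getClassLoader()
            .getResources("META-INF/services/definitions.xml");
    while (urls.hasMoreElements()) {
        URL definitionsFile = urls.nextElement();
        // process each module's definitions.xml separately
    }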

Simon


[TEST] Conversation Lifetime

2008-02-25 Thread Kevin Williams
I would like to add a few iTests for Conversation Lifetime items that
don't seem to have explicit tests. In particular, I am looking at:

  1) The ability to continue a conversation by loading a reference
that had been written to persistent storage
  2) Implicit end of a conversation by a non-business exception
  3) Verify that a client's call to Conversation.end truly ends the conversation
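
For example, item 3 might look something like this hypothetical sketch
(ConversationalService is an assumed test interface, and the SCADomain and
ServiceReference calls are the Tuscany 1.x embedded client API, used here
from memory inside a JUnit test method with an SCADomain field 'domain'):

    ServiceReference<ConversationalService> ref =
            domain.getServiceReference(ConversationalService.class,
                                       "ConversationalServiceComponent");
    ConversationalService service = ref.getService();
    service.setState("some conversation state");
    ref.getConversation().end();
    // end() should terminate the conversation, so a new one starts on
    // the next call and the previously set state must be gone
    assertNull(service.getState());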

Does this sound like a good idea?

Thanks,

--Kevin




Re: [TEST] Conversation Lifetime

2008-02-25 Thread Simon Laws
On Mon, Feb 25, 2008 at 1:08 PM, Kevin Williams [EMAIL PROTECTED]
wrote:

 I would like to add a few iTests for Conversation Lifetime items that
 don't seem to have explicit tests. In particular, I am looking at:

  1) The ability to continue a conversation by loading a reference
 that had been written to persistent storage
  2) Implicit end of a conversation by a non-business exception
  3) Verify that a client's call to Conversation.end truly ends the
 conversation

 Does this sound like a good idea?

 Thanks,

 --Kevin


 Kevin, Sounds like a great idea! Let me know if you need any help.

Simon


[jira] Commented: (TUSCANY-1997) Axis binding does not allow external configuration to increase the number of the maximum connections opened.

2008-02-25 Thread Catalin Boloaja (JIRA)

[ 
https://issues.apache.org/jira/browse/TUSCANY-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12572090#action_12572090
 ] 

Catalin Boloaja commented on TUSCANY-1997:
--

Could you provide a patched jar for the 1.0 version too ?

Thanks,

Catalin Boloaja

 Axis binding does not allow external configuration to increase the number of 
 the maximum connections opened.
 

 Key: TUSCANY-1997
 URL: https://issues.apache.org/jira/browse/TUSCANY-1997
 Project: Tuscany
  Issue Type: Bug
  Components: Java SCA Axis Binding Extension
 Environment: Solaris , Windows , Websphere , Tomcat
Reporter: Catalin Boloaja
Assignee: Jean-Sebastien Delfino
 Fix For: Java-SCA-1.2

 Attachments: tuscany-binding-ws-axis2-1.1-TUSCANY-1997.jar


 In a high-volume situation the default setting for Axis2 is 2 connections per
 host.
 The default protocol being HTTP 1.1, this means that only 2 POST requests
 can be issued at the same time.
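
 For reference, the usual Axis2 1.x workaround (sketched from memory, not
 taken from the attached patch) is to hand Axis2 a commons-httpclient
 instance with higher limits via its ConfigurationContext (configContext
 below is assumed to be in scope):

     // Raise commons-httpclient's 2-connections-per-host default and let
     // Axis2 cache and reuse this client
     MultiThreadedHttpConnectionManager connMgr =
             new MultiThreadedHttpConnectionManager();
     connMgr.getParams().setDefaultMaxConnectionsPerHost(20);
     connMgr.getParams().setMaxTotalConnections(100);
     configContext.setProperty(HTTPConstants.REUSE_HTTP_CLIENT, Boolean.TRUE);
     configContext.setProperty(HTTPConstants.CACHED_HTTP_CLIENT,
                               new HttpClient(connMgr));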




Re: Classloading code in core contribution processing

2008-02-25 Thread Rajini Sivaram
Simon,

A few comments inline.


On 2/25/08, Simon Laws [EMAIL PROTECTED] wrote:

 Hi Rajini

 just back in from vacation and catching up. I've put some comments in line
 but the text seems to be circling around a few hot issues:

 - How closely class loading should be related to model resolution, i.e.
 options 1 and 2 from previously in this thread
 - Support for split namespaces/shared packages
 - Recursive searching of contributions
 - Handling non-existent resources, e.g. by spotting cycles in
 imports/exports.

 These are of course related but it may be easier if we address them
 independently.

 Simon


 
 
   Tuscany node and domain code are split into three modules each for API,
  SPI
  and Implementation eg. tuscany-node-api, tuscany-node and
  tuscany-node-impl.
  The API module defines a set of classes in org.apache.tuscany.sca.node and
  the SPI module extends this package with more classes. So the package
  org.apache.tuscany.sca.node is split across tuscany-node-api and
  tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
  entries for Tuscany modules, we would get three OSGi bundles
 corresponding
  to the node modules. And the API and SPI bundles have to specify that
 they
  use split-packages. It would obviously have been better if API and SPI
  used
  different packages, but the point I am trying to make is that splitting
  packages across modules is not as crazy as it sounds, and split packages
  do
  appear in code written by experienced programmers.


 The split packages across the various node/domain modules were not by
 design.
 The code moved around and that was the result. We could go ahead and fix
 this. Are there any other explicit examples of split packages that you
 happen to know about?


No, as far as I know, in Tuscany modules, the only packages that are split
across multiple modules are o.a.t.s.node and o.a.t.s.domain. I was just
using it as an example to show that there may be existing code which uses
split packages, and that the test case for classloading in the presence of
split packages is not just a fabricated one. For Tuscany, I agree that it
would be easy to fix domain and node to use different package names, but
that may not always be the case with 3rd party code already packaged as jars
which needs to be imported as contributions.

Split packages are not good practice (according to OSGi), but there are
valid use cases for them. The most commonly cited example in OSGi is Java
localization classes.



  IMO, supporting overlapping package import/exports is more important
 with
  SCA contributions than with OSGi bundles since SCA contributions can
  specify
  wildcards in import.java/export.java. eg. If you look at packaging
  tuscany-contribution and tuscany-contribution-impl where
  tuscany-contribution-impl depends on tuscany-contribution, there is no
  clear
  naming convention to separate the two modules using a single
 import/export
  statement pair. So if I could use wildcards, the simplest option that
  would
  avoid separate import/export statements for each subpackage (as required
  in
  OSGi) would be to export org.apache.tuscany.sca.contribution* from
  tuscany-contribution and import org.apache.tuscany.sca.contribution* in
  tuscany-contribution-impl. The sub-packages themselves are not shared
 but
  the import/export namespaces are. We need to avoid cycles in these
 cases.
  Again, there is a way to avoid sharing package spaces, but it is simpler
  to
  share, and there is nothing in the SCA spec which stops you sharing
  packages
  across contributions.
 

 I'm not sure if you are suggesting that we implement a wildcard mechanism
 or that we impose some restrictions, for example, to mandate that
 import.java should use fully qualified package names (as it says in line
 2929 of the assembly spec). Are wildcards already supported?


I thought Sebastien added support for wildcards in import.java since I
remember seeing .* in the tutorials (maybe I am wrong).

The assembly spec seems to recognize that artifacts from the same namespace
 may appear in several places (line 2946) but it is suggesting that these
 multiple namespace references are included explicitly as distinct import
 declarations.


If import statements specify a location, I would expect distinct import
statements, but I am not sure I would expect to find two separate import
declarations when importing a split package where no location is specified.



 
  I dont think the current model resolver code which recursively searches
  exporting contributions for artifacts is correct anyway - even for
  artifacts
  other than classes. IMO, when an exporting contribution is searched for
 an
  artifact, it should only search the exporting contribution itself, not
 its
  imports. And that would avoid cycles in classloading. I would still
 prefer
  not to intertwine classloading and model resolution because that would
  unnecessarily make classloading stack traces which are complex anyway,

Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Venkata Krishnan
Hi Simon,

Thanks for responding.  Please see my comments inline.

- Venkat

On Mon, Feb 25, 2008 at 6:36 PM, Simon Laws [EMAIL PROTECTED]
wrote:

 On Mon, Feb 25, 2008 at 1:12 AM, Venkata Krishnan [EMAIL PROTECTED]
 wrote:

  Hi,
 
   I have been working on modifying the existing bigbank demo to include
   security (things that have been tried and are working in the
   secure-bigbank demo).
  
   All seemed fine, until I tried the modified bigbank demo from a
   distribution.  One of the things we do now is aggregating the various
   definitions.xml files in META-INF/services, since we now allow various
   modules and contributions to have their own definitions.xml if need be.
 
    In a distro all of these definitions.xml files are aggregated into a
   single file using the shade transformer.  I end up with a
   definitions.xml that has multiple <sca:definitions> elements but no
   root.  Also there seem to be multiple XML declarations -
   <?xml version="1.0" encoding="ASCII"?>.  All of this creates trouble for
   the XMLStreamReader.  At the moment I am thinking of the following:
 
   1) In the Definitions Document Processor, prepend and append the xml
   with dummy elements so that there is a root element
  
   2) Either strip all the duplicate XML declarations when doing step (1)
   or go and manually delete them in all the definitions.xml files in our
   modules
  
   Though most of it has been tried and works, I feel it's like some 'trick
   code' and could give us trouble with maintainability.  Does anybody have
   a better idea to deal with this?
 
  Thanks.
 
  - Venkat


 Hi Venkat

 Can I just clarify that you are saying that you are having problems
 because of the way that the shader plugin is aggregating the
 definitions.xml files that now appear in various extension modules,
 e.g. binding-ws-axis2, policy-logging etc., and that this is not
 specifically related to the bigbank demo or to the way that Tuscany
 subsequently aggregates the contents it finds in definitions.xml files?


Yes I am talking about aggregating the definitions.xml files from the
various modules.  The shade plugin is working alright.  This is not specific
to the bigbank demo - more a general problem.  I think I have been caught on
the wrong foot trying to use this META-INF/services aggregation for the
definitions.xml file as well. :(



 Does definitions.xml have to appear in META-INF/services. Could we, for
 example, further qualify the definitions.xml file by putting it in a
 directory that represents the name of the extension module to which it
 refers? Or does that make it difficult to pick them up generically?


I did think of including the extension module where it is defined, but then
we must enlist all the extension modules, or in other words enlist the
locations of these definitions.xml files somewhere.  I am not sure if we can
search for resources using regular expressions - something like
/*/definitions.xml.

Thanks.



 Simon



[jira] Commented: (TUSCANY-1863) Distributed Workpool Sample over Distributed SCA Bindings

2008-02-25 Thread Simon Laws (JIRA)

[ 
https://issues.apache.org/jira/browse/TUSCANY-1863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12572113#action_12572113
 ] 

Simon Laws commented on TUSCANY-1863:
-

I had a brief exchange with Giorgio on the ML about this 
(http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27856.html). He is 
updating the code to run with the latest Tuscany code so will commit once that 
is done. 

 Distributed Workpool Sample over Distributed SCA Bindings
 -

 Key: TUSCANY-1863
 URL: https://issues.apache.org/jira/browse/TUSCANY-1863
 Project: Tuscany
  Issue Type: New Feature
Affects Versions: Java-SCA-1.0
 Environment: Linux  Sun JDK 1.5 / Eclipse
Reporter: Giorgio Zoppi
Assignee: Simon Laws
 Fix For: Java-SCA-Next

 Attachments: workpool-distributed-job.zip, workpool-dynamic.zip, 
 wp.zip


 This sample is a distributed workpool, which runs 4 worker components 
 over 3 nodes. A client gets a Workpool service, which is the workpool's 
 master, and submits to it a stream of integer values. Every worker 
 multiplies that value by 10 and gives the result back to the Workpool 
 master. This is a good example, which stresses the current SCA-over-Axis 
 implementation. I directly attach the sample tar file. 




Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-25 Thread Jean-Sebastien Delfino

 Jean-Sebastien Delfino wrote:

Looks good to me, building on your initial list I added a few more items
and tried to organize them in three categories:

A) Contribution workspace (containing installed contributions):
- Contribution model representing a contribution
- Reader for the contribution model
- Workspace model representing a collection of contributions
- Reader/writer for the workspace model
- HTTP based service for accessing the workspace
- Web browser client for the workspace service
- Command line client for the workspace service
- Validator for contributions in a workspace



ant elder wrote:
Do you have your heart set on calling this a workspace or are you open to
calling it something else like a repository?



I think that they are two different concepts, here are two analogies:

- We in Tuscany assemble our distro out of artifacts from multiple Maven 
repositories.


- An application developer (for example using Eclipse) can connect an 
Eclipse workspace to multiple SVN repositories.


What I'm after here is similar to the above 'distro' or 'Eclipse 
workspace': basically an assembly of contributions, artifacts of various 
kinds, that I can load in a 'workspace', resolve, validate and run, 
different from the repository or repositories that I get the artifacts from.

--
Jean-Sebastien




Re: [TEST] Conversation Lifetime

2008-02-25 Thread Raymond Feng

+1. The more itests, the better :-).

Thanks,
Raymond

- Original Message - 
From: Kevin Williams [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 5:08 AM
Subject: [TEST] Conversation Lifetime



I would like to add a few iTests for Conversation Lifetime items that
don't seem to have explicit tests. In particular, I am looking at:

 1) The ability to continue a conversation by loading a reference
that had been written to persistent storage
 2) Implicit end of a conversation by a non-business exception
 3) Verify that a client's call to Conversation.end truly ends the 
conversation


Does this sound like a good idea?

Thanks,

--Kevin




Re: Contribution classloading pluggability: was: Re: Classloading code in core contribution processing

2008-02-25 Thread Jean-Sebastien Delfino

Raymond Feng wrote:

Hi,

I don't want to interrupt the discussion, but I'm wondering if we should 
define the pluggability of the classloading scheme for SCA contributions.


Typically we have the following information for a ready-to-deploy unit:

* The URL of the deployment composite (deployable composite)
* A collection of URLs for the required contributions to support the SCA 
composite


There are class relationships defined using import.java and 
export.java. In different environments, we may need to have different 
classloaders to deal with Java classes in the collection of 
contributions. Should we define an SPI as follows to provide the 
pluggability?


public interface ClassLoaderProvider {
    // Start the classloader provider for a collection of contributions
    // (deployment unit)
    void start(List<Contribution> contributions);

    // Get the classloader for a given contribution in the deployment unit
    ClassLoader getClassLoader(Contribution contribution);

    // Remove the contributions from the provider
    void stop(List<Contribution> contributions);
}
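
To see how the SPI might be used, a minimal sketch of one possible
provider - one URLClassLoader per contribution with a shared parent. It is
purely illustrative and assumes Contribution exposes its location as a URL;
a real provider would wire delegation according to the
import.java/export.java metadata:

    // imports: java.net.*, java.util.*
    public class DefaultClassLoaderProvider implements ClassLoaderProvider {
        private final Map<Contribution, ClassLoader> loaders =
                new HashMap<Contribution, ClassLoader>();

        public void start(List<Contribution> contributions) {
            for (Contribution c : contributions) {
                // One isolated loader per contribution
                loaders.put(c, new URLClassLoader(
                        new URL[] {c.getLocation()},
                        getClass().getClassLoader()));
            }
        }

        public ClassLoader getClassLoader(Contribution contribution) {
            return loaders.get(contribution);
        }

        public void stop(List<Contribution> contributions) {
            for (Contribution c : contributions) {
                loaders.remove(c);
            }
        }
    }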

Thanks,
Raymond



This is an interesting proposal but I think it's orthogonal to the 
discussion we've been having on contribution import cycles and support 
for partial packages.


Import cycles and partial namespaces are not specific to Java and can 
occur too with WSDL/XSD. I think we should handle them in a Java (and 
ClassLoader) independent way.

--
Jean-Sebastien




Re: PassByValueInterceptor always copying data now?

2008-02-25 Thread Raymond Feng

Please see my comments inline.

Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 8:36 AM
Subject: PassByValueInterceptor always copying data now?


With the latest trunk code, PassByValueInterceptor seems to always copy 
data returned by my Java component to my service (with a feed binding).


Is this on the service binding side? If it's for the reference binding, you 
can simply have the binding invoker implement the PassByValueAware 
interface and return true for the allowsPassByReference() method. The final 
SPI is being discussed on the ML.
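
A sketch of what that could look like on the reference binding invoker
(PassByValueAware and the method name follow this thread's proposal; the
final SPI may differ):

    public class FeedBindingInvoker implements Invoker, PassByValueAware {
        public Message invoke(Message msg) {
            // ... perform the feed request ...
            return msg;
        }

        // Tells the runtime this invoker never holds on to or mutates
        // the data it is handed, so the pass-by-value copy can be skipped
        public boolean allowsPassByReference() {
            return true;
        }
    }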




I don't think it's right (and it's actually breaking me now as the 
JAXBDataBinding fails to copy my objects).


Is it the failure that was reported by Luciano (CloneNotSupportedException)? 
If so, I have a fix coming in.




Or is the feed binding not doing what it should do to tell the databinding 
framework not to copy?


The changes are in progress. At this moment, PBV is still applied as an 
interceptor if none of the invokers returns true for allowsPassByReference(). 
There is an interim way to disable it, by calling 
InvocationChain.setAllowsPassByReference(true).




BTW it's again another example of some databinding magic happening on the 
invocation chain and making things complicated to follow. Where are we 
with the discussion about not having so much databinding magic happen at 
invocation time?


As we discussed before, the control should be a combination of the client, 
invokers and runtime. But we have yet to refactor the code so that the client 
of the invocation chain handles PBV and the PBV interceptor becomes a 
utility class.




How can I, in my binding, disable the automatic databinding processing?
--
Jean-Sebastien




Re: PassByValueInterceptor always copying data now?

2008-02-25 Thread Luciano Resende
This is a side effect of a workaround that was added in the
databinding based on the following discussion thread [1], and I'm also
seeing issues as described in [2]. Maybe Raymond could give us other
choices here.

[1] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg28222.html
[2] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg28241.html

On Mon, Feb 25, 2008 at 8:36 AM, Jean-Sebastien Delfino
[EMAIL PROTECTED] wrote:
 With the latest trunk code, PassByValueInterceptor seems to always copy
  data returned by my Java component to my service (with a feed binding).

  I don't think it's right (and it's actually breaking me now as the
  JAXBDataBinding fails to copy my objects).

  Or is the feed binding not doing what it should do to tell the
  databinding framework not to copy?

  BTW it's again another example of some databinding magic happening on
  the invocation chain and making things complicated to follow. Where are
  we with the discussion about not having so much databinding magic happen
  at invocation time?

  How can I, in my binding, disable the automatic databinding processing?
  --
  Jean-Sebastien






-- 
Luciano Resende
Apache Tuscany Committer
http://people.apache.org/~lresende
http://lresende.blogspot.com/




Re: PassByValueInterceptor always copying data now?

2008-02-25 Thread Jean-Sebastien Delfino

Raymond Feng wrote:

Please see my comments inline.

Thanks,
Raymond

- Original Message - From: Jean-Sebastien Delfino 
[EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 8:36 AM
Subject: PassByValueInterceptor always copying data now?


With the latest trunk code, PassByValueInterceptor seems to always 
copy data returned by my Java component to my service (with a feed 
binding).


Is this on the service binding side? If it's for the reference binding, 
you can simply have the binding invoker implement the PassByValueAware 
interface and return true for the allowsPassByReference() method. The 
final SPI is being discussed on the ML.




I don't think it's right (and it's actually breaking me now as the 
JAXBDataBinding fails to copy my objects).


Is it the failure that was reported by Luciano 
(CloneNotSupportedException)? If so, I have a fix coming in.




Or is the feed binding not doing what it should do to tell the 
databinding framework not to copy?


The changes are in progress. At this moment, the PBV is still an 
interceptor if none of the invokers return true for 
allowsPassByReference(). There is an interim way you can use to disable 
it by calling InvocationChain.setAllowsPassByReference(true).




BTW it's again another example of some databinding magic happening on 
the invocation chain and making things complicated to follow. Where 
are we with the discussion about not having so much databinding magic 
happen at invocation time?


As we discussed before, the control should be a combination of the 
client, invokers and runtime. But we are yet to refactor the code so 
that the client of the invocation chain handles the PBV and PBV 
interceptor becomes a utility class.




How can I, in my binding, disable the automatic databinding processing?
--
Jean-Sebastien



OK, Thanks. I'm short-circuiting JAXBDataBinding.copy() for now in my 
local copy until you resolve the above issues.


Another issue: when JAXBDataBinding creates a new JAXB Context it does 
not pass all the necessary classes, causing JAXB to throw:
javax.xml.bind.JAXBException: class 
org.apache.tuscany.sca.implementation.data.collection.Item nor any of 
its super class is known to this context.
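
In case it helps narrow it down, a sketch of the explicit form that avoids
that exception (the class name is taken from the error above):

    // JAXBContext only knows the classes it was created with; passing
    // Item explicitly avoids the "not known to this context" error
    JAXBContext context = JAXBContext.newInstance(
            org.apache.tuscany.sca.implementation.data.collection.Item.class);
    Marshaller marshaller = context.createMarshaller();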


--
Jean-Sebastien




Re: PassByValueInterceptor always copying data now?

2008-02-25 Thread Raymond Feng

I checked in a few fixes under r630942 and r630935 to get you going now.

For the JAXBContext issue, can you open a JIRA to track it? The current 
introspection-based databinding might have some flaws in some cases as you 
see. We need to have a separate discussion.


Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 9:00 AM
Subject: Re: PassByValueInterceptor always copying data now?



Raymond Feng wrote:

Please see my comments inline.

Thanks,
Raymond

- Original Message - From: Jean-Sebastien Delfino 
[EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 8:36 AM
Subject: PassByValueInterceptor always copying data now?


With the latest trunk code, PassByValueInterceptor seems to always copy 
data returned by my Java component to my service (with a feed binding).


Is this on the service binding side? If it's for the reference binding, 
you can simply have the binding invoker implement the PassByValueAware 
interface and return true for the allowsPassByReference() method. The 
final SPI is being discussed on the ML.




I don't think it's right (and it's actually breaking me now as the 
JAXBDataBinding fails to copy my objects).


Is it the failure that was reported by Luciano 
(CloneNotSupportedException)? If so, I have a fix coming in.




Or is the feed binding not doing what it should do to tell the 
databinding framework not to copy?


The changes are in progress. At this moment, the PBV is still an 
interceptor if none of the invokers return true for 
allowsPassByReference(). There is an interim way you can use to disable 
it by calling InvocationChain.setAllowsPassByReference(true).




BTW it's again another example of some databinding magic happening on 
the invocation chain and making things complicated to follow. Where are 
we with the discussion about not having so much databinding magic happen 
at invocation time?


As we discussed before, the control should be a combination of the 
client, invokers and runtime. But we are yet to refactor the code so that 
the client of the invocation chain handles the PBV and PBV interceptor 
becomes a utility class.




How can I, in my binding, disable the automatic databinding processing?
--
Jean-Sebastien



OK, Thanks. I'm short-circuiting JAXBDataBinding.copy() for now in my 
local copy until you resolve the above issues.


Another issue: when JAXBDataBinding creates a new JAXB Context it does not 
pass all the necessary classes, causing JAXB to throw:
javax.xml.bind.JAXBException: class 
org.apache.tuscany.sca.implementation.data.collection.Item nor any of its 
super class is known to this context.


--
Jean-Sebastien




Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Jean-Sebastien Delfino

Raymond Feng wrote:
Why don't we use META-INF/definitions.xml? The META-INF/services folder is 
for the Java service provider pattern.

...
We don't even need the META-INF/ part, IMO definitions.xml is a 
development artifact like .java, .composite, .wsdl, .xsd and doesn't 
need to be in META-INF.

--
Jean-Sebastien




Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Raymond Feng
I was trying to follow the META-INF/sca-contribution.xml pattern, as that file 
name is defined by the spec and we have at most one definitions.xml in 
the same contribution. The .java, .composite, .wsdl, and .xsd artifacts, by 
contrast, could have different artifact names, such as A.java and B.java.


Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 10:00 AM
Subject: Re: Trouble with aggregating definitions.xml in distro



Raymond Feng wrote:
Why don't we use META-INF/definitions.xml? META-INF/services folder is 
for the java service provider pattern.

...
We don't even need the META-INF/ part, IMO definitions.xml is a 
development artifact like .java, .composite, .wsdl, .xsd and doesn't need 
to be in META-INF.

--
Jean-Sebastien




Re: Contribution classloading pluggability: was: Re: Classloading code in core contribution processing

2008-02-25 Thread Raymond Feng


- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 8:23 AM
Subject: Re: Contribution classloading pluggability: was: Re: Classloading 
code in core contribution processing




Raymond Feng wrote:

Hi,

I don't want to intercept the discussion but I'm wondering if we should 
define the pluggability of the classloading scheme for SCA contributions.


Typically we have the following information for a ready-to-deploy unit:

* The URL of the deployment composite (deployable composite)
* A collection of URLs for the required contributions to support the SCA 
composite


There are class relationships defined using import.java and 
export.java. In different environments, we may need to have different 
classloaders to deal with java classes in the collection of 
contributions. Should we define a SPI as follows to provide the 
pluggability?


public interface ClassLoaderProvider {
    // Start the classloader provider for a collection of contributions
    // (deployment unit)
    void start(List<Contribution> contributions);

    // Get the classloader for a given contribution in the deployment unit
    ClassLoader getClassLoader(Contribution contribution);

    // Remove the contributions from the provider
    void stop(List<Contribution> contributions);
}

Thanks,
Raymond



This is an interesting proposal but I think it's orthogonal to the 
discussion we've been having on contribution import cycles and support for 
partial packages.


My proposal is for the Java classloading strategy over related 
contributions. That's why I started it in a different thread. The general 
discussion on import/export should stay independent of Java.




Import cycles and partial namespaces are not specific to Java and can 
occur too with WSDL/XSD. I think we should handle them in a Java (and 
ClassLoader) independent way.


+1. My understanding is that the contribution service will figure out the 
import/export for various artifacts across contributions in a general way. 
With such metadata in place, the Java classloader provider can be plugged in 
to implement a classloading scheme which honors the import/export 
statements.
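
A sketch of the kind of loader such a provider could return; the exporter
list stands in for lookups against the contribution service's resolved
import/export metadata, and local searches never follow imports in turn,
which keeps the scheme cycle-free:

    // imports: java.net.URL, java.net.URLClassLoader, java.util.List
    public class ContributionClassLoader extends URLClassLoader {
        private final List<ContributionClassLoader> exporterLoaders;

        public ContributionClassLoader(URL[] urls,
                List<ContributionClassLoader> exporterLoaders) {
            super(urls, null); // no implicit parent: imports are explicit
            this.exporterLoaders = exporterLoaders;
        }

        // Search only this contribution's own artifacts
        synchronized Class<?> findLocalClass(String name)
                throws ClassNotFoundException {
            Class<?> c = findLoadedClass(name);
            return c != null ? c : super.findClass(name);
        }

        protected Class<?> findClass(String name)
                throws ClassNotFoundException {
            try {
                return findLocalClass(name);
            } catch (ClassNotFoundException e) {
                // Ask each exporting contribution for its OWN classes
                // only, never following its imports
                for (ContributionClassLoader exporter : exporterLoaders) {
                    try {
                        return exporter.findLocalClass(name);
                    } catch (ClassNotFoundException ignored) {
                        // try the next exporter
                    }
                }
                throw e;
            }
        }
    }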



--
Jean-Sebastien




Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Simon Laws
So, just to be clear again...



 
 
  Hi Venkat
 
   Can I just clarify that you are saying that you are having problems
   because of the way that the shader plugin is aggregating the
   definitions.xml files that now appear in various extension modules,
   e.g. binding-ws-axis2, policy-logging etc., and that this is not
   specifically related to the bigbank demo or to the way that Tuscany
   subsequently aggregates the contents it finds in definitions.xml files?
 

 Yes I am talking about aggregating the definitions.xml files from the
 various modules.  The shade plugin is working alright.


Inasmuch as the shade plugin is identifying that there are multiple
files with the same name, definitions.xml in this case, and is blindly
concatenating them?


  This is not specific
 to the bigbank demo - more a general problem.  I think I have been caught
 on the wrong foot trying to use this META-INF/services aggregation for the
 definitions.xml file as well. :(


I agree that having all of the files called definitions.xml located in the
same logical place on the classpath is causing problems and also that the
choice of META-INF/services doesn't seem to be right. Don't think these two
things are related though.




 
  Does definitions.xml have to appear in META-INF/services. Could we, for
  example, further qualify the definitions.xml file by putting it in a
  directory that represents the name of the extension module to which it
  refers? Or does that make it difficult to pick them up generically?
 

 I did think of including the extension module where it is defined, but
 then we must enlist all the extension modules, or in other words enlist
 the locations of these definitions.xml files somewhere.  I am not sure if
 we can search for resources using regular expressions - something like
 /*/definitions.xml.


For example, could you use something like

policy-logging\src\main\resources\org\apache\tuscany\policy\logging\definitions.xml




 Thanks.


 
  Simon
 



Re: Classloading code in core contribution processing

2008-02-25 Thread Simon Laws
Hi Rajini

I'm covering old ground here but trying to make sure I'm looking at this in
the right way.

A - How closely class loading should be related to model resolution, i.e.
options 1 and 2 from previously in this thread
   A1 (classloader uses model resolver) - standardizes the artifact
resolution process but makes classloading more complex
   A2 (classloader doesn't use model resolver) - simplifies the classloading
process but leads to multiple mechanisms for artifact resolution
B - Support for split namespaces/shared packages
   Supporting this helps when consuming Java artifacts in the case where
there is legacy code, and for some Java patterns such as localization. I
expect this could apply to other types of artifacts also, for example, XML
schemas that use library schemas for common types.
C - Recursive searching of contributions
   It's not clear that we have established that this is a requirement
D - Handling non-existent resources, e.g. by spotting cycles in
imports/exports.
   It would seem to me to be sensible to guard against this generally. It
is a specific requirement if we have C.

It seems to me that we are talking about two orthogonal pieces of work.
Firstly, B, C and D describe the behaviour of artifact resolution in general.
Then, given the artifact resolution framework, how does Java classloading
fit in, i.e. A1 or A2?

Can we agree the general behaviour first and then agree Java classloading
as a special case of this?

Regards

Simon


[DISCUSSION] PassByValue SPI, was: Re: svn commit: r628163

2008-02-25 Thread Raymond Feng

Hi,

I think Simon's proposal should work as follows instead of passing the 
properties to the createInvoker() call.


public interface Invoker {
    InvokerProperties getProperties(); // Contribute properties
}

public class InvokerProperties {
    public void setAllowsPassByReference(boolean allowsPBR) {
        ...
    }

    public boolean allowsPassByReference() {
        ...
    }

    // Add more properties without impacting the Invoker interface
    public AnotherPropertyType getAnotherProperty() {
        ...
    }

    public void setAnotherProperty(AnotherPropertyType anotherProp) {
        ...
    }
}

So the difference is whether having simple properties on the Invoker 
interface or defining a complex property as a collection of properties.


Anyway, since we have different opinions, I'm OK to have a vote to get it 
decided.


So far we have the following options on the table:

1) Add allowsPassByReference() to the Invoker interface directly
2) Add getInvokerProperties() to the Invoker interface directly, and the 
InvokerProperties will encapsulate known properties including 
allowsPassByReference.
3) Add allowsPassByReference() to an optional SPI (either a separate 
interface or a sub-interface of Invoker)
4) Add getInvokerProperties() to an optional SPI (either a separate 
interface or a sub-interface of Invoker)


Please add your options if I miss them before we call a vote.

Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Friday, February 22, 2008 12:14 PM
Subject: Re: svn commit: r628163 - in /incubator/tuscany/java/sca: 
itest/interfaces/src/main/java/org/apache/tuscany/sca/itest/interfaces/ 
itest/interfaces/src/test/java/org/apache/tuscany/sca/itest/interfaces/ 
modules/binding-ejb/src/main/java/org/apache/tus




Simon Nash wrote:

I'm wondering whether it would be good to have a vote about this.
Of the five people who have expressed a view on this so far, four
of them have had a different first preference.  In the interests
of making progress, I think it might be good to put forward a set
of options and vote to choose between them.

One question of clarification inline below.

...

 Jean-Sebastien Delfino wrote:

My preference:

1. (1) above add a method to Invoker, and ask people on our dev and user 
mailing lists if they have any issues with it.


2. (2) above and a plan to merge all these Xyz2 interfaces into the main 
interfaces in the next major release.


 Simon Nash wrote:

By the next major release do you mean the 1.2 release that we recently
started discussing, or something else?


Difficult to say until more discussion shapes 1.2 :) I mean major enough 
to introduce significant SPI changes.



3. Simon's proposal [1], which introduces too much complexity IMHO.


A few more concerns with that proposal:

- It introduces a breaking change as well.

- An extension developer will have to work with two objects instead of 
one. The same technique applied to other extension points (provider, 
artifactprocessor, resolver) will double the number of interfaces.


- Ownership and lifecycle of InvokerProperties are unclear. I don't see 
why an Invoker should return InvokerProperties if it's already passed to 
it. I don't understand when an Invoker should initialize that properties 
object.


- Unless I'm missing something, it will require a breaking change to 
Provider.createInvoker() to pass an InvokerProperties, or a dependency on 
a Tuscany InvokerProperties implementation class.


- If InvokerProperties is an interface then an extension developer can 
implement it, and will be broken again as soon as a new property is added.


- The InvokerProperties pattern does not address the bigger issue of all 
changes to other extension methods (createInvoker, or just the invoke 
method itself).


The fundamental question remains: Can we add methods to an interface 
implemented by an extension? and my opinion is:


- Yes, if the change is straightforward and publicly communicated.

- No, if it requires significant changes to extensions. We then need 
another version of the interface (like Invoker2) and support both versions 
until the two interfaces get merged.


-  It should be possible to introduce in a release SPI cleanup, merging, 
refactoring and evolutions, at a reasonable pace. I am not saying that we 
should do this in the upcoming 1.2 release, but I'd like to see some SPI 
cleanup happen in a reasonable timeframe. They have been close to frozen 
for 9 months now.


I'll be happy to vote on proposals though.
--
Jean-Sebastien




Re: [DISCUSS] altering the Tuscany Charter in relation to SDO Java

2008-02-25 Thread Simon Nash

See inline.

  Simon

kelvin goodson wrote:

There's been a discussion thread going for a while [1] in the Tuscany
community with regards to shifting the Apache home for SDO Java work to a
new project.  This has been going on in parallel to the discussion on the
incubator general list on establishing a new project,  originally aimed to
be tightly scoped to JSR 235  (see [2] to jump into that thread at a
location particularly relevant for this posting).

I'd like to try to move the Tuscany side of the discussion along to some
kind of conclusion.   In view of that aim,  I'd like to suggest that we take
a fresh look at the current state of the wording for the Tuscany charter,
if that's what it's known as, that we arrived at during the graduation vote
[3].

I suggest 

 ...establish a Project Management Committee charged with the creation
 and maintenance of open-source software for distribution at no charge
 to the public, that simplifies the development, deployment and management
 of distributed applications built as compositions of service components.
 These components may be implemented with a range of technologies and
 connected using a variety of communication protocols. This software will
 implement relevant open standards including, but not limited to, the
 SCA standard defined by the OASIS OpenCSA member section, and related
 technologies.

The only edit here is that the current blessed version ends with ...
but not limited to, the SCA and SDO standards defined by the OASIS OpenCSA
member section

I urge you to give your attention to this in the near future please; making
this alteration would seem to be a necessary,  but not sufficient, element
for altering the proposal for the new project.


I would be OK with this change.  It does not in itself imply stopping
SDO development in Tuscany, as SDO is a related technology of SCA.
However, it gives Tuscany more flexibility over whether it develops
SDO itself or makes use of an implementation developed elsewhere.

  Simon


Kelvin.

[1]
http://mail-archives.apache.org/mod_mbox/ws-tuscany-user/200802.mbox/browser
[2]
http://mail-archives.apache.org/mod_mbox/incubator-general/200802.mbox/[EMAIL 
PROTECTED]
[3]
http://mail-archives.apache.org/mod_mbox/incubator-general/200710.mbox/[EMAIL 
PROTECTED]







Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Jean-Sebastien Delfino

Raymond Feng wrote:
I was trying to follow the META-INF/sca-contribution.xml pattern, as that 
file name is defined by the spec and we have at most one 
definitions.xml in the same contribution. The .java, .composite, .wsdl, 
and .xsd artifacts, by contrast, could have different artifact names, such 
as A.java and B.java.




There is a difference though. META-INF/sca-contribution.xml is metadata 
about the contribution, definitions.xml is not.


--
Jean-Sebastien




Re: [DISCUSS] altering the Tuscany Charter in relation to SDO Java

2008-02-25 Thread Raymond Feng
I'm fine with the proposal. In Tuscany SCA Java, we support many 
databindings such as JAXB, SDO, XmlBeans and AXIOM. IMHO, SDO is one of the 
technology choices to represent data in the SOA environment. Removing SDO 
from the sentence will give us more flexibility. I agree that it doesn't 
stop us from implementing SDO and supporting SDO in the Tuscany project.


Thanks,
Raymond

- Original Message - 
From: Simon Nash [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 2:36 PM
Subject: Re: [DISCUSS] altering the Tuscany Charter in relation to SDO 
Java




See inline.

  Simon

kelvin goodson wrote:

There's been a discussion thread going for a while [1] in the Tuscany
community with regards to shifting the Apache home for SDO Java work to a
new project.  This has been going on in parallel to the discussion on the
incubator general list on establishing a new project,  originally aimed 
to

be tightly scoped to JSR 235  (see [2] to jump into that thread at a
location particularly relevant for this posting).

I'd like to try to move the Tuscany side of the discussion along to some
kind of conclusion.   In view of that aim,  I'd like to suggest that we 
take
a fresh look at the current state of the wording for the Tuscany 
charter,
if that's what it's known as, that we arrived at during the graduation 
vote

[3].

I suggest 

 ...establish a Project Management Committee charged with the creation
 and maintenance of open-source software for distribution at no charge
 to the public, that simplifies the development, deployment and 
management

 of distributed applications built as compositions of service components.
 These components may be implemented with a range of technologies and
 connected using a variety of communication protocols. This software will
 implement relevant open standards including, but not limited to, the
 SCA standard defined by the OASIS OpenCSA member section, and related
 technologies.

The only edit here is that the current blessed version ends with ...
but not limited to, the SCA and SDO standards defined by the OASIS 
OpenCSA

member section

I urge you to give your attention to this in the near future please; 
making
this alteration would seem to be a necessary,  but not sufficient, 
element

for altering the proposal for the new project.


I would be OK with this change.  It does not in itself imply stopping
SDO development in Tuscany, as SDO is a related technology of SCA.
However, it gives Tuscany more flexibility over whether it develops
SDO itself or makes use of an implementation developed elsewhere.

  Simon


Kelvin.

[1]
http://mail-archives.apache.org/mod_mbox/ws-tuscany-user/200802.mbox/browser
[2]
http://mail-archives.apache.org/mod_mbox/incubator-general/200802.mbox/[EMAIL 
PROTECTED]
[3]
http://mail-archives.apache.org/mod_mbox/incubator-general/200710.mbox/[EMAIL 
PROTECTED]







Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Jean-Sebastien Delfino

Simon Laws wrote:
...

For example, could you use something like

policy-logging\src\main\resources\org\apache\tuscany\policy\logging\definitions.xml


What you're proposing makes sense to me: let's put definitions.xml files 
in logically named folders.


I'm not sure that we even need a naming convention like 
org/apache/tuscany/module-name. Definitions.xml files live in SCA 
contributions and the Tuscany contribution code should be able to find 
them wherever they are in the contribution (like we find WSDLs, XSDs, 
composite files etc).


--
Jean-Sebastien




Re: [DISCUSSION] PassByValue SPI, was: Re: svn commit: r628163

2008-02-25 Thread Jean-Sebastien Delfino

Raymond Feng wrote:
I think Simon's proposal should work as follows instead of passing the 
properties to the createInvoker() call.


public interface Invoker {
    InvokerProperties getProperties(); // Contribute properties
}

public class InvokerProperties {
    public void setAllowsPassByReference(boolean allowsPBR) {
        ...
    }

    public boolean allowsPassByReference() {
        ...
    }

    // Add more properties without impacting the Invoker interface
    public AnotherPropertyType getAnotherProperty() {
        ...
    }

    public void setAnotherProperty(AnotherPropertyType anotherProp) {
        ...
    }
}



I'm going to repeat what I said earlier in this thread, but now in context 
with your code example: this makes extensions depend on a Tuscany 
implementation class, InvokerProperties.


The getProperties() method will look like {
    return new InvokerProperties();
}

A very slippery slope IMHO.
--
Jean-Sebastien




Re: [DISCUSS] altering the Tuscany Charter in relation to SDO Java

2008-02-25 Thread Jean-Sebastien Delfino

kelvin goodson wrote:

There's been a discussion thread going for a while [1] in the Tuscany
community with regards to shifting the Apache home for SDO Java work to a
new project.  This has been going on in parallel to the discussion on the
incubator general list on establishing a new project,  originally aimed to
be tightly scoped to JSR 235  (see [2] to jump into that thread at a
location particularly relevant for this posting).

I'd like to try to move the Tuscany side of the discussion along to some
kind of conclusion.   In view of that aim,  I'd like to suggest that we take
a fresh look at the current state of the wording for the Tuscany charter,
if that's what it's known as, that we arrived at during the graduation vote
[3].

I suggest 

 ...establish a Project Management Committee charged with the creation
 and maintenance of open-source software for distribution at no charge
 to the public, that simplifies the development, deployment and management
 of distributed applications built as compositions of service components.
 These components may be implemented with a range of technologies and
 connected using a variety of communication protocols. This software will
 implement relevant open standards including, but not limited to, the
 SCA standard defined by the OASIS OpenCSA member section, and related
 technologies.

The only edit here is that the current blessed version ends with ...
but not limited to, the SCA and SDO standards defined by the OASIS OpenCSA
member section

I urge you to give your attention to this in the near future please; making
this alteration would seem to be a necessary,  but not sufficient, element
for altering the proposal for the new project.

Kelvin.

[1]
http://mail-archives.apache.org/mod_mbox/ws-tuscany-user/200802.mbox/browser
[2]
http://mail-archives.apache.org/mod_mbox/incubator-general/200802.mbox/[EMAIL 
PROTECTED]
[3]
http://mail-archives.apache.org/mod_mbox/incubator-general/200710.mbox/[EMAIL 
PROTECTED]



Trying to make sure I understand. Does that mean that the new charter:
- does not require Tuscany to implement SDO anymore
- and still allows Tuscany to implement SDO
- and still allows Tuscany to use SDO or any other related technology?

--
Jean-Sebastien
