Re: OSGi-based Tuscany runtime

2007-11-12 Thread Simon Laws
On Nov 8, 2007 10:56 AM, Rajini Sivaram [EMAIL PROTECTED]
wrote:

 Simon,

 Thank you. Yes, I would really appreciate your help in sorting out the
 poms.


 Thank you...

 Regards,

 Rajini


 On 11/8/07, Simon Laws [EMAIL PROTECTED] wrote:
 
  Hi Rajini
 
   I'd forgotten about project-info-reports. Thanks for reminding me! I think
   the answer here is for us to get our poms right so that all dependencies
   have the correct scope. I'm happy to help out here. It's easy enough to
   work out which are compile-time dependencies, but it's not clear that we
   are marking runtime/test dependencies accurately. I don't think there is
   an automatic way of distinguishing them.
 
  Simon
 
  On 11/8/07, Rajini Sivaram [EMAIL PROTECTED] wrote:
  
   Simon,
  
   maven-bundle-plugin can be used to generate manifest files for the jar
   files, but the recommended practice is to explicitly specify the
  exported
   packages rather than export everything from the jar. I tried to use
 this
   to
   generate manifest files for all the third party jars separately, but I
   couldn't get these jars to install and resolve under Felix. So at the
   moment, there is a single large third party jar with hardcoded
   export-packages. Once the bundles are finalized, I will try and use
   maven-bundle-plugin to generate as much of the manifest as possible.
  
   Most of the 3rd party jars do not have OSGi manifest headers ( a few
  like
   SDO do). I will try and use existing headers wherever they are
 available
   (again, I will try to do this after the bundles are finalized).
  
   I had a look at the dependency graph generated by mvn
   project-info-reports:dependencies, and the dependency tree format
 looks
   much more usable to generate a full visual graph of the dependencies,
   compared to a flat classpath. My only concern is that many of the test
   dependencies in the modules are not marked with scope test and would
   probably result in unnecessary dependencies (and I am not sure which
   dependencies I can safely remove).
  
   Thank you...
  
   Regards,
  
   Rajini
  
   On 11/7/07, Simon Laws [EMAIL PROTECTED] wrote:
   
On 11/7/07, Rajini Sivaram [EMAIL PROTECTED] wrote:

 Hello,

 https://issues.apache.org/jira/browse/TUSCANY-1897 creates a set
 of
 bundles
 to enable Tuscany to be run inside an OSGi runtime. At the moment,
   there
 are
 five bundles:

1. org.apache.tuscany.sca.api.jar  18,701
2. org.apache.tuscany.spi.jar   430,563
3. org.apache.tuscany.runtime.jar538,660
4. org.apache.tuscany.extensions.jar 1,374,045
5. org.apache.tuscany.depends.jar   57,872,558

 I would like to split the 3rd party bundle first and then possibly
  the
 Tuscany extensions bundle. Ideally I would like to have bundles
  which
 match
 the jar files provided in distribution so that OSGi manifest
  headers
can
 be added to the jars in distribution enabling a binary Tuscany
 distribution to be run under OSGi.

 I would like to satisfy as many of Sebastien's use cases (
 http://marc.info/?l=tuscany-dev&m=119326781123561&w=2) as
 possible.
   But
I
 am
 not sure what the granularity of the bundles should be if we want
 to
have
 the same set of jars for both an OSGi and non-OSGi distribution.
  More
fine
 grained jars provide better versioning under OSGi, but requires
 the
 maintenance of more package dependencies in the manifest files.
  Would
   it
 be
 better to group related 3rd party jars together (eg. all Axis2
  related
 jars
 into one bundle) where each jar belongs to one and only one
 bundle?

 Any thoughts on what the Tuscany distribution should look like
  (should
it
 continue to be the current list of jars, or should related jars be
grouped
 together), and on the granularity required for versioning when
  running
 Tuscany under OSGi are appreciated.


 Ant,

 Would it be possible for you to provide a list of third party jars
   used
by
 each extension? Since maven dependencies in the extension/pom.xml
include
 the dependencies for testing (sometimes without a scope), I am not
   sure
if
 I
 could use a dependency list generated by maven.


 Thank you...

 Regards,

 Rajini




 On 10/25/07, ant elder [EMAIL PROTECTED] wrote:
 
  On 10/25/07, Rajini Sivaram [EMAIL PROTECTED]
 wrote:
 
  snip
 
  This does imply splitting both Tuscany extension bundle
   and the big 3rd party bundle, into smaller chunks. Because of
  its
 size,
  I
   am
   more inclined to split the 3rd party bundle into smaller
 bundles
first
   (though I have no idea where to start with this huge big list
 of
   jar
   files).
 
 
  I can help with that, 

Re: OSGi-based Tuscany runtime

2007-11-12 Thread ant elder
On Nov 12, 2007 11:42 AM, Simon Laws [EMAIL PROTECTED] wrote:

 On Nov 8, 2007 10:56 AM, Rajini Sivaram [EMAIL PROTECTED]
 wrote:

  Simon,
 
  Thank you. Yes, I would really appreciate your help in sorting out the
  poms.
 
 
  Thank you...
 
  Regards,
 
  Rajini
 
 
  On 11/8/07, Simon Laws [EMAIL PROTECTED] wrote:
  
   Hi Rajini
  
   I'd forgotten about project-info-reports. Thanks for reminding me!. I
   think
   the answer here is for us to get our poms right so that all
 dependencies
   have the correct scope. I'm happy to help out here. It's easy enough
 to
   work
   out which are compile time  dependencies but it's note clear that we
 are
   marking runtime/test dependencies accurately. I don't think there is
 an
   automatic way of distinguishing.
  
   Simon
  
   On 11/8/07, Rajini Sivaram [EMAIL PROTECTED] wrote:
   
Simon,
   
maven-bundle-plugin can be used to generate manifest files for the
 jar
files, but the recommended practice is to explicitly specify the
   exported
packages rather than export everything from the jar. I tried to use
  this
to
generate manifest files for all the third party jars separately, but
 I
couldn't get these jars to install and resolve under Felix. So at
 the
moment, there is a single large third party jar with hardcoded
export-packages. Once the bundles are finalized, I will try and use
maven-bundle-plugin to generate as much of the manifest as possible.
   
Most of the 3rd party jars do not have OSGi manifest headers ( a few
   like
SDO do). I will try and use existing headers wherever they are
  available
(again, I will try to do this after the bundles are finalized).
   
I had a look at the dependency graph generated by mvn
project-info-reports:dependencies, and the dependency tree format
  looks
much more usable to generate a full visual graph of the
 dependencies,
compared to a flat classpath. My only concern is that many of the
 test
dependencies in the modules are not marked with scope test and would
probably result in unnecessary dependencies (and I am not sure which
dependencies I can safely remove).
   
Thank you...
   
Regards,
   
Rajini
   
On 11/7/07, Simon Laws [EMAIL PROTECTED] wrote:

 On 11/7/07, Rajini Sivaram [EMAIL PROTECTED] wrote:
 
  Hello,
 
  https://issues.apache.org/jira/browse/TUSCANY-1897 creates a set
  of
  bundles
  to enable Tuscany to be run inside an OSGi runtime. At the
 moment,
there
  are
  five bundles:
 
 1. org.apache.tuscany.sca.api.jar  18,701
 2. org.apache.tuscany.spi.jar   430,563
 3. org.apache.tuscany.runtime.jar538,660
 4. org.apache.tuscany.extensions.jar 1,374,045
 5. org.apache.tuscany.depends.jar   57,872,558
 
  I would like to split the 3rd party bundle first and then
 possibly
   the
  Tuscany extensions bundle. Ideally I would like to have bundles
   which
  match
  the jar files provided in distribution so that OSGi manifest
   headers
 can
  be added to the jars in distribution enabling a binary Tuscany
  distribution to be run under OSGi.
 
  I would like to satisfy as many of  Sebastien's use cases (
  http://marc.info/?l=tuscany-devm=119326781123561w=2) as
  possible.
But
 I
  am
  not sure what the granularity of the bundles should be if we
 want
  to
 have
  the same set of jars for both an OSGi and non-OSGi distribution.
   More
 fine
  grained jars provide better versioning under OSGi, but requires
  the
  maintenance of more package dependencies in the manifest files.
   Would
it
  be
  better to group related 3rd party jars together (eg. all Axis2
   related
  jars
  into one bundle) where each jar belongs to one and only one
  bundle?
 
  Any thoughts on what the Tuscany distribution should look like
   (should
 it
  continue to be the current list of jars, or should related jars
 be
 grouped
  together), and on the granularity required for versioning when
   running
  Tuscany under OSGi are appreciated.
 
 
  Ant,
 
  Would it be possible for you to provide a list of third party
 jars
used
 by
  each extension? Since maven dependencies in the
 extension/pom.xml
 include
  the dependencies for testing (sometimes without a scope), I am
 not
sure
 if
  I
  could use a dependency list generated by maven.
 
 
  Thank you...
 
  Regards,
 
  Rajini
 
 
 
 
  On 10/25/07, ant elder [EMAIL PROTECTED] wrote:
  
   On 10/25/07, Rajini Sivaram [EMAIL PROTECTED]
  wrote:
  
   snip
  
   This does imply splitting both Tuscany extension bundle
and the big 3rd party bundle, into smaller chunks. Because
 of
   its
  

Re: OSGi-based Tuscany runtime

2007-11-12 Thread Simon Laws
On Nov 12, 2007 11:58 AM, ant elder [EMAIL PROTECTED] wrote:

 On Nov 12, 2007 11:42 AM, Simon Laws [EMAIL PROTECTED] wrote:

  On Nov 8, 2007 10:56 AM, Rajini Sivaram [EMAIL PROTECTED]
  wrote:
 
   Simon,
  
   Thank you. Yes, I would really appreciate your help in sorting out the
   poms.
  
  
   Thank you...
  
   Regards,
  
   Rajini
  
  
   On 11/8/07, Simon Laws [EMAIL PROTECTED] wrote:
   
Hi Rajini
   
I'd forgotten about project-info-reports. Thanks for reminding me!.
 I
think
the answer here is for us to get our poms right so that all
  dependencies
have the correct scope. I'm happy to help out here. It's easy enough
  to
work
out which are compile time  dependencies but it's note clear that we
  are
marking runtime/test dependencies accurately. I don't think there is
  an
automatic way of distinguishing.
   
Simon
   
On 11/8/07, Rajini Sivaram [EMAIL PROTECTED] wrote:

 Simon,

 maven-bundle-plugin can be used to generate manifest files for the
  jar
 files, but the recommended practice is to explicitly specify the
exported
 packages rather than export everything from the jar. I tried to
 use
   this
 to
 generate manifest files for all the third party jars separately,
 but
  I
 couldn't get these jars to install and resolve under Felix. So at
  the
 moment, there is a single large third party jar with hardcoded
 export-packages. Once the bundles are finalized, I will try and
 use
 maven-bundle-plugin to generate as much of the manifest as
 possible.

 Most of the 3rd party jars do not have OSGi manifest headers ( a
 few
like
 SDO do). I will try and use existing headers wherever they are
   available
 (again, I will try to do this after the bundles are finalized).

 I had a look at the dependency graph generated by mvn
 project-info-reports:dependencies, and the dependency tree format
   looks
 much more usable to generate a full visual graph of the
  dependencies,
 compared to a flat classpath. My only concern is that many of the
  test
 dependencies in the modules are not marked with scope test and
 would
 probably result in unnecessary dependencies (and I am not sure
 which
 dependencies I can safely remove).

 Thank you...

 Regards,

 Rajini

 On 11/7/07, Simon Laws [EMAIL PROTECTED] wrote:
 
  On 11/7/07, Rajini Sivaram [EMAIL PROTECTED] wrote:
  
   Hello,
  
   https://issues.apache.org/jira/browse/TUSCANY-1897 creates a
 set
   of
   bundles
   to enable Tuscany to be run inside an OSGi runtime. At the
  moment,
 there
   are
   five bundles:
  
  1. org.apache.tuscany.sca.api.jar  18,701
  2. org.apache.tuscany.spi.jar   430,563
  3. org.apache.tuscany.runtime.jar538,660
  4. org.apache.tuscany.extensions.jar 1,374,045
  5. org.apache.tuscany.depends.jar   57,872,558
  
   I would like to split the 3rd party bundle first and then
  possibly
the
   Tuscany extensions bundle. Ideally I would like to have
 bundles
which
   match
   the jar files provided in distribution so that OSGi manifest
headers
  can
   be added to the jars in distribution enabling a binary
 Tuscany
   distribution to be run under OSGi.
  
   I would like to satisfy as many of  Sebastien's use cases (
   http://marc.info/?l=tuscany-devm=119326781123561w=2) as
   possible.
 But
  I
   am
   not sure what the granularity of the bundles should be if we
  want
   to
  have
   the same set of jars for both an OSGi and non-OSGi
 distribution.
More
  fine
   grained jars provide better versioning under OSGi, but
 requires
   the
   maintenance of more package dependencies in the manifest
 files.
Would
 it
   be
   better to group related 3rd party jars together (eg. all Axis2
related
   jars
   into one bundle) where each jar belongs to one and only one
   bundle?
  
   Any thoughts on what the Tuscany distribution should look like
(should
  it
   continue to be the current list of jars, or should related
 jars
  be
  grouped
   together), and on the granularity required for versioning when
running
   Tuscany under OSGi are appreciated.
  
  
   Ant,
  
   Would it be possible for you to provide a list of third party
  jars
 used
  by
   each extension? Since maven dependencies in the
  extension/pom.xml
  include
   the dependencies for testing (sometimes without a scope), I am
  not
 sure
  if
   I
   could use a dependency list generated by maven.
  
  
   Thank you...
  
   Regards,
  
   Rajini
  
  
  
  
   On 10/25/07, ant elder [EMAIL 

Re: OSGi-based Tuscany runtime

2007-11-12 Thread ant elder
On Nov 12, 2007 12:15 PM, Simon Laws [EMAIL PROTECTED] wrote:

 On Nov 12, 2007 11:58 AM, ant elder [EMAIL PROTECTED] wrote:

  On Nov 12, 2007 11:42 AM, Simon Laws [EMAIL PROTECTED] wrote:
 
   On Nov 8, 2007 10:56 AM, Rajini Sivaram [EMAIL PROTECTED]
   wrote:
  
Simon,
   
Thank you. Yes, I would really appreciate your help in sorting out
 the
poms.
   
   
Thank you...
   
Regards,
   
Rajini
   
   
On 11/8/07, Simon Laws [EMAIL PROTECTED] wrote:

 Hi Rajini

 I'd forgotten about project-info-reports. Thanks for reminding
 me!.
  I
 think
 the answer here is for us to get our poms right so that all
   dependencies
 have the correct scope. I'm happy to help out here. It's easy
 enough
   to
 work
 out which are compile time  dependencies but it's note clear that
 we
   are
 marking runtime/test dependencies accurately. I don't think there
 is
   an
 automatic way of distinguishing.

 Simon

 On 11/8/07, Rajini Sivaram [EMAIL PROTECTED] wrote:
 
  Simon,
 
  maven-bundle-plugin can be used to generate manifest files for
 the
   jar
  files, but the recommended practice is to explicitly specify the
 exported
  packages rather than export everything from the jar. I tried to
  use
this
  to
  generate manifest files for all the third party jars separately,
  but
   I
  couldn't get these jars to install and resolve under Felix. So
 at
   the
  moment, there is a single large third party jar with hardcoded
  export-packages. Once the bundles are finalized, I will try and
  use
  maven-bundle-plugin to generate as much of the manifest as
  possible.
 
  Most of the 3rd party jars do not have OSGi manifest headers ( a
  few
 like
  SDO do). I will try and use existing headers wherever they are
available
  (again, I will try to do this after the bundles are finalized).
 
  I had a look at the dependency graph generated by mvn
  project-info-reports:dependencies, and the dependency tree
 format
looks
  much more usable to generate a full visual graph of the
   dependencies,
  compared to a flat classpath. My only concern is that many of
 the
   test
  dependencies in the modules are not marked with scope test and
  would
  probably result in unnecessary dependencies (and I am not sure
  which
  dependencies I can safely remove).
 
  Thank you...
 
  Regards,
 
  Rajini
 
  On 11/7/07, Simon Laws [EMAIL PROTECTED] wrote:
  
   On 11/7/07, Rajini Sivaram [EMAIL PROTECTED]
 wrote:
   
Hello,
   
https://issues.apache.org/jira/browse/TUSCANY-1897 creates a
  set
of
bundles
to enable Tuscany to be run inside an OSGi runtime. At the
   moment,
  there
are
five bundles:
   
   1. org.apache.tuscany.sca.api.jar  18,701
   2. org.apache.tuscany.spi.jar   430,563
   3. org.apache.tuscany.runtime.jar538,660
   4. org.apache.tuscany.extensions.jar 1,374,045
   5. org.apache.tuscany.depends.jar   57,872,558
   
I would like to split the 3rd party bundle first and then
   possibly
 the
Tuscany extensions bundle. Ideally I would like to have
  bundles
 which
match
the jar files provided in distribution so that OSGi
 manifest
 headers
   can
be added to the jars in distribution enabling a binary
  Tuscany
distribution to be run under OSGi.
   
I would like to satisfy as many of  Sebastien's use cases (
http://marc.info/?l=tuscany-devm=119326781123561w=2) as
possible.
  But
   I
am
not sure what the granularity of the bundles should be if we
   want
to
   have
the same set of jars for both an OSGi and non-OSGi
  distribution.
 More
   fine
grained jars provide better versioning under OSGi, but
  requires
the
maintenance of more package dependencies in the manifest
  files.
 Would
  it
be
better to group related 3rd party jars together (eg. all
 Axis2
 related
jars
into one bundle) where each jar belongs to one and only one
bundle?
   
Any thoughts on what the Tuscany distribution should look
 like
 (should
   it
continue to be the current list of jars, or should related
  jars
   be
   grouped
together), and on the granularity required for versioning
 when
 running
Tuscany under OSGi are appreciated.
   
   
Ant,
   
Would it be possible for you to provide a list of third
 party
   jars
  used
   by
each extension? Since maven dependencies in the
   extension/pom.xml
   include
the dependencies for testing (sometimes without 

Re: OSGi-based Tuscany runtime

2007-11-12 Thread Rajini Sivaram
I was under the impression that maven generated a minimal list of dependent
jar versions using copy-dependencies, and I assumed that Tuscany used this
in some form to generate the minimal set (with the highest versions). I was
trying to use maven-bundle-plugin to generate bundles out of the
dependencies, and ended up with 90 more jar files than those copied by
copy-dependencies, and most of these seem to be due to multiple versions of
the files. A manual approach to sorting out third party versions will add a
lot of work to bundle-ized Tuscany, for transitive dependencies within third
party bundles, since package versions of export/import package
statements will have to be manually edited in the manifest files.
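
To make the manual-editing point concrete, versioned OSGi headers of the kind
that would have to be maintained by hand look roughly like the excerpt below
(the bundle name, packages and version numbers here are only hypothetical
examples, not the actual contents of the Tuscany bundles):

Bundle-SymbolicName: org.apache.tuscany.depends.axiom
Bundle-Version: 1.0.0
Export-Package: org.apache.axiom.om;version="1.2.5",
 org.apache.axiom.soap;version="1.2.5"
Import-Package: javax.xml.stream,
 org.apache.commons.logging;version="[1.1,2.0)"

Every transitive dependency pulled into a third party bundle potentially adds
entries like these, which is why generating them with maven-bundle-plugin
rather than editing them by hand is attractive.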


Thank you...

Regards,

Rajini


On 11/12/07, ant elder [EMAIL PROTECTED] wrote:

 On Nov 12, 2007 12:15 PM, Simon Laws [EMAIL PROTECTED] wrote:

  On Nov 12, 2007 11:58 AM, ant elder [EMAIL PROTECTED] wrote:
 
   On Nov 12, 2007 11:42 AM, Simon Laws [EMAIL PROTECTED]
 wrote:
  
On Nov 8, 2007 10:56 AM, Rajini Sivaram 
 [EMAIL PROTECTED]
wrote:
   
 Simon,

 Thank you. Yes, I would really appreciate your help in sorting out
  the
 poms.


 Thank you...

 Regards,

 Rajini


 On 11/8/07, Simon Laws [EMAIL PROTECTED] wrote:
 
  Hi Rajini
 
  I'd forgotten about project-info-reports. Thanks for reminding
  me!.
   I
  think
  the answer here is for us to get our poms right so that all
dependencies
  have the correct scope. I'm happy to help out here. It's easy
  enough
to
  work
  out which are compile time  dependencies but it's note clear
 that
  we
are
  marking runtime/test dependencies accurately. I don't think
 there
  is
an
  automatic way of distinguishing.
 
  Simon
 
  On 11/8/07, Rajini Sivaram [EMAIL PROTECTED] wrote:
  
   Simon,
  
   maven-bundle-plugin can be used to generate manifest files for
  the
jar
   files, but the recommended practice is to explicitly specify
 the
  exported
   packages rather than export everything from the jar. I tried
 to
   use
 this
   to
   generate manifest files for all the third party jars
 separately,
   but
I
   couldn't get these jars to install and resolve under Felix. So
  at
the
   moment, there is a single large third party jar with hardcoded
   export-packages. Once the bundles are finalized, I will try
 and
   use
   maven-bundle-plugin to generate as much of the manifest as
   possible.
  
   Most of the 3rd party jars do not have OSGi manifest headers (
 a
   few
  like
   SDO do). I will try and use existing headers wherever they are
 available
   (again, I will try to do this after the bundles are
 finalized).
  
   I had a look at the dependency graph generated by mvn
   project-info-reports:dependencies, and the dependency tree
  format
 looks
   much more usable to generate a full visual graph of the
dependencies,
   compared to a flat classpath. My only concern is that many of
  the
test
   dependencies in the modules are not marked with scope test and
   would
   probably result in unnecessary dependencies (and I am not sure
   which
   dependencies I can safely remove).
  
   Thank you...
  
   Regards,
  
   Rajini
  
   On 11/7/07, Simon Laws [EMAIL PROTECTED] wrote:
   
On 11/7/07, Rajini Sivaram [EMAIL PROTECTED]
  wrote:

 Hello,

 https://issues.apache.org/jira/browse/TUSCANY-1897 creates
 a
   set
 of
 bundles
 to enable Tuscany to be run inside an OSGi runtime. At the
moment,
   there
 are
 five bundles:

1. org.apache.tuscany.sca.api.jar  18,701
2. org.apache.tuscany.spi.jar   430,563
3. org.apache.tuscany.runtime.jar538,660
4. org.apache.tuscany.extensions.jar 1,374,045
5. org.apache.tuscany.depends.jar   57,872,558

 I would like to split the 3rd party bundle first and then
possibly
  the
 Tuscany extensions bundle. Ideally I would like to have
   bundles
  which
 match
 the jar files provided in distribution so that OSGi
  manifest
  headers
can
 be added to the jars in distribution enabling a binary
   Tuscany
 distribution to be run under OSGi.

 I would like to satisfy as many of  Sebastien's use cases
 (
 http://marc.info/?l=tuscany-devm=119326781123561w=2) as
 possible.
   But
I
 am
 not sure what the granularity of the bundles should be if
 we
want
 to
have
 the same set of jars for both an OSGi and non-OSGi
   distribution.
  More
fine
   

Re: OSGi-based Tuscany runtime

2007-11-12 Thread Simon Laws
Hi Rajini

By "due to multiple versions of the files" do you mean multiple different
version numbers? Is this reflected accurately in the report that I posted
previously in this thread? If so, maybe that is a way into this, i.e. let's
try and rationalize the multiple-version issue that Ant points out. It seems,
from his previous reply, that he has been doing this manually to date. It may
be that we can't do anything about some of the transitive dependencies, but
we may be able to upgrade and get rid of some of them.
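
As a sketch of the two pom-level fixes being discussed (the artifact names and
versions below are only examples, not a recommendation for specific modules):
test-only dependencies get an explicit test scope, and a single version of a
problematic transitive dependency can be pinned through dependencyManagement:

    <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.2</version>
        <scope>test</scope>
      </dependency>
    </dependencies>

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>commons-logging</groupId>
          <artifactId>commons-logging</artifactId>
          <version>1.1</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

With the scopes right, reports like project-info-reports:dependencies and the
copy-dependencies output should converge on the same minimal runtime set.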

Regards

Simon


On Nov 12, 2007 1:12 PM, Rajini Sivaram [EMAIL PROTECTED]
wrote:

 I was under the impression that maven generated a minimal list of
 dependent
 jar versions using copy-dependencies, and I assumed that Tuscany used this
 in some form to generate the minimal set (with the highest versions). I
 was
 trying to use maven-bundle-plugin to generate bundles out of the
 dependencies, and ended up with 90 more jar files than those copied by
 copy-dependencies, and most of these seem to be due to multiple versions
 of
 the files. A manual approach to sorting out third party versions will add
 a
 lot of work to bundle-ized Tuscany, for transitive dependencies within
 third
 party bundles, since package versions of export/import package
 statements will have to be manually edited in the manifest files.


 Thank you...

 Regards,

 Rajini


 On 11/12/07, ant elder [EMAIL PROTECTED] wrote:
 
  On Nov 12, 2007 12:15 PM, Simon Laws [EMAIL PROTECTED] wrote:
 
   On Nov 12, 2007 11:58 AM, ant elder [EMAIL PROTECTED] wrote:
  
On Nov 12, 2007 11:42 AM, Simon Laws [EMAIL PROTECTED]
  wrote:
   
 On Nov 8, 2007 10:56 AM, Rajini Sivaram 
  [EMAIL PROTECTED]
 wrote:

  Simon,
 
  Thank you. Yes, I would really appreciate your help in sorting
 out
   the
  poms.
 
 
  Thank you...
 
  Regards,
 
  Rajini
 
 
  On 11/8/07, Simon Laws [EMAIL PROTECTED] wrote:
  
   Hi Rajini
  
   I'd forgotten about project-info-reports. Thanks for reminding
   me!.
I
   think
   the answer here is for us to get our poms right so that all
 dependencies
   have the correct scope. I'm happy to help out here. It's easy
   enough
 to
   work
   out which are compile time  dependencies but it's note clear
  that
   we
 are
   marking runtime/test dependencies accurately. I don't think
  there
   is
 an
   automatic way of distinguishing.
  
   Simon
  
   On 11/8/07, Rajini Sivaram [EMAIL PROTECTED]
 wrote:
   
Simon,
   
maven-bundle-plugin can be used to generate manifest files
 for
   the
 jar
files, but the recommended practice is to explicitly specify
  the
   exported
packages rather than export everything from the jar. I tried
  to
use
  this
to
generate manifest files for all the third party jars
  separately,
but
 I
couldn't get these jars to install and resolve under Felix.
 So
   at
 the
moment, there is a single large third party jar with
 hardcoded
export-packages. Once the bundles are finalized, I will try
  and
use
maven-bundle-plugin to generate as much of the manifest as
possible.
   
Most of the 3rd party jars do not have OSGi manifest headers
 (
  a
few
   like
SDO do). I will try and use existing headers wherever they
 are
  available
(again, I will try to do this after the bundles are
  finalized).
   
I had a look at the dependency graph generated by mvn
project-info-reports:dependencies, and the dependency tree
   format
  looks
much more usable to generate a full visual graph of the
 dependencies,
compared to a flat classpath. My only concern is that many
 of
   the
 test
dependencies in the modules are not marked with scope test
 and
would
probably result in unnecessary dependencies (and I am not
 sure
which
dependencies I can safely remove).
   
Thank you...
   
Regards,
   
Rajini
   
On 11/7/07, Simon Laws [EMAIL PROTECTED] wrote:

 On 11/7/07, Rajini Sivaram [EMAIL PROTECTED]
   wrote:
 
  Hello,
 
   https://issues.apache.org/jira/browse/TUSCANY-1897 creates a set of bundles
  to enable Tuscany to be run inside an OSGi runtime. At
 the
 moment,
there
  are
  five bundles:
 
 1. org.apache.tuscany.sca.api.jar  18,701
 2. org.apache.tuscany.spi.jar   430,563
 3. org.apache.tuscany.runtime.jar538,660
 4. org.apache.tuscany.extensions.jar 1,374,045
 5. org.apache.tuscany.depends.jar   57,872,558
 
  I would like to split the 3rd party bundle first and
 

[Policy Fwk Specs Related] Operation child element in sca:Implementation

2007-11-12 Thread Venkata Krishnan
Hi,

The PolicyFwk specs have the following:

984 <component name="xs:NCName">
985   <implementation.* policySets="listOfQNames"
986        requires="list of intent xs:QNames">
987     …
988     <operation name="xs:string" service="xs:string"?
989          policySets="listOfQNames"?
990          requires="listOfQNames"?/>*
991     …
992   </implementation>
993 …
994 </component>

The xsd for 'implementation' (sca:Implementation) in the Assembly Model
specs does not seem to have the element 'operation' defined as a child
element. Is this something to fix in the specs, or am I missing something
here?

Thanks

- Venkat


At ApacheCon this week

2007-11-12 Thread Jean-Sebastien Delfino

Anyone going to ApacheCon?

I'm there at the Hackathon today and will be there all week.

Drop me an email if you're going to be there and want to meet!

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



DataBinding issues - ClassCastException

2007-11-12 Thread Jean-Sebastien Delfino
I'm trying to add some support for business objects to the Tutorial 
(instead of just flowing the catalog and cart items as strings).


My business objects are simple JavaBeans with an empty constructor. I 
need to flow them as Java objects through local service calls, XML (in 
Atom payloads) and JSON.


It should have been a simple exercise, but I've been running into a 
number of issues with the DataBinding framework. Here they are:


1. Some of the Bean2* and *2Bean transformers were missing from 
META-INF/services; I added them.


2. The Databinding framework was not finding the transformers in XML - 
registrations. I changed all instances of java.lang.String to xml.string 
in the registrations of the *2String and String2* transformer to fix that.


3. Next I ran into a ClassCastException in XML2JavaBeanTransformer at
public Object transform(T source, TransformationContext context) {
- -   XMLType xmlType = (XMLType) 
context.getSourceDataType().getLogical();

as the logical type was Class instead of XMLType

4. I had to add implements Serializable to my business object class as 
it looks like we are using Java serialization to enforce pass by value 
in local service calls. I don't think it's right as we are not using 
Java serialization to pass the same business object through a remote 
call. We should change to use the same XML transformation to enforce 
pass by value with local calls as well, and not require the business 
objects to be Serializable.


5. After changes (1), (2), (3) and (4) I am now running into a build issue:
testTransform1(org.apache.tuscany.sca.databinding.impl.MediatorImplTestCase)  
Time elapsed: 0.02 sec   ERROR!
java.lang.ClassCastException: 
org.apache.tuscany.sca.databinding.DefaultTransformerExtensionPoint$LazyPullTransformer 
cannot be cast to org.apache.tuscany.sca.databinding.DataPipeTransformer
   at 
org.apache.tuscany.sca.databinding.impl.MediatorImpl.mediate(MediatorImpl.java:75)
   at 
org.apache.tuscany.sca.databinding.impl.MediatorImplTestCase.testTransform1(MediatorImplTestCase.java:104)

   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

   at java.lang.reflect.Method.invoke(Method.java:597)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:232)
   at junit.framework.TestSuite.run(TestSuite.java:227)
   at 
org.junit.internal.runners.OldTestClassRunner.run(OldTestClassRunner.java:35)
   at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
   at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138)
   at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:125)

   at org.apache.maven.surefire.Surefire.run(Surefire.java:132)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:299)
   at 
org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:837)


I'm getting to the end of the rope here... does anyone know what can 
cause error (5)?
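
Coming back to (4) for a moment: the serialization round trip that forces the
beans to implement Serializable is essentially the copy below. This is only a
simplified sketch for illustration, not the actual Tuscany databinding code:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Simplified illustration of pass-by-value enforced with Java serialization;
// it only works when the bean implements Serializable, which is the
// requirement questioned in (4) above.
public class SerializationCopy {
    public static Object copy(Object bean) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bos);
        out.writeObject(bean);
        out.flush();
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return in.readObject();
    }
}

An XML-based copy, as suggested above, would avoid the Serializable
requirement and use the same transformation path as remote calls.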


Thanks

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [Policy Fwk Specs Related] Operation child element in sca:Implementation

2007-11-12 Thread Jean-Sebastien Delfino

Venkata Krishnan wrote:

Hi,

The PolicyFwk specs have the following:

984 <component name="xs:NCName">
985   <implementation.* policySets="listOfQNames"
986        requires="list of intent xs:QNames">
987     …
988     <operation name="xs:string" service="xs:string"?
989          policySets="listOfQNames"?
990          requires="listOfQNames"?/>*
991     …
992   </implementation>
993 …
994 </component>

The xsd for 'implementation' (sca:Implementation) in the Assembly Model
specs does not seem to have the element 'operation' defined as a child
element. Is this something to fix in the specs, or am I missing something
here?

Thanks

- Venkat

  


Looks like a bug in the XSD.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Minimum support for generics in Java interface introspector

2007-11-12 Thread Jean-Sebastien Delfino
Still trying to make the Tutorial use simple business objects instead of 
Strings... I need to add minimum support for generics to the Java 
interface introspector.


This will allow it to correctly introspect the Cart interface, defined 
as follows:


@Remotable
public interface Cart extends Collection<String, Item> {
}

@Remotable
public interface Collection<K, D> {

   // Get the whole collection.
   Map<K, D> getAll();

   // Return a collection resulting from a query.
   Map<K, D> query(String queryString);

   // Create a new item.
   K post(D item);

   ...
}

With this change Cart.post() will be correctly introspected as:
String post(Item item);

instead of what it is now:
Object post(Object item);
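
For reference, the kind of resolution involved is available through plain Java
reflection. The sketch below is only an illustration of the mechanism, not the
Tuscany introspector code, and it assumes the Cart, Collection and Item
interfaces shown above are in the same package on the classpath:

import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.lang.reflect.TypeVariable;

// Map the type variables K and D declared on Collection to the actual type
// arguments supplied by Cart, so post(D) can be reported as String post(Item).
public class GenericIntrospectionSketch {
    public static void main(String[] args) throws Exception {
        ParameterizedType cartAsCollection =
            (ParameterizedType) Cart.class.getGenericInterfaces()[0];
        TypeVariable<?>[] declared = Collection.class.getTypeParameters(); // [K, D]
        Type[] actual = cartAsCollection.getActualTypeArguments();         // [String, Item]

        Method post = Collection.class.getMethod("post", Object.class);
        Type returnVar = post.getGenericReturnType();       // the type variable K
        Type paramVar = post.getGenericParameterTypes()[0]; // the type variable D

        // Prints the resolved return and parameter types of post()
        System.out.println(resolve(returnVar, declared, actual) + " post("
            + resolve(paramVar, declared, actual) + ")");
    }

    private static Type resolve(Type var, TypeVariable<?>[] declared, Type[] actual) {
        for (int i = 0; i < declared.length; i++) {
            if (declared[i].equals(var)) {
                return actual[i];
            }
        }
        return var;
    }
}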

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: How WSDL2JAVA generate interfaces (Services versus PortTypes)

2007-11-12 Thread Jean-Sebastien Delfino

Luciano Resende wrote:

While working with the BPEL component type implementation, I came
across some WSDL files that define PortTypes, but no services. These
files, when processed by the Wsdl2Java tool, fail to generate any Java
artifacts, as it looks like the code we have today only processes
services (JavaInterfaceGenerator around line 91). Should the algorithm
used in this process only consider Services, or process the PortTypes
as well?


  


The other way around :) We're using WSDL2Java to generate interfaces, it 
should only consider portTypes, not services.
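
To make that concrete, a minimal sketch of walking the portTypes directly with
wsdl4j would look like the code below (illustration only, not the actual
JavaInterfaceGenerator change):

import java.util.Map;
import javax.wsdl.Definition;
import javax.wsdl.PortType;
import javax.wsdl.factory.WSDLFactory;
import javax.wsdl.xml.WSDLReader;

// Read a WSDL and list its portTypes, so documents that declare no <service>
// element still yield interfaces to generate.
public class PortTypeWalker {
    public static void main(String[] args) throws Exception {
        WSDLReader reader = WSDLFactory.newInstance().newWSDLReader();
        Definition definition = reader.readWSDL(args[0]);
        Map<?, ?> portTypes = definition.getPortTypes();
        for (Object portType : portTypes.values()) {
            System.out.println("Generate interface for: "
                + ((PortType) portType).getQName());
        }
    }
}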


--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Processing reference targets for remote bindings

2007-11-12 Thread Jean-Sebastien Delfino

Simon Laws wrote:

I've started putting some code in the node implementation to allow remote
bindings to make use of reference targets for identifying service endpoints.


It's very simple at the moment. The node implementation uses the
startComposite event as a trigger to

1/ scan all services in the composite and register them with the domain
  I've updated the domain implementation so that when there is no remote
domain the registrations are cached locally. It's the same code
implementing the local and remote domain, so the proxy effectively provides a
write-through cache.

2/ scan all references in the composite and compare them against the
registered service. Replace the binding uri with the uri from the registered
service if appropriate.

The "if appropriate" part is the tricky bit. At the moment the code [1] does
the following.

for (Component component: composite.getComponents()) {
    for (ComponentReference reference: component.getReferences()) {
        for (Binding binding: reference.getBindings()) {
            if (binding.isUnresolved()) {
                if (binding instanceof SCABindingImpl){
                    // TODO - only find uri if its in a remote node
                } else {
                    // find the right endpoint for this reference/binding. This relies
                    // on looking up every binding URI. If a response is returned then
                    // it's set back into the binding uri
                    String uri = "";
                    try {
                        uri = ((SCADomainSPI)scaDomain).findServiceEndpoint(domainURI,
                                                                            binding.getURI(),
                                                                            binding.getClass().getName());
                    } catch(Exception ex) {
                        logger.log(Level.WARNING,
                                   "Unable to find service: " +
                                   domainURI + " " +
                                   nodeURI + " " +
                                   binding.getURI() + " " +
                                   binding.getClass().getName() + " " +
                                   uri);
                    }

                    if (uri.equals("") == false){
                        binding.setURI(uri);
                    }
                }
            }
        }
    }
}

There is a bit of sleight of hand here:

It looks for all unresolved bindings (it seems that all remote bindings are
unresolved - is that right?)

It then uses the binding uri to look up a service which either finds
something or it doesn't

In the case that a binding.uri has no endpoint specified you will see
something like MyComponent/MyService in the binding uri and there's a good
chance that a service will have been registered under this name and hence
the correct target endpoint will be set back into the reference binding.

In the case that a binding uri is set by other means (via the uri attribute
for example) then it will already look something like
"http://myhost:8080/MyComponent/MyService". This will not match any
registered service name and hence will not be reset. This may not match the
real service uri but then that's the user's fault for setting it incorrectly.


This processing is slightly removed from the model itself but relies on the
model and related processing to work out what the binding.uri should be
initially, based on specified target(s) (haven't done anything about the
multiplicity case yet). I welcome any feedback about whether this is going
in the right direction. If people are happy about this I will likely pull
the service registration logic out of the sca binding and treat it in a
similar, more generic, way.

Simon

[1]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/node-impl/src/main/java/org/apache/tuscany/sca/node/impl/SCANodeImpl.java

  


Moving the logic to configure binding URIs from wire targets out of the 
SCA binding looks like the right direction (as this should eventually 
work for all bindings).


IMO steps should be performed in the following sequence, which may 
slightly differ from what you described:

1. ServiceBindingProvider.start() is called.
2. Binding specific code in ServiceBindingProvider.start() sets the 
effective binding URI in the Binding model (as it's the only one to know 
how to determine it).
3. Generic domain code registers binding.getURI() with the domain 
controller.
4. Just before ReferenceBindingProvider.start() is called, generic 
domain code looks the target service up, finds its URI, calls 
binding.setURI().

5. ReferenceBindingProvider.start() proceeds and uses the resolved URI.

Steps 4 and 5 could also be delayed until an invocation hits the 
ReferenceBinding.
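
A rough sketch of steps 3 and 4 as generic domain code, reusing the model
interfaces and the findServiceEndpoint call from the snippet quoted above
(illustrative fragment only, not a patch):

    // Resolve each unresolved reference binding's URI from the domain registry
    // just before its ReferenceBindingProvider is started (step 4).
    void resolveReferenceURIs(Composite composite, SCADomainSPI domain, String domainURI) {
        for (Component component : composite.getComponents()) {
            for (ComponentReference reference : component.getReferences()) {
                for (Binding binding : reference.getBindings()) {
                    if (!binding.isUnresolved()) {
                        continue;
                    }
                    try {
                        String resolved = domain.findServiceEndpoint(domainURI,
                                binding.getURI(), binding.getClass().getName());
                        if (resolved != null && resolved.length() > 0) {
                            binding.setURI(resolved);
                        }
                    } catch (Exception e) {
                        // leave the binding unresolved; a delayed, invocation-time
                        // lookup (the variant of steps 4 and 5 mentioned above) could retry
                    }
                }
            }
        }
    }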


--
Jean-Sebastien



Re: Problem with International characters in a Confluence Wiki page

2007-11-12 Thread Luciano Resende
Just replying to add the tuscany-dev back to the thread.

Also taking the chance to check if there is anyone that could help
drive this issue to a conclusion.

On Nov 7, 2007 2:10 PM, Gav [EMAIL PROTECTED] wrote:
 I created an Infra Issue so this does not get lost.

 https://issues.apache.org/jira/browse/INFRA-1400

 Gav...


  -Original Message-
  From: Luciano Resende [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, 6 November 2007 2:23 AM
  To: Infrastructure Apache
  Cc: tuscany-dev
  Subject: Problem with International characters in a Confluence Wiki page
 
  Hi Confluence Gurus
 
 In Tuscany, we are trying to get a Chinese version of our website,
  where the contents come from a Confluence Wiki. While preparing for
  that, we noticed that after a given time, usually overnight or so, we
  lose all the Chinese characters and they become . Any ideas on
  what's going on and how to fix this ? Sample pages at [1] and [2], and
  a discussion thread on the Tuscany dev-list at [3].
 
  Thoughts ?
 
 
  [1] http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Test+lresende
  [2] http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Chinese+Website
  [3] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg25384.html
 
  --
  Luciano Resende
  Apache Tuscany Committer
  http://people.apache.org/~lresende
  http://lresende.blogspot.com/
 
 
  --
  No virus found in this incoming message.
  Checked by AVG Free Edition.
  Version: 7.5.503 / Virus Database: 269.15.21/1109 - Release Date:
  11/4/2007 11:05 AM




-- 
Luciano Resende
Apache Tuscany Committer
http://people.apache.org/~lresende
http://lresende.blogspot.com/

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: DataBinding issues - ClassCastException

2007-11-12 Thread Jean-Sebastien Delfino

Jean-Sebastien Delfino wrote:
I'm trying to add some support for business objects to the Tutorial 
(instead of just flowing the catalog and cart items as strings).


My business objects are simple JavaBeans with an empty constructor. I 
need to flow them as Java objects through local service calls, XML (in 
Atom payloads) and JSON.


It should have been a simple exercise, but I've been running into a 
number of issues with the DataBinding framework. Here they are:


1. Some of the Bean2* and *2Bean transformers were missing to 
META-INF/services I added them.


2. The Databinding framework was not finding the transformers in XML 
- registrations. I changed all instances of java.lang.String to 
xml.string in the registrations of the *2String and String2* 
transformer to fix that.


3. Next I ran into a ClassCastException in XML2JavaBeanTransformer at
public Object transform(T source, TransformationContext context) {
- -   XMLType xmlType = (XMLType) 
context.getSourceDataType().getLogical();

as the logical type was Class instead of XMLType

4. I had to add implements Serializable to my business object class 
as it looks like we are using Java serialization to enforce pass by 
value in local service calls. I don't think it's right as we are not 
using Java serialization to pass the same business object through a 
remote call. We should change to use the same XML transformation to 
enforce pass by value with local calls as well, and not require the 
business objects to be Serializable.


5. After changes (1), (2), (3) and (4) I am now running into a build 
issue:
testTransform1(org.apache.tuscany.sca.databinding.impl.MediatorImplTestCase)  
Time elapsed: 0.02 sec   ERROR!
java.lang.ClassCastException: 
org.apache.tuscany.sca.databinding.DefaultTransformerExtensionPoint$LazyPullTransformer 
cannot be cast to org.apache.tuscany.sca.databinding.DataPipeTransformer
   at 
org.apache.tuscany.sca.databinding.impl.MediatorImpl.mediate(MediatorImpl.java:75) 

   at 
org.apache.tuscany.sca.databinding.impl.MediatorImplTestCase.testTransform1(MediatorImplTestCase.java:104) 


   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 

   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 


   at java.lang.reflect.Method.invoke(Method.java:597)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:232)
   at junit.framework.TestSuite.run(TestSuite.java:227)
   at 
org.junit.internal.runners.OldTestClassRunner.run(OldTestClassRunner.java:35) 

   at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62) 

   at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138) 

   at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:125) 


   at org.apache.maven.surefire.Surefire.run(Surefire.java:132)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 

   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 


   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:299) 

   at 
org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:837) 



I'm getting to the end of the rope here... does anyone know what can 
cause error (5)?


Thanks



A little more info:

The ClassCastException is on:
   } else if (transformer instanceof PushTransformer) {
   DataPipeTransformer dataPipeFactory = (i < size - 1) ? (DataPipeTransformer)path.get(++i) : null;


path =
[EMAIL PROTECTED],
[EMAIL PROTECTED];className=org.apache.tuscany.sca.databinding.xml.SAX2DOMPipe]

sourceDataType = class java.lang.String java.lang.String class 
java.lang.String
targetDataType = interface org.w3c.dom.Node org.w3c.dom.Node interface 
org.w3c.dom.Node
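
If the path really can contain a non-pipe transformer at that position, one
possible way to make that spot defensive is to check before casting instead of
casting unconditionally. This is only a hypothetical guard, not a verified
fix; the real problem may equally well be in how the transformation path is
computed:

    // Hypothetical guard only: look ahead in the transformation path and
    // consume the next step as a DataPipeTransformer only when it is one.
    Object next = (i < size - 1) ? path.get(i + 1) : null;
    DataPipeTransformer dataPipeFactory = null;
    if (next instanceof DataPipeTransformer) {
        dataPipeFactory = (DataPipeTransformer) next;
        i++;
    }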


--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[continuum] BUILD ERROR: Apache Tuscany SCA Implementation Project

2007-11-12 Thread Continuum VMBuild Server

Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=21200projectId=277

Build statistics:
 State: Error
 Previous State: Failed
 Started at: Mon 12 Nov 2007 09:14:36 -0800
 Finished at: Mon 12 Nov 2007 10:09:31 -0800
 Total time: 54m 54s
 Build Trigger: Schedule
 Build Number: 8
 Exit code: 0
 Building machine hostname: vmbuild.apache.org
 Operating system : Linux(unknown)
 Java Home version : 
 java version 1.5.0_12

 Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04)
 Java HotSpot(TM) Client VM (build 1.5.0_12-b04, mixed mode, sharing)
   
 Builder version :

 Maven version: 2.0.7
 Java version: 1.5.0_12
 OS name: linux version: 2.6.20-16-server arch: i386
   


SCM Changes:

Changed: jsdelfino @ Mon 12 Nov 2007 08:53:29 -0800
Comment: Changed tutorial to use real business objects for the catalog and cart 
items instead of just strings.
Files changed:
 /incubator/tuscany/java/sca/tutorial/assets/services/Cart.java ( 594211 )
 /incubator/tuscany/java/sca/tutorial/assets/services/Catalog.java ( 594211 )
 /incubator/tuscany/java/sca/tutorial/assets/services/FruitsCatalogImpl.java ( 
594211 )
 /incubator/tuscany/java/sca/tutorial/assets/services/Item.java ( 594211 )
 /incubator/tuscany/java/sca/tutorial/assets/services/ShoppingCartImpl.java ( 
594211 )
 
/incubator/tuscany/java/sca/tutorial/assets/services/VegetablesCatalogImpl.java 
( 594211 )
 
/incubator/tuscany/java/sca/tutorial/assets/services/db/ShoppingCartTableImpl.java
 ( 594211 )
 /incubator/tuscany/java/sca/tutorial/assets/services/db/cart.sql ( 594211 )
 
/incubator/tuscany/java/sca/tutorial/assets/services/merger/MergedCatalogImpl.java
 ( 594211 )
 /incubator/tuscany/java/sca/tutorial/store/store-db.composite ( 594211 )
 /incubator/tuscany/java/sca/tutorial/store/store-merger.composite ( 594211 )
 /incubator/tuscany/java/sca/tutorial/store/store.composite ( 594211 )
 /incubator/tuscany/java/sca/tutorial/store/uiservices/store.html ( 594211 )
 /incubator/tuscany/java/sca/tutorial/store-eu/store-eu.composite ( 594211 )
 /incubator/tuscany/java/sca/tutorial/store-eu/uiservices/store.html ( 594211 )


Dependencies Changes:

No dependencies changed



Build Defintion:

POM filename: pom.xml
Goals: -Pdistribution clean install   
Arguments: --batch-mode

Build Fresh: false
Always Build: false
Default Build Definition: true
Schedule: DEFAULT_SCHEDULE
Profile Name: Java 5, Large Memory
Description: 




Test Summary:

Tests: 1028
Failures: 1
Total time: 1309347


Build Error:

org.apache.maven.continuum.execution.ContinuumBuildCancelledException: The 
build was cancelled
at 
org.apache.maven.continuum.execution.AbstractBuildExecutor.executeShellCommand(AbstractBuildExecutor.java:216)
at 
org.apache.maven.continuum.execution.maven.m2.MavenTwoBuildExecutor.build(MavenTwoBuildExecutor.java:149)
at 
org.apache.maven.continuum.core.action.ExecuteBuilderContinuumAction.execute(ExecuteBuilderContinuumAction.java:140)
at 
org.apache.maven.continuum.buildcontroller.DefaultBuildController.performAction(DefaultBuildController.java:417)
at 
org.apache.maven.continuum.buildcontroller.DefaultBuildController.build(DefaultBuildController.java:156)
at 
org.apache.maven.continuum.buildcontroller.BuildProjectTaskExecutor.executeTask(BuildProjectTaskExecutor.java:50)
at 
org.codehaus.plexus.taskqueue.execution.ThreadedTaskQueueExecutor$ExecutorRunnable$1.run(ThreadedTaskQueueExecutor.java:116)
at 
edu.emory.mathcs.backport.java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:442)
at 
edu.emory.mathcs.backport.java.util.concurrent.FutureTask.run(FutureTask.java:176)
at 
edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:665)
at 
edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:690)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.codehaus.plexus.util.cli.CommandLineException: Error while 
executing external command, process killed.
at 

[jira] Updated: (TUSCANY-1907) Dynamic Wiring first steps.

2007-11-12 Thread Giorgio Zoppi (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giorgio Zoppi updated TUSCANY-1907:
---

Attachment: dynamic-wiring.dif

This is not yet well tested. So please await further patches.

 Dynamic Wiring first steps.
 ---

 Key: TUSCANY-1907
 URL: https://issues.apache.org/jira/browse/TUSCANY-1907
 Project: Tuscany
  Issue Type: New Feature
Reporter: Giorgio Zoppi
 Attachments: dynamic-wiring.dif


 This patch is my first step in order to dynamically add a component to an 
 existing contribution. I submit it only for tracking, so tomorrow I'll have a 
 starting point.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Created: (TUSCANY-1907) Dynamic Wiring first steps.

2007-11-12 Thread Giorgio Zoppi (JIRA)
Dynamic Wiring first steps.
---

 Key: TUSCANY-1907
 URL: https://issues.apache.org/jira/browse/TUSCANY-1907
 Project: Tuscany
  Issue Type: New Feature
Reporter: Giorgio Zoppi
 Attachments: dynamic-wiring.dif

This patch is my first step in order to dynamically add a component to an 
existing contribution. I submit it only for tracking, so tomorrow I'll have a 
starting point.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: DataBinding issues - ClassCastException

2007-11-12 Thread Jean-Sebastien Delfino

Jean-Sebastien Delfino wrote:

Jean-Sebastien Delfino wrote:
I'm trying to add some support for business objects to the Tutorial 
(instead of just flowing the catalog and cart items as strings).


My business objects are simple JavaBeans with an empty constructor. I 
need to flow them as Java objects through local service calls, XML 
(in Atom payloads) and JSON.


It should have been a simple exercise, but I've been running into a 
number of issues with the DataBinding framework. Here they are:


1. Some of the Bean2* and *2Bean transformers were missing to 
META-INF/services I added them.


2. The Databinding framework was not finding the transformers in XML 
- registrations. I changed all instances of java.lang.String to 
xml.string in the registrations of the *2String and String2* 
transformer to fix that.


3. Next I ran into a ClassCastException in XML2JavaBeanTransformer at
public Object transform(T source, TransformationContext context) {
- -   XMLType xmlType = (XMLType) 
context.getSourceDataType().getLogical();

as the logical type was Class instead of XMLType

4. I had to add implements Serializable to my business object class 
as it looks like we are using Java serialization to enforce pass by 
value in local service calls. I don't think it's right as we are not 
using Java serialization to pass the same business object through a 
remote call. We should change to use the same XML transformation to 
enforce pass by value with local calls as well, and not require the 
business objects to be Serializable.


5. After changes (1), (2), (3) and (4) I am now running into a build 
issue:
testTransform1(org.apache.tuscany.sca.databinding.impl.MediatorImplTestCase)  
Time elapsed: 0.02 sec   ERROR!
java.lang.ClassCastException: 
org.apache.tuscany.sca.databinding.DefaultTransformerExtensionPoint$LazyPullTransformer 
cannot be cast to org.apache.tuscany.sca.databinding.DataPipeTransformer
   at 
org.apache.tuscany.sca.databinding.impl.MediatorImpl.mediate(MediatorImpl.java:75) 

   at 
org.apache.tuscany.sca.databinding.impl.MediatorImplTestCase.testTransform1(MediatorImplTestCase.java:104) 


   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 

   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 


   at java.lang.reflect.Method.invoke(Method.java:597)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:232)
   at junit.framework.TestSuite.run(TestSuite.java:227)
   at 
org.junit.internal.runners.OldTestClassRunner.run(OldTestClassRunner.java:35) 

   at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62) 

   at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138) 

   at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:125) 


   at org.apache.maven.surefire.Surefire.run(Surefire.java:132)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 

   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 


   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:299) 

   at 
org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:837) 



I'm getting to the end of the rope here... does anyone know what can 
cause error (5)?


Thanks



A little more info:

The ClassCastException is on:
   } else if (transformer instanceof PushTransformer) {
   DataPipeTransformer dataPipeFactory = (i < size - 1) ? (DataPipeTransformer)path.get(++i) : null;


path =
[EMAIL PROTECTED],
[EMAIL PROTECTED];className=org.apache.tuscany.sca.databinding.xml.SAX2DOMPipe] 



sourceDataType = class java.lang.String java.lang.String class 
java.lang.String
targetDataType = interface org.w3c.dom.Node org.w3c.dom.Node interface 
org.w3c.dom.Node




Next stop on the road to databinding happiness:  Revert changes (2) as 
registering the transformers with xml.string seems to confuse all the 
code that uses java.lang.String instead of xml.string as databinding 
name or id.


I then get the following exception:
Running 
org.apache.tuscany.sca.binding.ws.axis2.itests.HelloWorldNoWSDLTestCase
info: Added Servlet mapping: 

[jira] Created: (TUSCANY-1908) SDO sample code must be updated about SDO core changes

2007-11-12 Thread Adriano Crestani (JIRA)
SDO sample code must be updated about SDO core changes
--

 Key: TUSCANY-1908
 URL: https://issues.apache.org/jira/browse/TUSCANY-1908
 Project: Tuscany
  Issue Type: Bug
  Components: C++ SDO
Affects Versions: Cpp-M3
Reporter: Adriano Crestani
Assignee: Adriano Crestani
Priority: Critical
 Fix For: Cpp-M4


SDO C++ core had its DataObject::setInteger and getInteger method renamed to 
setInt and getInt respectively. These changes must be reflected on SDO sample

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Resolved: (TUSCANY-1908) SDO sample code must be updated about SDO core changes

2007-11-12 Thread Adriano Crestani (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adriano Crestani resolved TUSCANY-1908.
---

Resolution: Fixed

resolved on revision 594291

 SDO sample code must be updated about SDO core changes
 --

 Key: TUSCANY-1908
 URL: https://issues.apache.org/jira/browse/TUSCANY-1908
 Project: Tuscany
  Issue Type: Bug
  Components: C++ SDO
Affects Versions: Cpp-M3
Reporter: Adriano Crestani
Assignee: Adriano Crestani
Priority: Critical
 Fix For: Cpp-M4


 SDO C++ core had its DataObject::setInteger and getInteger methods renamed to 
 setInt and getInt respectively. These changes must be reflected in the SDO sample code.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Processing reference targets for remote bindings

2007-11-12 Thread Simon Laws
On Nov 12, 2007 5:23 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Simon Laws wrote:
  I've started putting some code in the node implementation to allow
 remote
  bindings to make use of reference targets for identifying service
 endpoints.
 
 
  It's very simple at the moment. The node implementation uses the
  startComposite event as a trigger to
 
  1/ scan all services in the composite and register them with the domain
I've updated the domain implementation so that when there is no remote
  domain the registrations are cached locally. It's the same code
  implementing the local and remote domain, so the proxy effectively provides a
  write-through cache.
 
  2/ scan all references in the composite and compare them against the
  registered service. Replace the binding uri with the uri from the
 registered
  service if appropriate.
 
  The if appropriate part is the tricky bit. At the moment the code [1]
 does
  the following.
 
  for (Component component: composite.getComponents()) {
      for (ComponentReference reference: component.getReferences()) {
          for (Binding binding: reference.getBindings()) {
              if (binding.isUnresolved()) {
                  if (binding instanceof SCABindingImpl) {
                      // TODO - only find uri if its in a remote node
                  } else {
                      // find the right endpoint for this reference/binding. This relies on looking
                      // up every binding URI. If a response is returned then it's set back into the
                      // binding uri
                      String uri = "";
                      try {
                          uri = ((SCADomainSPI)scaDomain).findServiceEndpoint(domainURI,
                                                                              binding.getURI(),
                                                                              binding.getClass().getName());
                      } catch (Exception ex) {
                          logger.log(Level.WARNING, "Unable to find service: " +
                                     domainURI + " " +
                                     nodeURI + " " +
                                     binding.getURI() + " " +
                                     binding.getClass().getName() + " " +
                                     uri);
                      }

                      if (uri.equals("") == false) {
                          binding.setURI(uri);
                      }
                  }
              }
          }
      }
  }
 
  There is a bit of sleight of hand here:
 
  It looks for all unresolved bindings (it seems that all remote bindings
 are
  unresolved - is that right?)
 
  It then uses the binding uri to look up a service which either finds
  something or it doesn't
 
  In the case that a binding.uri has no endpoint specified you will see
  something like MyComponent/MyService in the binding uri and there's a
 good
  chance that a service will have been registered under this name and
 hence
  the correct target endpoint will be set back into the reference binding.
 
  In the case that a binding uri is set by other means (via the uri attribute
  for example) then it will already look something like
  http://myhost:8080/MyComponent/MyService. This will not match any registered
  service name and hence will not be reset. This may not match the real service
  uri but then that's the user's fault for setting it incorrectly.
 
 
  This processing is slightly removed from the model itself but relies on
 the
  model and related processing to work out what the binding.uri should be
  initially, based on specified target(s) (haven't done anything about the
  multiplicity case yet). I welcome any feedback about whether this is
 going
  in the right direction. If people are happy about this I will likely
 pull
  the service registration logic out of the sca binding and treat it in a
  similar, more generic, way.
 
  Simon
 
  [1]
 
 http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/node-impl/src/main/java/org/apache/tuscany/sca/node/impl/SCANodeImpl.java
 
 

 Moving the logic to configure binding URIs from wire targets out of the
 SCA binding looks like the right direction (as this should eventually
 work for all bindings).

 IMO steps should be performed in the following sequence, which may
 slightly differ from what you described:
 1. ServiceBindingProvider.start() is called.
 2. Binding specific code in ServiceBindingProvider.start() sets the
 effective binding URI in the Binding model (as it's the only one to know
 how to determine it).
 3. Generic domain code registers binding.getURI() with the domain
 controller.
 4. Just before ReferenceBindingProvider.start() is called, generic
 domain code looks the target service up, finds its URI, 

Updated tutorial, was: DataBinding issues - ClassCastException

2007-11-12 Thread Jean-Sebastien Delfino

[snip]
Jean-Sebastien Delfino wrote:

Jean-Sebastien Delfino wrote:

Jean-Sebastien Delfino wrote:
I'm trying to add some support for business objects to the Tutorial 
(instead of just flowing the catalog and cart items as strings).


My business objects are simple JavaBeans with an empty constructor. 
I need to flow them as Java objects through local service calls, XML 
(in Atom payloads) and JSON.
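
For illustration only, a bean along these lines is the kind of thing involved; 
the real class is Item in the tutorial assets module, and the fields below are 
assumptions rather than its actual shape:

    // Hypothetical sketch of such a JavaBean; field names are assumptions,
    // the actual class is Item in the tutorial assets module.
    public class Item {
        private String name;
        private String price;

        public Item() {}  // empty constructor required by the databindings

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getPrice() { return price; }
        public void setPrice(String price) { this.price = price; }
    }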


It should have been a simple exercise, but I've been running into a 
number of issues with the DataBinding framework. Here they are:




OK, I've been able to fix or workaround the issues I described earlier. 
An update of the tutorial using business objects (see class Item in the 
assets module) is now available in SVN.


I also put under tutorial a Tutorial.pdf that shows the steps in the 
construction of the tutorial application.


After what I've been through trying to get my little Item object 
converted to XML and JSON I'm going to do a little bit of thinking and 
I'll post some suggestions later to try to simplify the whole 
databinding story.


--

Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Processing reference targets for remote bindings

2007-11-12 Thread Jean-Sebastien Delfino

[snip]
Simon Laws wrote:

Thanks for the comments. I think we are pretty close on the sequence. It may
be that we differ on how this is plumbed into the infrastructure.

OK I'll take a look at the code if it's already in SVN.


For steps
3 and 4 I'm dealing with all of the services, references in one go at the
Node level between calls to activate and start on the composite activator.
  

(3) needs to be after ServiceBindings have been started, i.e. after (2) :)

From your comment I gather that it's currently the other way around, or 
did I misunderstand?



Are you suggesting that this processing should be more closely integrated
with the composite activator?
  


I'd suggest having it self-contained with a clean interface, not 
closely integrated with anything.


CompositeActivator needs some serious cleanup before adding more code to it.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Ant Build Distribution System

2007-11-12 Thread Adriano Crestani
I've added a distribution example of what is being generated by the
distribution and pack.distribution targets on my sandbox [1].

- src and bin dirs are generated by the distribution target
- the .zip, .tar.gz, .md5 and .asc (still working on it) are being generated
by the pack.distribution target from the bin and src dirs
- there are still no NOTICE, COPYRIGHT, README and LICENSE files because I
haven't added them yet

[1] https://svn.apache.org/repos/asf/incubator/tuscany/sandbox/crestani

Adriano Crestani

On Nov 11, 2007 10:10 PM, Luciano Resende [EMAIL PROTECTED] wrote:

 If you could post a sample distro in your p.a.o account, it would make
 it easier for others to review, at least for those that do not have
 the full native environment set up.

 On Nov 11, 2007 9:19 PM, Adriano Crestani [EMAIL PROTECTED]
 wrote:
  Hi,
 
  On revision 594022 I've added new targets, distribution and
  pack.distribution, to the DAS Ant build system. The distribution target
  creates a distribution file structure for both the src and bin distributions.
  The pack.distribution target packs the generated distribution files and
  generates the .md5 and .asc (still working on it) files.
 
  I've also updated the ANT_README_AND_INSTALL file with the description
 of
  these new targets.
 
 
   I'd like someone else to review the distribution structure that these new
   targets are creating and give some suggestions. Then, after everything is ok
   with these new targets, we could reapply it to the SDO and SCA projects.
 
  Thoughts? Suggestions?
 
  Adriano Crestani
 



 --
 Luciano Resende
 Apache Tuscany Committer
  http://people.apache.org/~lresende
 http://lresende.blogspot.com/

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: [DAS] DAS samples

2007-11-12 Thread Luciano Resende
Basically, I changed the company-webapp not to create a database so I
could use the one I created with authentication required. And I didn't
use the new code, basically I used the support to configure
username/password when defining the data source.
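
For context, here is a hedged sketch of how a webapp typically resolves such a 
container-defined data source; the JNDI name follows the Resource definition 
quoted below, and the rest is plain JNDI rather than DAS-specific code:

    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    // Hedged sketch: look up the Tomcat-defined Resource through JNDI. The name
    // "jdbc/dastest" matches the Resource entry quoted below; the credentials
    // come from the data source definition, not from the application.
    public class DataSourceLookup {
        public static DataSource lookup() throws Exception {
            InitialContext ctx = new InitialContext();
            return (DataSource) ctx.lookup("java:comp/env/jdbc/dastest");
        }
    }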


On Nov 11, 2007 11:32 PM, Amita Vadhavkar [EMAIL PROTECTED] wrote:
 What are the modifications you did for this in company webapp? Did you
 see that the DS.getConnection(id, pwd) signature is being used
 by DAS during this case?

 Regards,
 Amita


 On Nov 9, 2007 11:40 AM, Luciano Resende [EMAIL PROTECTED] wrote:
  Hi Amita
 
 I replied to this other thread [1] with some questions. But just
  FYI, I was able to run a slightly modified version of
  company-webapp using a secured derby database in TC by specifying the
  username/password on the datasource definition.
 
  <Resource name="jdbc/dastest"
     type="javax.sql.DataSource"
     auth="Container"
     description="Derby database for DAS Samples"
     maxActive="100" maxIdle="30"
     maxWait="1" username="dastest"
     password="dastest"
     driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
     url="jdbc:derby:D:/Opensource-Servers/apache-tomcat-5.5.20/Databases/dastest;create=true"/>
 
 
  [1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg25244.html
 
 
  On Nov 6, 2007 8:33 PM, Amita Vadhavkar [EMAIL PROTECTED] wrote:
   1 readmes commit - I will wait and make it along with any other
   changes in JIRA-1698 as we are still deciding which sample to use to
   demo the feature of
   JIRA-1698
  
   2 There are 3 test cases in ConnectionTests, please see if you find
   some other cases that can be included.
  
    3 Using jboss jars - as these were available on the mvn repo, I missed
    the point about the license; if the license is an issue, then these cannot
    be used.
  
TC has
   (default)BasicDataSource - which does not support getConnection(id, pwd)
   and
   PerUserPoolDataSource, SharedPoolDataSource - which support
   getConnection(id, pwd)
  
    When trying to configure PerUserPoolDataSource, SharedPoolDataSource
    with TC 6.0.14, I was getting different errors; I will see if I can get
    this working.
  
   I am not doing any commits related to this JIRA, till 3 or some other
   sample is formed, so all changes will go together.
  
   Regards,
   Amita
  
  
   On Nov 6, 2007 11:06 PM, Luciano Resende [EMAIL PROTECTED] wrote:
Comments inline :
   
On 11/6/07, Amita Vadhavkar [EMAIL PROTECTED] wrote:
 changes done -

 1) cleaned readme files using eclipse IDE html editor - samples, 
 dbconfig
 Good, thanks, please commit this, you don't have to wait anymore :)
   
 2) replaced MySQL with Derby
Just want to make sure you have all the functionality you need in 
Derby...
   
 3) replaced sun provided JNDI jars with jboss jar - because - these 
 are
 available in mvn repos and only 3 are required in the build path
 (jboss-common 3.2.3, jnp-client 4.0.2 and jnpserver 3.2.3  - total 
 350 KB)
   
 I downloaded the jars, but couldn't find any license files there.
 Also, JBOSS stuff tends to be LGPL and that is not ASF-friendly, so
 could you please point me to the proper license for these files?
   
   
 4) added more test cases in ConnectionTests.java and removed 
 sample-dataSource
 5) patch attached to JIRA-1698

 Please see if there are any problems in the above, else I will commit 
 the
 patch.
  The bin size increase due to jboss jars is 350 KB and so it may be OK
  to make it test cases instead of a sample.

   
   
 Well, in summary, it's a lot of dependency issues to demonstrate that we
 now support authentication when retrieving the datasource
 connection... and based on the dependencies being dragged into the DAS
 distro... I'm now inclined to have just a sample, or simpler, just
 document it in the User Guide.
   
 BTW, I'll play with this over the weekend and try to make this work
 in TC with Companyweb... Maybe this is a simpler solution :)
   
   
 Regards,
 Amita

 On 11/5/07, Amita Vadhavkar [EMAIL PROTECTED] wrote:
 
 
 
  On 11/5/07, Luciano Resende [EMAIL PROTECTED] wrote:
  
    I was trying to run the new DAS sample (dataSource) and it looks 
   like
   it requires MySQL in order to run the sample, this might not be 
   the
   best default configuration to require, as it requires lots of 
   steps in
   order to just try the sample ( e.g install MySQL), and it also 
   makes it
   difficult to test the sample during build. I'd like to suggest two
   things for our DAS Sample applications :
  
   - Use Derby as the default database in a sample application
 
 
  Agree, done changes for this
 
  - Have a simple unit test to quickly check if 

Re: [DAS] Use datasource.getConnection(user, password) and datasource.getConnection() both

2007-11-12 Thread Luciano Resende
Any reason why this can't be set up during data source configuration?
If different applications require different configurations, such as
username and password, the administrator could configure multiple data
sources. I'm just trying to avoid the scenario where, after a password
change, now I have to go to every application and change the DAS
config file to set new username/password.

Thoughts ?

On Nov 11, 2007 11:34 PM, Amita Vadhavkar [EMAIL PROTECTED] wrote:
 In a multi user system, it is possible that the data source is
 deployed using one id/pwd and
 connections are obtained by different users (different user id/pwd)
 using the same deployed data source.

 See -
 http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/ad/cjvjcsup.htm
 Use the second form if you need to specify a user ID and password for
 the connection that are different from the ones that were specified
 when the DataSource was deployed.

 http://download-west.oracle.com/docs/cd/B14099_19/web.1012/b14012/datasrc.htm#i1085287
 This user name and password overrides the user name and password that
 are defined in the data source definition.

 This is the reason why it will be useful to support this second form
 in RDB DAS so as to support full scaled systems

 Regards,
 Amita


 On Nov 9, 2007 11:31 AM, Luciano Resende [EMAIL PROTECTED] wrote:
  Hi Amita
 
 I finally found some time to spend on this issue, and had a
  question in mind. When using datasource,  what's the difference from
  the username and password that can be defined on the datasource
  itself, and the one a user set on the connection property inside the
  das config ?
 
In order to test this, here is what I did :
 - created a secured derby database (requires username/password)
  - configured a datasource in TC, specifying the username, password
 
   <Resource name="jdbc/dastest"
      type="javax.sql.DataSource"
      auth="Container"
      description="Derby database for DAS Samples"
      maxActive="100" maxIdle="30"
      maxWait="1" username="dastest"
      password="dastest"
      driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
      url="jdbc:derby:D:/Opensource-Servers/apache-tomcat-5.5.20/Databases/dastest;create=true"/>
 
 - use a das config pointing to a datasource, without specifying
  connectionProperties.
 
  And this worked fine for me, using a slightly modified company-webapp
  sample. I also remember trying very similar datasource with MySQL and
  having no problems...
 
   In which case would we need to use the username/password from the das
   config, instead of the one configured with the datasource?
 
 
  On Oct 30, 2007 12:40 AM, Amita Vadhavkar [EMAIL PROTECTED] wrote:
   The requirement was a bit hidden inside below mail thread
   http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg23299.html
  
   Below is the JIRA issue for it.
   https://issues.apache.org/jira/browse/TUSCANY-1698
  
   checked so far that tomcat BasicDataSource.getConnection() does not 
   support
   passing in params for username and password.
   This may be how the current code that uses getConnection(no param) worked 
   so
   far.
  
   But other app servers like WebSphere, or if users of Tomcat opt to use a
   different connection pool than the one
   supplied by Tomcat, may need getConnection(userName, password). So to keep
   things generic, DAS can use userName/password
   when available in config. In case of exception upon usage or if
   userName/password not present in Config, DAS can
   attempt getConnection(no params) - the way it is doing today.
  
   We can use the current config as is without any changes like below -
  
    <xsd:complexType name="ConnectionInfo">
      <xsd:sequence>
        <xsd:element maxOccurs="1" minOccurs="0"
           name="ConnectionProperties" type="config:ConnectionProperties"/>
      </xsd:sequence>
      <xsd:attribute name="dataSource" type="xsd:string"/>
      <xsd:attribute name="managedtx" type="xsd:boolean" default="true"/>
    </xsd:complexType>

    <xsd:complexType name="ConnectionProperties">
      <xsd:attribute name="driverClass" type="xsd:string"/>
      <xsd:attribute name="databaseURL" type="xsd:string"/>
      <xsd:attribute name="loginTimeout" type="xsd:int" default="0"/>
      <xsd:attribute name="userName" type="xsd:string" default=""/>
      <xsd:attribute name="password" type="xsd:string" default=""/>
    </xsd:complexType>
  
   When ConnectionProperties contain userName, password they will be used to
   obtain connection (DriverManaged based or DS based).
   e.g.
   DataSource -
    <ConnectionInfo dataSource="java:comp/env/jdbc/ajaxdastest">
      <ConnectionProperties
         userName="dastest"
         password="dastest"
         />
    </ConnectionInfo>
  
   DriverManager -
    <ConnectionInfo>
      <ConnectionProperties
         driverClass="com.mysql.jdbc.Driver"
   

Re: [DAS] Use datasource.getConnection(user, password) and datasource.getConnection() both

2007-11-12 Thread Amita Vadhavkar
The data sources can always be deployed with different id/pwds. JDBC
supports ds.getConnection(id, pwd) for the sake of flexibility, so that
any valid user can obtain a connection to the database using a ds, even
when the ds is deployed with a different set of id/pwd.
This code change is not changing any existing behavior, but just adding
complete support for ds.getConnection(). If the admin deploys 10
different ds with 10 different id/pwds, the das user can use a das
config without specifying any id/pwd for the connection and it will
work as before. But in case the admin chooses to deploy only 1 ds with
1 id/pwd, the das user will have the privilege to obtain a connection
by supplying id/pwd in the das config.
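
To make the proposal concrete, here is a hedged sketch of the behaviour being
described; this is not the actual RDB DAS code, and the class and method names
are assumptions:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Hedged sketch of the proposed fallback: use getConnection(user, pwd) when
    // the das config supplies credentials, otherwise keep today's behaviour and
    // call the no-arg getConnection().
    public class ConnectionHelper {
        public static Connection connect(DataSource ds, String user, String pwd)
                throws SQLException {
            if (user != null && user.length() > 0) {
                try {
                    return ds.getConnection(user, pwd);
                } catch (SQLException e) {
                    // fall back to the no-arg form, as described above
                }
            }
            return ds.getConnection();
        }
    }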

On Nov 13, 2007 7:21 AM, Luciano Resende [EMAIL PROTECTED] wrote:
 Any reason why this can't be set up during data source configuration?
 If different applications require different configurations, such as
 username and password, the administrator could configure multiple data
 sources. I'm just trying to avoid the scenario where, after a password
 change, now I have to go to every application and change the DAS
 config file to set new username/password.

 Thoughts ?


 On Nov 11, 2007 11:34 PM, Amita Vadhavkar [EMAIL PROTECTED] wrote:
  In a multi user system, it is possible that the data source is
  deployed using one id/pwd and
  connections are obtained by different users (different user id/pwd)
  using the same deployed data source.
 
  See -
  http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/ad/cjvjcsup.htm
  Use the second form if you need to specify a user ID and password for
  the connection that are different from the ones that were specified
  when the DataSource was deployed.
 
  http://download-west.oracle.com/docs/cd/B14099_19/web.1012/b14012/datasrc.htm#i1085287
  This user name and password overrides the user name and password that
  are defined in the data source definition.
 
  This is the reason why it will be useful to support this second form
  in RDB DAS so as to support full scaled systems
 
  Regards,
  Amita
 
 
  On Nov 9, 2007 11:31 AM, Luciano Resende [EMAIL PROTECTED] wrote:
   Hi Amita
  
  I finally found some time to spend on this issue, and had a
   question in mind. When using datasource,  what's the difference from
   the username and password that can be defined on the datasource
   itself, and the one a user set on the connection property inside the
   das config ?
  
 In order to test this, here is what I did :
  - created a secured derby database (requires username/password)
   - configured a datasource in TC, specifying the username, password
  
    <Resource name="jdbc/dastest"
       type="javax.sql.DataSource"
       auth="Container"
       description="Derby database for DAS Samples"
       maxActive="100" maxIdle="30"
       maxWait="1" username="dastest"
       password="dastest"
       driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
       url="jdbc:derby:D:/Opensource-Servers/apache-tomcat-5.5.20/Databases/dastest;create=true"/>
  
  - use a das config pointing to a datasource, without specifying
   connectionProperties.
  
   And this worked fine for me, using a slightly modified company-webapp
   sample. I also remember trying very similar datasource with MySQL and
   having no problems...
  
    In which case would we need to use the username/password from the das
    config, instead of the one configured with the datasource?
  
  
   On Oct 30, 2007 12:40 AM, Amita Vadhavkar [EMAIL PROTECTED] wrote:
The requirement was a bit hidden inside below mail thread
http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg23299.html
   
Below is the JIRA issue for it.
https://issues.apache.org/jira/browse/TUSCANY-1698
   
checked so far that tomcat BasicDataSource.getConnection() does not 
support
passing in params for username and password.
This may be how the current code that uses getConnection(no param) 
worked so
far.
   
But other app servers like WebSphere, or if users of Tomcat opt to use a
different connection pool than the one
supplied by Tomcat, may need getConnection(userName, password). So to 
keep
things generic, DAS can use userName/password
when available in config. In case of exception upon usage or if
userName/password not present in Config, DAS can
attempt getConnection(no params) - the way it is doing today.
   
We can use the current config as is without any changes like below -
   
    <xsd:complexType name="ConnectionInfo">
      <xsd:sequence>
        <xsd:element maxOccurs="1" minOccurs="0"
           name="ConnectionProperties" type="config:ConnectionProperties"/>
      </xsd:sequence>
      <xsd:attribute name="dataSource" type="xsd:string"/>
      <xsd:attribute name="managedtx" type="xsd:boolean" default="true"/>

Re: Ant Build Distribution System

2007-11-12 Thread Adriano Crestani
As Luciano suggested, I've placed the generated distribution example on my
p.a.o account:

http://people.apache.org/~adrianocrestani/das_distribution_example/

Adriano Crestani

On Nov 12, 2007 2:12 PM, Adriano Crestani [EMAIL PROTECTED]
wrote:

 I've added a distribution example of what is being generated by the
 distribution and pack.distribution targets on my sandbox [1].

  - src and bin dirs are generated by the distribution target
  - the .zip, .tar.gz, .md5 and .asc (still working on it) are being
  generated by the pack.distribution target from the bin and src dirs
  - there are still no NOTICE, COPYRIGHT, README and LICENSE files because I
  haven't added them yet

 [1] https://svn.apache.org/repos/asf/incubator/tuscany/sandbox/crestani

 Adriano Crestani


 On Nov 11, 2007 10:10 PM, Luciano Resende [EMAIL PROTECTED]  wrote:

   If you could post a sample distro in your p.a.o account, it would make
   it easier for others to review, at least for those that do not have
   the full native environment set up.
 
  On Nov 11, 2007 9:19 PM, Adriano Crestani  [EMAIL PROTECTED]
  wrote:
   Hi,
  
    On revision 594022 I've added new targets, distribution and
    pack.distribution, to the DAS Ant build system. The distribution target
    creates a distribution file structure for both the src and bin distributions.
    The pack.distribution target packs the generated distribution files and
    generates the .md5 and .asc (still working on it) files.
  
   I've also updated the ANT_README_AND_INSTALL file with the description
  of
   these new targets.
  
  
    I'd like someone else to review the distribution structure that these new
    targets are creating and give some suggestions. Then, after everything is ok
    with these new targets, we could reapply it to the SDO and SCA projects.
  
   Thoughts? Suggestions?
  
   Adriano Crestani
  
 
 
 
  --
  Luciano Resende
  Apache Tuscany Committer
   http://people.apache.org/~lresende
  http://lresende.blogspot.com/
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]