WireConnectException

2007-04-20 Thread Pamela Fong

What's the meaning of the following exception?

--Start of DE processing-- = [19/04/07 15:28:37:718 BST] , key =
org.apache.tuscany.core.builder.WireConnectException
org.apache.tuscany.spi.extension.CompositeComponentExtension.prepare 392
Exception = org.apache.tuscany.core.builder.WireConnectException
Source =
org.apache.tuscany.spi.extension.CompositeComponentExtension.prepare
probeid = 392
Stack Dump = org.apache.tuscany.core.builder.WireConnectException: Inbound chain must contain at least one interceptor
at org.apache.tuscany.core.builder.ConnectorImpl.connect(ConnectorImpl.java:311)
at org.apache.tuscany.core.builder.ConnectorImpl.connect(ConnectorImpl.java:211)
at org.apache.tuscany.core.builder.ConnectorImpl.connect(ConnectorImpl.java:395)
at org.apache.tuscany.core.builder.ConnectorImpl.connect(ConnectorImpl.java:351)
at org.apache.tuscany.core.builder.ConnectorImpl.handleService(ConnectorImpl.java:556)
at org.apache.tuscany.core.builder.ConnectorImpl.connect(ConnectorImpl.java:93)
at org.apache.tuscany.spi.extension.CompositeComponentExtension.prepare(CompositeComponentExtension.java:388)
at org.apache.tuscany.core.deployer.DeployerImpl.deploy(DeployerImpl.java:126)
at org.apache.tuscany.core.launcher.LauncherImpl.bootApplication(LauncherImpl.java:233)
at com.ibm.ws.sca2.tuscany.util.TuscanyInterfaceImpl.startModule(TuscanyInterfaceImpl.java:275)

The module is defined as follows:

<?xml version="1.0" encoding="UTF-8"?>
<composite
    xmlns="http://www.osoa.org/xmlns/sca/1.0"
    xmlns:wsdli="http://www.w3.org/2006/01/wsdl-instance"
    wsdli:schemaLocation="../../sca-binding-jms.xsd"
    name="JMSIteration1UseCase6ClientComposite">

  <service name="JMSIteration1UseCase6ClientService">
    <interface.java interface="com.ibm.ws.soa.binding.jms.service.JMSIteration1UseCase6Interface"/>
    <binding.jms> .. stuff omitted...
    </binding.jms>

    <reference>JMSIteration1UseCase6ClientReference</reference>
  </service>

  <reference name="JMSIteration1UseCase6ClientReference">
    <interface.java interface="com.ibm.ws.soa.binding.jms.service.JMSIteration1UseCase6Interface"/>
    <binding.jms>
      ... stuff omitted..
    </binding.jms>
  </reference>

</composite>


Re: Website - Feedback please

2007-04-20 Thread Simon Laws

On 4/20/07, haleh mahbod [EMAIL PROTECTED] wrote:


Thanks for your comments. I haven't gone through all the details and will
do that tomorrow. However, this caught my eye and I wanted to better
understand your comment.

 Java SCA
  - Architecture Guide
- still pointing to the old one

What do you mean by "still pointing to the old one"? If you follow the link
you should see this page

http://cwiki.apache.org/TUSCANY/java-sca-architecture-overview.html

I agree that the content should be updated, but want to make sure you are
seeing this page.


 - DeveloperGuide
- I still think there should be a developer guide as it fits well
under

What is a developer guide and how is the content different from what would
go into the 'get involved' page under the development section?

Here is what I was thinking (perhaps it is not right):
User Guide would hold things  like:
 -  Installation/setup information
 -  user type documentation (SCA concepts and examples, etc)
 -  How to develop a simple SCA application followed by more advanced
topics

GetInvolved link would point to information that anyone wanting to
contribute to SCA Java would need to know about, for example, code
structure, hints on development, etc.


Haleh

On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:

 On 4/19/07, ant elder [EMAIL PROTECTED] wrote:
 
  On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:
  
   On 4/19/07, ant elder [EMAIL PROTECTED] wrote:
   
On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:
   
snip/
   
- I like the list of modules I think we should go with the
 module
   name
 from the code and link to a separate
   page for each one. (take a look I've made an example). We
 can
   then
 use
 URLs such as

  http://cwiki.apache.org/confluence/display/TUSCANY/binding-ws to
  refer directly to the module description(*)
   
   
I like the one wiki page with the module name per module and as
 well,
   but
do we really want all of those listed on the Java SCA Subproject
  page?
That page seems more user oriented giving an overview of Java SCA
capabilities, whereas all the individual modules are a really deep
implementation detail. For example how about on the Java SCA
   Subproject
page say Tuscany has a web service binding and links to a web
  service
page
which talks about the WS capabilities and its that web service
 page
where
the list of WS related modules could go: binding-ws,
binding-ws-xml,
binding-ws-axis2, binding-ws-cxf, interface-wsdl,
 interface-wsdl-xml,
interface-wsdl-runtime etc. Similar thing for all the other
binding,
implementation, databinding, runtime etc.
   
   ...ant
   
   I agree that this list doesn't need to go on this page but it would
be
   good
   to have a straight list somewhere so it's easy to get the low down
on
 a
   module. Perhaps in the developer guide as I had hoped that these
module
   pages
   will include design information.  I would expect the user docs for
the
   modules, i.e. what to put in the SCDL to make them work, to go in
the
  User
   Guide section. This could have a more user friendly index
 as  suggested
 
 
  A complete list does sound useful. How about the developer guide links
 to
  something like an architecture page which has a diagram of the
runtime,
 a
  bit of a description about it, and the complete list of modules? Eg,
  Similar
  to [1] but the other way up and more text explaining it.
 
 ...ant
 
  [1]
 
 

http://cwiki.apache.org/confluence/display/TUSCANY/Java+SCA+Modulization+Design+Discussions
 
 I think that's spot on. Didn't know the page had been extended to
include
 the module list. Let's link it into the architecture page (when we decide
 which architecture page we are having ;-). We can use module links from
 this
 page to record module design information. Module user information would
be
 separate of course. So does this hierarchy look right?

 Architecture Guide --- Module list  Module technical detail
   ^
   |
 User Guide --- Implementation/Binding/Databinding... list -- Extension
 User Guide

 We could probably tie the two together as you suggest by indicating
which
 modules are used to implement an Implementation/Binding/Databinding

 Simon



Hi

1/ Architecture Guide.
It was just that the text here has an M2 feel about it, i.e. it refers to
things like Kernel which is not a term used in the code now (core does
appear though). So I think we should go with an architecture page that
describes how the Tuscany runtime is put together. I prefer the level of
detail that you have put on the kernel specific architecture page [2]
although of course this needs updating as well. From this wide ranging
discussion of kernel we can link, in appropriate places, to the technical
details of the individual modules. So, expanding on a previous post, I
would anticipate something like the 

RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when classes don't inherit from TestCase

2007-04-20 Thread Andy Grove

Just for clarification, I think we're saying that the important thing
here is the method naming convention, rather than requiring the tests to
extend TestCase?

If we follow the junit 3.8 naming convention and always use the method
names setUp / tearDown (and make sure they are public methods) and have
all test methods start with "test" then it won't matter if the tests
extend TestCase or have junit 4.1 annotations.

However, just to complicate matters, the tests in the parameterizedTests
package are making use of new junit 4.1 features for providing
parameters to the tests and these tests don't currently fit into the
simple junit 3.8 style and it will be much harder to re-use these tests
from other frameworks in their current form. If we want to stick to the
simple junit 3.8 style then these tests will need some refactoring. 
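
A minimal sketch of what such a convention-following test could look like,
assuming JUnit 4.1's org.junit annotations (the class and method names below
are purely illustrative):

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Illustrative only: a test written with junit 4.1 annotations that also
// follows the junit 3.8 naming convention (public setUp(), methods prefixed
// with "test"), so a reflection-based harness can still discover and run it.
public class ConventionFollowingTest {

    private String greeting;

    @Before
    public void setUp() {
        // same public setUp() signature that the junit 3.8 style expects
        greeting = "hello";
    }

    @Test
    public void testGreetingIsSet() {
        assertEquals("hello", greeting);
    }
}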

Regards,

Andy.

-Original Message-
From: kelvin goodson [mailto:[EMAIL PROTECTED] 
Sent: 19 April 2007 11:03
To: tuscany-dev@ws.apache.org
Subject: Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when
classes don't inherit from TestCase

In fact I'd say for the purposes of introspection by some other harness
the old style is far preferable,  since it's easy to  examine the method
names/signatures to determine what is a test and what is a setup method.
I was about to start cleaning these up,  but I'd like to complete this
discussion and decide whether we should be making everything use the old
3.8 style or the new
4.1 annotations.  What I will do in the meantime is add setup methods to
all the files in their existing style in order to fix up the issues with
reusing type helpers between tests, and then revisit the style after the
discussion has completed.  For simplicity I will use the same method
signatures for setup methods as are used in 3.8 when using 4.1
annotations.

Regards, Kelvin.


On 18/04/07, Andy Grove [EMAIL PROTECTED] wrote:


 Frank,

 You're absolutely right. I guess I'd forgotten that you could override

 a protected method and make it public.

 In that case, it doesn't seem to matter if we use old-style junit or

 annotations - it should still be possible to call the tests without 
 using the junit test runners.

 Andy.

 -Original Message-
 From: Frank Budinsky [mailto:[EMAIL PROTECTED]
 Sent: 17 April 2007 18:01
 To: tuscany-dev@ws.apache.org
 Subject: RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when 
 classes don't inherit from TestCase

 Hi Andy,

 Java allows you to make something more visible in a derived class than in

 the base, so declaring setUp() public in MyTest wouldn't seem to be a 
 problem.
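
 For example (a minimal sketch; the fixture code is illustrative only):

 import junit.framework.TestCase;

 // Widening the visibility of setUp() in a subclass of the junit 3.8 TestCase,
 // so it can also be invoked reflectively from outside a junit runner.
 public class MyTest extends TestCase {

     public void setUp() throws Exception {  // protected in TestCase, public here
         super.setUp();
         // fixture initialisation would go here
     }

     public void testSomething() {
         assertEquals(1, 1);
     }
 }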


 Frank

 Andy Grove [EMAIL PROTECTED] wrote on 04/17/2007 12:19:37 PM:

  Hi Frank,
 
  The TestCase class defines setUp and tearDown as protected methods, 
  forcing the child class to also declare them as protected methods 
  and this means they can't be loaded using reflection.
 
  Using junit 4.1 means we can declare the methods as public.
 
  Thanks,
 
  Andy.
 
  -Original Message-
  From: Frank Budinsky [mailto:[EMAIL PROTECTED]
  Sent: 17 April 2007 17:03
  To: tuscany-dev@ws.apache.org
  Subject: RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when

  classes don't inherit from TestCase
 
  Hi Andy,
 
  Maybe this is a stupid question (my junit ignorance showing through
 :-),
  but couldn't you have run your simple test harness (main) even if
 MyTest
  extended from TestCase? Is there something about having the base 
  class that prevents you from simply invoking the test methods
directly?
 
  Frank.
 
  Andy Grove [EMAIL PROTECTED] wrote on 04/17/2007 11:21:49 AM:
 
  
   To better understand this myself, I just put a simple test case 
   together using junit 4.1 with annotations and made use of the 
   junit assertion calls e.g.
  
   public class MyTest {
       @Test
       public void testSomething() {
           // this test will fail
           assertEquals( "numbers are same", 1, 2 );
       }
   }
  
   I then wrote a simple test harness to load the test class using
   reflection and invoke any methods starting with "test".
  
   public static void main(String[] args) throws Exception {
       Class testClass = Class.forName( "test.MyTest" );
       Object testObject = testClass.newInstance();
       Method method[] = testClass.getMethods();
       for (int i = 0; i < method.length; i++) {
           if (method[i].getName().startsWith("test")) {
               System.out.println("Running " + method[i].getName());
               try {
                   method[i].invoke( testObject );
               } catch (Throwable th) {
                   th.printStackTrace();
               }
           }
       }
   }
  
   This ran the above test method and caught the following exception:
  
   java.lang.AssertionError: numbers are same expected:<1> but was:<2>
  
   For me, this seems to demonstrate that using junit 4.1 style tests

   will allow people to call 

DataFactory::addType problem

2007-04-20 Thread Adriano Crestani

I'm using the next SDO M3 RC4 and getting the SDOInvalidArgumentException
when trying to use DataFactory::addType or DataFactory::addPropertyToType
when passing a std::string argument


std::string tableName = item;
dataFactory->addType(dasnamespace, tableName); // doesn't work
dataFactory->addType(dasnamespace, tableName.c_str() ); // works

Does it have something to do with some character encoding option in my VC
project?

Adriano Crestani


Re: DataFactory::addType problem

2007-04-20 Thread Pete Robbins

Interesting!

There are 2 methods:

  1. addType(const string, const string, ...etc.)
  2. addType(const char*, const char*, ...etc.)

the first variation calls the second. Where you pass char* it works. I've
seen similar behaviour when the string is being passed across different MS
C++ runtime libraries, e.g. if your program is built Debug and SDO is
Release. The bin distro in M3 is Release. You could try rebuilding SDO
as Debug.

Cheers,


On 20/04/07, Adriano Crestani [EMAIL PROTECTED] wrote:


I'm using the next SDO M3 RC4 and getting the SDOInvalidArgumentException
when trying to use DataFactory::addType or DataFactory::addPropertyToType
when passing a std::string argument


std::string tableName = item;
dataFactory->addType(dasnamespace, tableName); // doesn't work
dataFactory->addType(dasnamespace, tableName.c_str() ); // works

Does it have something to do with some character encoding option in my VC
project?

Adriano Crestani





--
Pete


Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when classes don't inherit from TestCase

2007-04-20 Thread kelvin goodson

I'd agree in general that it's the naming convention that would be key to
readily being able to exercise the tests by another framework.
With regards to refactoring the parameterized tests, I like the concept of
being able to have a battery of data sets that can be used to exercise
tests.  Maybe we can put in place some simple bespoke function for this kind
of behaviour.  I've had this in the back of my mind while looking at the
code.

Another complication is that there's no precedent in junit 3.8 for the
@BeforeClass type of calls, which some of the new tests are using, so we'll
need to establish a convention for that.

A frustration that I find is that the current structure doesn't permit
running/debugging individual tests. If you want breakpoints deep in the
SDO/EMF code and then have to run 50 tests before getting to the one you are
interested in then that's a bit of a pain. Often in eclipse in the SDO
implementation tests,  I right click in the Junit panel on a failing test
and click run/debug, to exercise a single failing test. I think the
restriction is introduced into the CTS primarily because of the one time
initialization of the implementation specific test helper.   I would imagine
it could be very low cost to initialize this once per setUp() in a
superclass, the first initialization triggering some static code that
performed any real startup overhead and cached the helper.  This all leads
me to believe that to get true agnosticism wrt the test harness we should
perhaps introduce bespoke function, some of which replicates the junit
4.1 features, either by creating an abstract specialization of TestCase or
TestRunner or both.
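
A rough sketch of that superclass idea; "createTestHelper" and the plain
Object helper type below are hypothetical names, not the actual CTS API:

import junit.framework.TestCase;

// Illustrative sketch only: a CTS base class whose per-test setUp() triggers
// the expensive implementation-specific initialization exactly once and
// caches the resulting helper for all subsequent tests in the run.
public abstract class CTSBaseTestCase extends TestCase {

    private static Object testHelper;  // cached across the whole run

    protected void setUp() throws Exception {
        super.setUp();
        if (testHelper == null) {
            testHelper = createTestHelper();  // real startup overhead happens once
        }
    }

    // Each implementation under test supplies its own helper here.
    protected abstract Object createTestHelper() throws Exception;

    protected Object getTestHelper() {
        return testHelper;
    }
}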

--
Kelvin

On 20/04/07, Andy Grove [EMAIL PROTECTED] wrote:



Just for clarification, I think we're saying that the important thing
here is the method naming convention, rather than requiring the tests to
extend TestCase?

If we follow the junit 3.8 naming convention and always use the method
names setUp / tearDown (and make sure they are public methods) and have
all test methods start with "test" then it won't matter if the tests
extend TestCase or have junit 4.1 annotations.

However, just to complicate matters, the tests in the parameterizedTests
package are making use of new junit 4.1 features for providing
parameters to the tests and these tests don't currently fit into the
simple junit 3.8 style and it will be much harder to re-use these tests
from other frameworks in their current form. If we want to stick to the
simple junit 3.8 style then these tests will need some refactoring.

Regards,

Andy.

-Original Message-
From: kelvin goodson [mailto:[EMAIL PROTECTED]
Sent: 19 April 2007 11:03
To: tuscany-dev@ws.apache.org
Subject: Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when
classes don't inherit from TestCase

In fact I'd say for the purposes of introspection by some other harness
the old style is far preferable,  since it's easy to  examine the method
names/signatures to determine what is a test and what is a setup method.
I was about to start cleaning these up,  but I'd like to complete this
discussion and decide whether we should be making everything use the old
3.8 style or the new
4.1 annotations.  What I will do in the meantime is add setup methods to
all the files in their existing style in order to fix up the issues with
reusing type helpers between tests, and then revisit the style after the
discussion has completed.  For simplicity I will use the same method
signatures for setup methods as are used in 3.8 when using 4.1
annotations.

Regards, Kelvin.


On 18/04/07, Andy Grove [EMAIL PROTECTED] wrote:


 Frank,

 You're absolutely right. I guess I'd forgotten that you could override

 a protected method and make it public.

 In that case, it doesn't seem to matter if we use old-style junit or

 annotations - it should still be possible to call the tests without
 using the junit test runners.

 Andy.

 -Original Message-
 From: Frank Budinsky [mailto:[EMAIL PROTECTED]
 Sent: 17 April 2007 18:01
 To: tuscany-dev@ws.apache.org
 Subject: RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when
 classes don't inherit from TestCase

 Hi Andy,

 Java allows you to make something more visible in a derived class than in

 the base, so declaring setUp() public in MyTest wouldn't seem to be a
 problem.


 Frank

 Andy Grove [EMAIL PROTECTED] wrote on 04/17/2007 12:19:37 PM:

  Hi Frank,
 
  The TestCase class defines setUp and tearDown as protected methods,
  forcing the child class to also declare them as protected methods
  and this means they can't be loaded using reflection.
 
  Using junit 4.1 means we can declare the methods as public.
 
  Thanks,
 
  Andy.
 
  -Original Message-
  From: Frank Budinsky [mailto:[EMAIL PROTECTED]
  Sent: 17 April 2007 17:03
  To: tuscany-dev@ws.apache.org
  Subject: RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when

  classes don't inherit from TestCase
 
  Hi Andy,
 
  Maybe this is a 

Re: Windows Plataform SDK include

2007-04-20 Thread Andrew Borley

Adriano,

If you follow the Visual Studio 2005/Platform SDK for Windows Server
2003 R2 installation instructions at
http://msdn.microsoft.com/vstudio/express/visualc/usingpsdk/ it
details the settings you need to change in VS.

Hope this helps
Andy

On 4/20/07, Pete Robbins [EMAIL PROTECTED] wrote:

Adriano, I have these included on my include path automatically. You should
not need to add these to the studio projects. To run the build/vc express I
start a Visual Studio 2005 Command Prompt (start icon available in the start
menu somewhere) and this has INCLUDE set correctly.

Cheers,


On 20/04/07, Adriano Crestani [EMAIL PROTECTED] wrote:

 OK, I resolved the C:\Program Files\Microsoft Visual Studio 8\VC\include
 problem using $(VCInstallDir)\include, however I've found no environment
 variable for C:\Program
 Files\Microsoft Platform SDK for Windows Server 2003 R2\include : (

 Adriano Crestani

 On 4/19/07, Adriano Crestani [EMAIL PROTECTED] wrote:
 
  I have the same question for C:\Program Files\Microsoft Visual Studio
  8\VC\include. What would be the best way to define it in my include
  directory?
 
  Adriano Crestani
 
  On 4/19/07, Adriano Crestani [EMAIL PROTECTED] wrote:
  
   Hi,
  
    Guys, I need some suggestions about how I should define an include
    directory in the C++ DAS VC project. I need to define the Windows
    Platform SDK include directory; on my machine it's located at C:\Program
    Files\Microsoft Platform SDK for Windows Server 2003 R2\include, however
    I want to define it as something like %PLATAFORM_SDK_HOME%\include. Is
    there a default environment variable that defines it?
  
   Adriano Crestani
  
 
 




--
Pete



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Question on ModelObject for binding extension

2007-04-20 Thread Snehit Prabhu

Hi,
Is there an updated version of this document (Extending Tuscany) that
reflects the current state of the trunk? Most of the classes in the models
shown are nonexistent today. Is the whole programming model depicted here
irrelevant?
thanks
snehit

On 4/11/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Pamela Fong wrote:
 If I choose to use EMF to generate a model to represent my extended SCDL
 schema, I would also need to generate EMF model to represent
 sca-core.xsdsince the binding schema extends from the core schema. So
 I would end up
 packaging two generated packages within one binding extension. Someone
 else
 comes along adding extension to sca-core and using EMF to generate the
 model
 code, also needs to package the core and the extended packages. How do
 things co-exist in the long run? Or do we just assume all generated core
 packages should be identical and thus it's ok to have it multiple
 times in
 the classpath?

 On 4/10/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Pamela Fong wrote:
  Hi,
 
  I read the article "Extending Tuscany by contributing a new
  implementation /
  binding type" by Raymond and Jeremy. Got a question about the
  definition of
  ModelObject. The example in the article is a very simple java
 bean-like
  object. This is fine if all we have to deal with is some simple
  attributes
  in the extension. If the binding requires complex SCDL model
 extension,
  defining the ModelObject by hand may not be the best choice (let's
say
  one
  could have multiple layers of nested elements and arrays etc.). One
  obvious
  alternative would be to generate some model code based on the
 extension
  xsd using EMF. However, since the binding's xsd extends from
  sca-core.xsd,
  generating model code would require the core model, which doesn't
  exist in
  Tuscany. What would be the recommended mechanism to define the
  ModelObject
  in this case?
 
  -pam
 

 Hi,

 ModelObject does not exist anymore in the latest assembly model in
trunk
 (now under java/sca/modules/assembly). The assembly model is now
 represented by a set of interfaces, so you have the flexibility to
 implement your model classes however you want, without having to extend
 a ModelObject class.

 You can choose to use EMF or another suitable XML databinding
technology
 to implement your model classes or, probably simpler, just create plain
 Java classes that implement the model interfaces. The only requirement
 for a binding model class is to implement the o.a.t.assembly.Binding
 interface. Then, if you choose the plain java class option, to read
 the model from XML use StAX as we've done for the other bindings (see
 java/sca/modules/binding-ws-xml for an example), it is actually pretty
 easy thanks to StAX.
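
 A rough sketch of that "plain Java class plus StAX" style; the binding name,
 its "destination" attribute and the read() helper below are hypothetical, and
 the methods of the o.a.t.assembly.Binding interface itself are not shown:

 import javax.xml.stream.XMLStreamConstants;
 import javax.xml.stream.XMLStreamException;
 import javax.xml.stream.XMLStreamReader;

 // Hypothetical extension binding model written as a plain Java bean rather
 // than a generated (e.g. EMF) model. A real one would implement
 // org.apache.tuscany.assembly.Binding.
 public class MyBinding {

     private String destinationName;  // illustrative binding attribute

     public String getDestinationName() {
         return destinationName;
     }

     public void setDestinationName(String destinationName) {
         this.destinationName = destinationName;
     }

     // Populate the model from the <binding.xyz> element the reader is
     // currently positioned on, then skip to the end of that element.
     public static MyBinding read(XMLStreamReader reader) throws XMLStreamException {
         MyBinding binding = new MyBinding();
         binding.setDestinationName(reader.getAttributeValue(null, "destination"));
         while (reader.hasNext()
                 && !(reader.getEventType() == XMLStreamConstants.END_ELEMENT
                      && "binding.xyz".equals(reader.getLocalName()))) {
             reader.next();
         }
         return binding;
     }
 }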

 Hope this helps.

 --
 Jean-Sebastien


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Right, if multiple extensions choose to use a specific databinding or
modeling technology for their models they'll need to coordinate if they
want to share some code.

I would recommend trying to implement your model using plain Java
classes first; at least this way you can share more code with the other
model modules from Tuscany.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Question on ModelObject for binding extension

2007-04-20 Thread Simon Laws

On 4/20/07, Snehit Prabhu [EMAIL PROTECTED] wrote:


Hi,
Is there an updated version of this document (Extending Tuscany) that
reflects the current state of the trunk? Most of the classes in the models
shown are nonexistent today. Is the whole programming model depicted here
irrelevant?
thanks
snehit

On 4/11/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Pamela Fong wrote:
  If I choose to use EMF to generate a model to represent my extended
SCDL
  schema, I would also need to generate EMF model to represent
  sca-core.xsd since the binding schema extends from the core schema. So
  I would end up
  packaging two generated packages within one binding extension. Someone

  else
  comes along adding extension to sca-core and using EMF to generate the
  model
  code, also needs to package the core and the extended packages. How do
  things co-exist in the long run? Or do we just assume all generated
core
  packages should be identical and thus it's ok to have it multiple
  times in
  the classpath?
 
  On 4/10/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:
 
  Pamela Fong wrote:
   Hi,
  
   I read the article Extending Tuscany by contributing a new
   implementation /
   binding type by Raymond and Jeremy. Got a question about the
   definition of
   ModelObject. The example in the article is a very simple java
  bean-like
   object. This is fine if all we have to deal with is some simple
   attributes
   in the extension. If the binding requires complex SCDL model
  extension,
   defining the ModelObject by hand may not be the best choice (let's
 say
   one
   could have multiple layers of nested elements and arrays etc.). One

   obvious
   alternative would be to generate some model code based on the
  extension
   xsd using EMF. However, since the binding's xsd extends from
   sca-core.xsd,
   generating model code would require the core model, which doesn't
   exist in
   Tuscany. What would be the recommended mechanism to define the
   ModelObject
   in this case?
  
   -pam
  
 
  Hi,
 
  ModelObject does not exist anymore in the latest assembly model in
 trunk
  (now under java/sca/modules/assembly). The assembly model is now
  represented by a set of interfaces, so you have the flexibility to
  implement your model classes however you want, without having to
extend
  a ModelObject class.
 
  You can choose to use EMF or another suitable XML databinding
 technology
  to implement your model classes or, probably simpler, just create
plain
  Java classes that implement the model interfaces. The only
requirement
  for a binding model class is to implement the o.a.t.assembly.Binding
  interface. Then, if you choose the plain java class option, to read

  the model from XML use StAX as we've done for the other bindings (see
  java/sca/modules/binding-ws-xml for an example), it is actually
pretty
  easy thanks to StAX.
 
  Hope this helps.
 
  --
  Jean-Sebastien
 
 
  -

  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

 Right, if multiple extensions choose to use a specific databinding or
 modeling technology for their models they'll need to coordinate if they
 want to share some code.

 I would recommend to try implementing your model using plain Java
 classes first, at least this way you can share more code with the other
 model modules from Tuscany.

 --
 Jean-Sebastien


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Hi Snehit,

You are correct, the code base has moved on since the document you refer to
was written. Not only is the code being tidied up but we are trying to improve
the docs as well.  I just took a look and Raymond has started a page on how
to extend Tuscany [1] on the project website/wiki. It's just a start and I
don't think that the way extension points are written has settled down 100%
but you get the idea.

Most of the function of the Tuscany SCA runtime implementation is provided
using the extension point/module activator mechanism that Raymond is
starting to describe. If you look at the list of modules in the source code
[2] you see it is starting to grow in length. Not all of these are loaded as
extensions but all of the implementation, bindings and databindings are.

Picking one at random, say the runtime that supports components implemented
using Java, you can see a module called implementation-java-runtime. Look
inside there and you see a module activator file [3] which in turn refers to
a class that is run automatically by the runtime (using the JDK service
loading mechanism) when it loads all of the extension modules it finds on
the classpath. If you look inside the referenced class [4] you can see what
it has to do to register support for the SCA <implementation.java> element.
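
The general shape of that registration, as a sketch; the activator interface
name, its package and the start()/stop() signatures are assumptions here, so
check the Tuscany SPI and an existing extension module for the real ones:

// A file named after the activator interface would live under
// META-INF/services/ in the extension jar and contain one line: the fully
// qualified name of a class like this one.
public class MyExtensionModuleActivator /* implements ModuleActivator */ {

    public void start() {
        // register the extension's artifact processors, builders, etc.
        // (for implementation-java-runtime this is where support for the
        // SCA <implementation.java> element gets wired in)
    }

    public void stop() {
        // unregister and release anything acquired in start()
    }
}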
I have to admit that I'm not an expert on how 

Re: Website - Feedback please

2007-04-20 Thread ant elder

On 4/20/07, Simon Laws [EMAIL PROTECTED] wrote:

snip/

2/ Developer guide

This comment was just stating a preference. I just liked the idea of
having
a user guide and a developer guide. I felt that "getting involved" sounded
like it should be talking about mailing lists and IRC etc rather than the
details of the development process. If people are generally happy with
"getting involved" then it's far more important to get the content there than
debate what it's called ;-)



I like having all three -  user guides, developer guides and a getting
involved page. The getting involved page could be a general Tuscany project
level thing  about the project mailing lists, IRC, Apache conventions etc.
The user and developer guides could be sub-project specific so individual
ones for DAS, SCA, and SDO.

  ...ant


Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when classes don't inherit from TestCase

2007-04-20 Thread kelvin goodson

A quick correction on my previous note which reflects a bias towards the
junit 3.8 approach that I didn't really intend.


some static code that performed any real startup overhead and cached the

helper.  This all leads me to believing that to get true agnosticism wrt the
test harness we should perhaps introduce bespoke function, some of which
replicates the junit 4.1 features, either by creating an abstract
specialization of TestCase or TestRunner or both.



We could of course introduce the generic test case behaviour for individual
test method set-up independent of the junit TestCase class.

Kelvin.


Re: DAS M3 Release

2007-04-20 Thread Amita Vadhavkar

Hi,
I have introduced a new section in the wiki page for JIRAs which are under
review (patch
available) and are just a couple of days away from getting into trunk. I
will modify the wiki
page once these are committed.

The 2 JIRAs - TUSCANY-948(DAS Connection support for standalone J2SE) and
TUSCANY-841 (Compound key relationship tests) are already part of trunk.

In particular, if you take a look at JIRA-800 - it can be one way to
demonstrate newly
added features for each release, with a proper description of the feature and
a working
example.

Please give feedback.

Regards,
Amita


On 4/19/07, Luciano Resende [EMAIL PROTECTED] wrote:


Great...

  I'll take a quick look and try to help review the list of JIRAs based
on what I have worked on since M2. Amita, it would be great if you could
review this wiki page either [1] or [2] and add the features you mentioned
you worked on or are about to finish and would like to have on the next
release.

Also, I have moved the page on the wiki, to be a child of the RDB DAS and
below is the new link [2].

[1] http://cwiki.apache.org/confluence/display/TUSCANY/RDB+DAS+-+Releases
[2]

http://cwiki.apache.org/confluence/display/TUSCANY/RDB+DAS+-+Java+DAS+M3+Release

Thanks

On 4/17/07, Adriano Crestani [EMAIL PROTECTED] wrote:

 Hi,

 I've listed the open JIRAs for Java DAS on [1]. I need feedback on
 which
 JIRAs should be resolved before the Java DAS M3 release and which should be
 postponed
 to the next one. I will be helping Amita on this research too ; ).

 I also listed on [1] the JIRAs for Java DAS that were resolved after the
last
 release. This list will probably become the Java DAS M3 key features.
Let
 me know if any feature not listed should be included, or if any listed
 feature should be removed.

 [1]


http://cwiki.apache.org/confluence/display/TUSCANY/Features+for+Java+DAS+M3+Release

 Adriano Crestani

 On 4/17/07, Amita Vadhavkar [EMAIL PROTECTED] wrote:
 
  Hi All,
 
  I have worked on a couple of JIRAs, and am planning to analyze more JIRAs
  soon. So far the JIRAs I have worked on are as below:
 
  TUSCANY-800 - Ajax DAS - patch available, creating a final patch
  TUSCANY-841 - Compound Key Relationship Tests - resolved
  TUSCANY-863 - Auto canned DB creation - patch available
  TUSCANY-948 - DAS support for standalone/J2SE applications - resolved
  TUSCANY-952 - multiple schema support - patch available
  TUSCANY-864 - DAS SCA container - may need to be tied to next SCA
 release,
  work-in-progress
 
  Please give your feedback about how these JIRAs can be made part of
DAS
  Java
  M3.
 
  Also, it will be really helpful to get your perspective on what the key
  features could be, any pending JIRAs which can add value to this release,
  and any must-have JIRAs which need to be newly added. I am going to
  research this front too.
  Appreciate
  your
  comments.
 
  Regards,
  Amita
 
 
  On 4/16/07, Luciano Resende [EMAIL PROTECTED] wrote:
  
    Recently we had a couple of inquiries about a DAS release [1] and [2], so
  it's
    probably time to start discussing what should be in the next DAS
    release. We could start by reviewing the discussion we had right after
    we released DAS
    M2 [3], and also get a list of things we have already done since our
    previous release.
  
   Couple things I would like to help for our next release:
  
   - Review MySQL Support
  
   Sample
 - Automate creation of Canned databases for DAS Samples
 (TUSCANY-863)
  
   Documentation
 - Continue to work on DAS User's guide
- Migrate it to new wiki and investigate the possibility to add
 to
   the
   release package
  
   Infrastructure
 - Automate release distribution process
  
   Once we agree on a set of items for our next release, we then could
  start
   tracking the release progress on our wiki.
  
   Thoughts ?
  
   [1] -
  http://www.mail-archive.com/[EMAIL PROTECTED]/msg00798.html
   [2] -
  
http://www.mail-archive.com/tuscany-user%40ws.apache.org/msg00589.html
   [3] -
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg11017.html
   [4] -
  
http://cwiki.apache.org/confluence/display/TUSCANY/RDB+DAS+-+Releases
  
   --
   Luciano Resende
   http://people.apache.org/~lresende
  
 




--
Luciano Resende
http://people.apache.org/~lresende



Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Simon Nash

I agree with the comments from Ant and Simon on focusing on stability
and consumability at the moment rather than adding a large amount of
new function.

I've been working on a complex bug fix recently (watch this space
for a JIRA and patch) and I noticed that we lost some previously
working support for AbstractLifecycle in the recent round of
runtime changes to remove the use of SCA to assemble the Tuscany
runtime.  I have had to code around this to make progress and I
would like to get a good story in place for how we manage the
lifecycle of runtime objects, focusing particularly on destruction
and resource reclamation in long-running scenarios.  Failure to do
this can lead to a variety of problems that are hard to track down.
I would be pleased to work on this to put in place a consistent
approach and document how it works.

  Simon

Simon Laws wrote:

On 4/19/07, ant elder [EMAIL PROTECTED] wrote:



On 4/19/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Davanum Srinivas wrote:
  Folks,
 
  Let's keep the ball rolling...Can someone please come up with a 
master

  list of extensions, bindings, services, samples which can then help
  decide what's going to get into the next release. Please start a wiki
  page to document the master list. Once we are done documenting the
  list. We can figure out which ones are MUST, which ones are nice to
  have, which ones are out of scope. Then we can work backwards to
  figure out How tightly or loosely coupled each piece is/should be and
  how we could decouple them if necessary using
  interfaces/spi/whatever...
 
  Quote from Bert Lamb:
  I think there should be a voted upon core set of extensions,
  bindings, services, samples, whatever that should be part of a
  monolithic build.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html
 
  Quote from Ant Elder:
  The specifics of what extensions are included in this release is left
  out of
  this vote and can be decided in the release plan discussion. All this
  vote
  is saying is that all the modules that are to be included in this 
next

  release will have the same version and that a top level pom.xml will
  exist
  to enable building all those modules at once.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html
 
  Thanks,
  dims
 
 

 Hi all,

 I think we have made good progress since we initially started this
 discussion. We have a simpler structure in trunk with a working 
top-down

 build. Samples and integration tests from the integration branch have
 been integrated back in trunk and most are now working.

 We have a more modular runtime with a simpler extension mechanism. For
 example we have separate modules for the various models, the core
 runtime and the Java component support. SPIs between the models and the
 rest of the runtime have been refactored and should become more stable.
 We need to do more work to further simplify the core runtime SPIs and
 improve the core runtime but I think this is going in the right
direction.

 I'm also happy to see better support for the SCA 1.0 spec, with support
 for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It
 looks like extensions are starting to work again in the trunk, 
including

 Web Services, Java and scripting components. It shouldn't be too
 difficult to port some of the other extensions - Spring, JMS, JSON-RPC
 -  to the latest code base as well.

 So, the JavaOne conference is in three weeks, would it make sense to 
try

 to have a Tuscany release by then?

 We could integrate in that release what we already have working in
 trunk, mature and stabilize our SPIs and our extensibility story, and
 this would be a good foundation for people to use, embed or extend.

 On top of that, I think it would be really cool to do some work to:
 - Make it easier to assemble a distributed SCA domain with components
 running on different runtimes / machines.
 - Improve our scripting and JSON-RPC support a little and show how to
 build Web 2.0 applications with Tuscany.
 - Improve our integration story with Tomcat and also start looking 
at an

 integration with Geronimo.
 - Improve our Spring-based core variant implementation, as I think it's
 a good example to show how to integrate Tuscany with other IoC
containers.
 - Maybe start looking at the equivalent using Google Guice.
 - Start looking again at some of the extensions that we have in contrib
 or sandboxes (OSGI, ServiceMix, I think there's a Fractal extension in
 sandbox, more databindings etc).
 - ...

 I'm not sure we can do all of that in the next few weeks :) but I'd 
like

 to get your thoughts and see what people in the community would like to
 have in that next release...


I'm not sure we could do all that in three weeks either :)

+1 to a release soon, but to be honest, attempting all the above seems
rushed to me. I think it would be good to focus on a small core of things
and getting them working and tested and documented, and then use 

Re: svn commit: r530817 - in /incubator/tuscany/java/sca/modules/implementation-script: ./ src/main/java/org/apache/tuscany/implementation/script/ src/test/resources/org/apache/tuscany/implementation/

2007-04-20 Thread ant elder

This is great, the script container now supports properties! I took the
liberty of adding some property itests, see the
org.apache.tuscany.implementation.script.itests.properties folder. Probably
what we should do is extend those to also test other types like ints and
arrays etc. The JRuby tests don't work; it looks like a bug in the JRuby
script engine, but right now the way the script implementation does
references and properties is by setting them as global variables; changing
to use script instance variables may fix the problem and could be more
correct anyway.

  ...ant

On 4/20/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


Author: svkrish
Date: Fri Apr 20 07:06:38 2007
New Revision: 530817

URL: http://svn.apache.org/viewvc?view=rev&rev=530817
Log:
Extended a bit to support simple type properties

Added:


incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptPropertyValueObjectFactory.java
Modified:
incubator/tuscany/java/sca/modules/implementation-script/pom.xml


incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptArtifactProcessor.java


incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptComponent.java


incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptComponentBuilder.java


incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptModuleActivator.java


incubator/tuscany/java/sca/modules/implementation-script/src/test/resources/org/apache/tuscany/implementation/script/itests/helloworld/helloworld.componentType


incubator/tuscany/java/sca/modules/implementation-script/src/test/resources/org/apache/tuscany/implementation/script/itests/helloworld/helloworld.js

Modified: incubator/tuscany/java/sca/modules/implementation-script/pom.xml
URL:
http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/implementation-script/pom.xml?view=diff&rev=530817&r1=530816&r2=530817

==
--- incubator/tuscany/java/sca/modules/implementation-script/pom.xml
(original)
+++ incubator/tuscany/java/sca/modules/implementation-script/pom.xml Fri
Apr 20 07:06:38 2007
@@ -115,6 +115,11 @@
             <version>1.0-incubating-SNAPSHOT</version>
             <scope>test</scope>
         </dependency>
+        <dependency>
+            <groupId>org.apache.tuscany.sca</groupId>
+            <artifactId>tuscany-databinding</artifactId>
+            <version>1.0-incubating-SNAPSHOT</version>
+        </dependency>

<!-- TODO: big hack to add script engine dependencies till extension
dependencies fixed -->


Modified:
incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptArtifactProcessor.java
URL:
http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptArtifactProcessor.java?view=diff&rev=530817&r1=530816&r2=530817

==
---
incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptArtifactProcessor.java
(original)
+++
incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptArtifactProcessor.java
Fri Apr 20 07:06:38 2007
@@ -33,6 +33,9 @@
import javax.xml.stream.XMLStreamWriter;

import org.apache.tuscany.assembly.ComponentType;
+import org.apache.tuscany.assembly.Property;
+import org.apache.tuscany.assembly.Reference;
+import org.apache.tuscany.assembly.Service;
import org.apache.tuscany.assembly.impl.DefaultAssemblyFactory;
import org.apache.tuscany.assembly.xml.Constants;
import
org.apache.tuscany.contribution.processor.StAXArtifactProcessorExtension;
@@ -123,6 +126,15 @@
             ComponentType componentType = resolver.resolve(ComponentType.class, ct);
             if (componentType.isUnresolved()) {
                 throw new ContributionResolveException("missing .componentType side file");
+            }
+            for (Reference reference : componentType.getReferences()) {
+                scriptImplementation.getReferences().add(reference);
+            }
+            for (Service service : componentType.getServices()) {
+                scriptImplementation.getServices().add(service);
+            }
+            for (Property property : componentType.getProperties()) {
+                scriptImplementation.getProperties().add(property);
             }
             scriptImplementation.setComponentType(componentType);
         }

Modified:
incubator/tuscany/java/sca/modules/implementation-script/src/main/java/org/apache/tuscany/implementation/script/ScriptComponent.java
URL:

Lifecycle of runtime extensions, was: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Jean-Sebastien Delfino

[snip]
Simon Nash wrote:

I agree with the comments from Ant and Simon on focusing on stability
and consumability at the moment rather than adding a large amount of
new function.

I've been working on a complex bug fix recently (watch this space
for a JIRA and patch) and I noticed that we lost some previously
working support for AbstractLifecycle in the recent round of
runtime changes to remove the use of SCA to assemble the Tuscany
runtime.  I have had to code around this to make progress and I
would like to get a good story in place for how we manage the
lifecycle of runtime objects, focusing particularly on destruction
and resource reclamation in long-running scenarios.  Failure to do
this can lead to a variety of problems that are hard to track down.
I would be pleased to work on this to put in place a consistent
approach and document how it works.

  Simon



Simon,

There is support for managing the lifecycle of runtime extensions. 
ModuleActivators must implement the ModuleActivator.start() and stop() 
methods, and clean up their state in the stop() method.  Could you 
describe the specific problem you've run into? Is one of the 
ModuleActivators in particular not implementing stop() correctly?


Thanks

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when classes don't inherit from TestCase

2007-04-20 Thread Andy Grove

When we talk about being test-harness agnostic, I think we're really
saying not dependent on junit due to complexities that junit
introduces.

One option is to stop using junit completely and replicate the useful
features in a minimal test framework that supports parameterized tests
e.g. we could introduce a CTSTestCase interface:

interface CTSTestCase {

  void setup();

  void teardown();

  /** parameterized testing */
  void setData(Object data);

}

It would then be simple enough to have a TestRunner to invoke classes
that implement this interface. 
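
A minimal sketch of such a runner, assuming the CTSTestCase interface above
and the "test" method-name prefix discussed earlier (illustrative only, not
an agreed design):

import java.lang.reflect.Method;

// Invokes every public no-argument method whose name starts with "test" on a
// CTSTestCase instance, wrapping each call in setup()/teardown().
public class CTSTestRunner {

    public static void run(CTSTestCase testCase) throws Exception {
        for (Method method : testCase.getClass().getMethods()) {
            if (method.getName().startsWith("test")
                    && method.getParameterTypes().length == 0) {
                testCase.setup();
                try {
                    method.invoke(testCase);
                    System.out.println("PASS " + method.getName());
                } catch (Exception failure) {
                    System.out.println("FAIL " + method.getName() + ": " + failure.getCause());
                } finally {
                    testCase.teardown();
                }
            }
        }
    }
}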

The advantage of this approach is that it enforces some constraints on
the complexity of the tests. If we use junit directly then we might end
up introducing dependencies on more junit features that are hard to
replicate in other test harnesses.

We could replicate the junit Assert class and use a static import so the
test code itself still looks the same as a junit test case, e.g. calling
assertEquals() and fail().
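
For example, a minimal sketch of such a replacement (the method set and
message format are assumptions):

// With "import static CTSAssert.*;" test code keeps the familiar junit-style
// assertEquals()/fail() calls without depending on junit itself.
public class CTSAssert {

    public static void assertEquals(String message, Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            fail(message + " expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void fail(String message) {
        throw new AssertionError(message);
    }
}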

Andy.



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of kelvin goodson
Sent: 20 April 2007 10:15
To: tuscany-dev@ws.apache.org
Subject: Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when
classes don't inherit from TestCase

I'd agree in general that it's the naming convention that would be key
to readily being able to exercise the tests by another framework.
With regards to refactoring the parameterized tests, I like the concept
of being able to have a battery of data sets that can be used to
exercise tests.  Maybe we can put in place some simple bespoke function
for this kind of behaviour.  I've had this in the back of my mind while
looking at the code.

Another complication is that there's no precedent in junit 3.8 for the
@BeforeClass type of calls, which some of the new tests are using, so
we'll need to establish a convention for that.

A frustration that I find is that the current structure doesn't permit
running/debugging individual tests. If you want breakpoints deep in the
SDO /EMF code and then have to run 50 tests before getting to the one
you are interested in then that's a bit of a pain. Often in eclipse in
the SDO implementation tests,  I right click in the Junit panel on a
failing test and click run/debug, to exercise a single failing test. I
think the restriction is introduced into the CTS primarily because of
the one time
initialization of the implementation specific test helper.   I would
imagine
it could be very low cost to initialize this once per setUp() in a
superclass, the first initialization triggering some static code that
performed any real startup overhead and cached the helper.  This all
leads me to believe that to get true agnosticism wrt the test harness
we should perhaps introduce bespoke function, some of which replicates
the junit 4.1 features, either by creating an abstract specialization of
TestCase or TestRunner or both.

--
Kelvin

On 20/04/07, Andy Grove [EMAIL PROTECTED] wrote:


 Just for clarification, I think we're saying that the important thing 
 here is the method naming convention, rather than requiring the tests 
 to extend TestCase?

 If we follow the junit 3.8 naming convention and always use the method

 names setUp / tearDown (and make sure they are public methods) and 
 have all test methods start with "test" then it won't matter if the 
 tests extend TestCase or have junit 4.1 annotations.

 However, just to complicate matters, the tests in the 
 parameterizedTests package are making use of new junit 4.1 features 
 for providing parameters to the tests and these tests don't currently 
 fit into the simple junit 3.8 style and it will be much harder to 
 re-use these tests from other frameworks in their current form. If we 
 want to stick to the simple junit 3.8 style then these tests will need
some refactoring.

 Regards,

 Andy.

 -Original Message-
 From: kelvin goodson [mailto:[EMAIL PROTECTED]
 Sent: 19 April 2007 11:03
 To: tuscany-dev@ws.apache.org
 Subject: Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when 
 classes don't inherit from TestCase

 In fact I'd say for the purposes of introspection by some other 
 harness the old style is far preferable,  since it's easy to  examine 
 the method names/signatures to determine what is a test and what is a
setup method.
 I was about to start cleaning these up,  but I'd like to complete this

 discussion and decide whether we should be making everything use the 
 old 3.8 style or the new
 4.1 annotations.  What I will do in the meantime is add setup methods 
 to all the files in their existing style in order to fix up the issues

 with reusing type helpers between tests, and then revisit the style 
 after the discussion has completed.  For simplicity I will use the 
 same method signatures for setup methods as are used in 3.8 when using

 4.1 annotations.

 Regards, Kelvin.


 On 18/04/07, Andy Grove [EMAIL PROTECTED] wrote:
 
 
  Frank,
 
  You're absolutely right. I guess 

Re: Question on ModelObject for binding extension

2007-04-20 Thread Jean-Sebastien Delfino

Simon Laws wrote:

On 4/20/07, Snehit Prabhu [EMAIL PROTECTED] wrote:


Hi,
Is there an updated version of this document (Extending Tuscany) that
reflects the current state of the trunk? Most of the classes in the 
models
shown are nonexistent today. Is the whole programming model depicted 
here

irrelevant?
thanks
snehit

On 4/11/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Pamela Fong wrote:
  If I choose to use EMF to generate a model to represent my extended
SCDL
  schema, I would also need to generate EMF model to represent
  sca-core.xsd since the binding schema extends from the core 
schema. So

  I would end up
  packaging two generated packages within one binding extension. 
Someone


  else
  comes along adding extension to sca-core and using EMF to 
generate the

  model
  code, also needs to package the core and the extended packages. 
How do

  things co-exist in the long run? Or do we just assume all generated
core
  packages should be identical and thus it's ok to have it multiple
  times in
  the classpath?
 
  On 4/10/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:
 
  Pamela Fong wrote:
   Hi,
  
   I read the article Extending Tuscany by contributing a new
   implementation /
   binding type by Raymond and Jeremy. Got a question about the
   definition of
   ModelObject. The example in the article is a very simple java
  bean-like
   object. This is fine if all we have to deal with is some simple
   attributes
   in the extension. If the binding requires complex SCDL model
  extension,
   defining the ModelObject by hand may not be the best choice 
(let's

 say
   one
   could have multiple layers of nested elements and arrays 
etc.). One


   obvious
   alternative would be to generate some model code based on the
  extension
   xsd using EMF. However, since the binding's xsd extends from
   sca-core.xsd,
   generating model code would require the core model, which doesn't
   exist in
   Tuscany. What would be the recommended mechanism to define the
   ModelObject
   in this case?
  
   -pam
  
 
  Hi,
 
  ModelObject does not exist anymore in the latest assembly model in
 trunk
  (now under java/sca/modules/assembly). The assembly model is now
  represented by a set of interfaces, so you have the flexibility to
  implement your model classes however you want, without having to
extend
  a ModelObject class.
 
  You can choose to use EMF or another suitable XML databinding
 technology
  to implement your model classes or, probably simpler, just create
plain
  Java classes that implement the model interfaces. The only
requirement
  for a binding model class is to implement the 
o.a.t.assembly.Binding
  interface. Then, if you choose the plain java class option, to 
read


  the model from XML use StAX as we've done for the other bindings 
(see

  java/sca/modules/binding-ws-xml for an example), it is actually
pretty
  easy thanks to StAX.
 
  Hope this helps.
 
  --
  Jean-Sebastien
 
 
  
-


  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

 Right, if multiple extensions choose to use a specific databinding or
 modeling technology for their models they'll need to coordinate if 
they

 want to share some code.

 I would recommend to try implementing your model using plain Java
 classes first, at least this way you can share more code with the 
other

 model modules from Tuscany.

 --
 Jean-Sebastien


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Hi Snehit,

You are correct, the code base has moved on since the document you 
refer was
written. Not only is the code being tidied up but we are trying to 
improve
the docs as well.  I just took a look and Raymond has started a page 
on how
to extend Tuscany [1] on the project website/wiki. It's just a start 
and I
don't think that the way extension points are written has settled down 
100%

but you get the idea.

Most of the function of the Tuscany SCA runtime implementation is 
provided

using the extension point/module activator mechanism that Raymond is
starting to describe. If you look at the list of modules in the source 
code
[2] you see it is starting to grow in length. Not all of these are 
loaded as

extensions but all of the implementation, bindings and databindings are.

Picking one at random, say the runtime that supports components 
implemented

using Java, you can see a module called implementation-java-runtime. Look
inside there and you see a module activator file [3] which in turn 
refers to

a class that is run automatically by the runtime (using the JDK service
loading mechanism) when it loads all of the extension modules it finds on
the classpath. If you look inside the referenced class [4] you can see 
what
it has to do to register support for the SCA 

Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread ant elder

On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:

snip/

I'm not against adding new features for the release b.t.w


Good point, me either. I hope my previous post didn't sound like I was
against adding new features, all I was saying was that I think those other
things are important to do and they're what I plan to focus on. If others
want to get new things into the release and can get them done in time thats
great.

  ...ant


Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when classes don't inherit from TestCase

2007-04-20 Thread kelvin goodson

The Junit tooling is so useful I'd be loath to drop it as the harness that
the Tuscany implementation uses for exercising the tests. I'm going to do a
bit of playing to see what solutions are practical,  but I'm concerned that
we may be considering putting significant effort into a goal that's rather
too theoretical, as junit seems so ubiquitous.

Regards, Kelvin.

On 20/04/07, Andy Grove  [EMAIL PROTECTED] wrote:
snip/

One option is to stop using junit completely and replicate the useful

features in a minimal test framework that supports parameterized tests
e.g. we could introduce a CTSTestCase interface:


snip/
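
For context, the JUnit 4 pattern the subject line refers to - no TestCase inheritance, with set-up driven by annotations - looks roughly like this; the class and fixture names here are made up:

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class SampleCTSTest {   // note: no "extends TestCase"

    private Object fixture;

    @Before
    public void setUp() {
        // runs before each @Test method, replacing TestCase.setUp()
        fixture = new Object();
    }

    @Test
    public void fixtureIsInitialized() {
        assertNotNull(fixture);
    }
}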


Re: Scoping SDO metadata, was: How to access a composite's data model scope in an application?

2007-04-20 Thread Jean-Sebastien Delfino

Fuhwei Lwo wrote:

Hi Sebastien,

Here is my understanding of requirements about getting rid of import.sdo and 
switching to contribution -

1) A contribution will be created by contribution processor for each 
application. - Contribution processor has been done for Jar and file system.

  


Yes


2) The contribution processor will create a SDO scope (HelperContext instance) 
to associate with the contribution. Currently calling 
SDOUtil.createHelperContext() is enough.
  


That's what I was poking at in my previous email. Creating our own 
context, different from the default SDO context, forces SCA to introduce 
a new API to get to that context, and forces all SDO users to use that 
new API. So I'm wondering if it wouldn't be better to play more nicely 
with SDO, and have the SCA runtime just populate the default SDO context 
in use in a particular application in the server environment.



3) Tuscany SCA needs to provide a way for the application to get hold of the 
HelperContext in association with the contribution in step 2 above. Currently 
the  application is forced to use SDO API - HelperProvider.getDefaultContext() 
which is using TCCL.
  


I'm not getting this one :) Is it bad for an SDO user to be forced to 
use an SDO API to get an SDO context? It seems better to me than forcing 
an SDO user to use an SCA API, simply because his code may be used at 
some point in an SCA environment... and then his code wouldn't work in a 
JSP, a servlet, or any other non-SCA environment...


If the fact that HelperProvider.getDefaultContext() is using the TCCL to 
find the correct SDO context is a problem, then we just need to fix 
that. We went through the same discussion with SCA CompositeContext 
about a year ago. Associating context with the TCCL is not always 
convenient in a server environment, and it may be better to associate 
context with the current Thread (using a threadlocal or an inheritable 
thread local for example). This is what we did for SCA CompositeContext. 
Maybe SDO could provide a way to associate an SDO context with the 
current thread instead or in addition to associating the SDO context 
with the TCCL?
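
As a very small illustration of the thread-association idea, here is a hypothetical holder class; only HelperContext itself is an SDO type, everything else is made up for the sketch.

import commonj.sdo.helper.HelperContext;

// Hypothetical holder associating an SDO HelperContext with the current thread.
public final class ThreadHelperContext {

    // InheritableThreadLocal so threads spawned on behalf of the application
    // see the same scope as their parent.
    private static final ThreadLocal<HelperContext> CURRENT =
            new InheritableThreadLocal<HelperContext>();

    private ThreadHelperContext() {
    }

    public static void set(HelperContext context) {
        CURRENT.set(context);
    }

    public static HelperContext get() {
        return CURRENT.get();
    }

    public static void clear() {
        CURRENT.remove();
    }
}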


This would seem a good thing to have anyway since these contexts are not 
thread safe as far as I know :)


Thoughts?

I am not sure my understanding above is correct so please bear with me. Based 
on my understanding above, currently there is no additional requirement from 
SDO.


I wouldn't reach that conclusion so fast :) I think that there is a 
requirement to provide a way  to get to an SDO context independent of 
TCCL if people don't like that association with TCCL.



In the future, if we decided to support contribution import/export that may 
require SDO scoping hierarchy support. But I think we should start using 
contribution and getting rid of import.sdo as the first step.

  


Yes I'd like to get rid of import.sdo, as I indicated earlier in this 
discussion thread.


I would like to support contribution import/export at some point. I'm 
not sure that we'll be able to use SDO scope hierarchy support as an SCA 
contribution import does not necessarily import the whole scope of 
another SCA contribution, but I guess we'll know more when we start to 
look at the details.



What do you think?  Thanks for your reply.

Fuhwei Lwo

Jean-Sebastien Delfino [EMAIL PROTECTED] wrote: Fuhwei Lwo wrote:
  

Hi,

In my composite, I defined an <import.sdo> element in the default.scdl file that would prompt the SCA 
container to register my data types using SDO databinding. The question I have 
is what API I should use in my service implementation code to obtain the 
registered data types. If I have two composites that are using two different 
data type definitions but with the same namespace URI, I definitely don't want 
to obtain the wrong data type definition. Thanks for your help.

Below is the previous message from Raymond Feng about associating databinding 
type system context/scope with a composite. I think this is related to my 
question but from Tuscany SCA development perspective.

How to associate some context with a composite?
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200702.mbox/[EMAIL 
PROTECTED]
  



Hi,

The short (and not perfect) answer to your question is. With the current 
code in trunk, use:

commonj.sdo.impl.HelperProvider.getDefaultContext()

But I thought about this a bit and your question triggered some 
comments, and more questions :)


Import.sdo extension:
I think we should be able to remove that Tuscany extension to SCA 
assembly XML, now that we have the SCA contribution service in place. We 
know which WSDLs and XSDs are available in a given SCA contribution and, 
with sca-contribution.xml import elements, we also know which XML 
namespaces are imported from other SCA contributions or other locations 
outside of an SCA domain. So we probably don't need another <import.sdo> 
element duplicating part of this information in .composite files.


Scope of XML metadata:
My 

Re: SPI reorg (was: Re: [Discussion] Tuscany kernel modulization

2007-04-20 Thread ant elder

On 3/29/07, ant elder [EMAIL PROTECTED] wrote:



On 3/27/07, Jeremy Boynes [EMAIL PROTECTED] wrote:

snip/

One reason the SPI module is so large is that it does define many of the
 interfaces for the components in your diagram. I think there is room
 for a reorganization there to clarify the usage of those interfaces.
 I would propose we start with that ...


There have been several emails now which I think show some agreement that
we can do some SPI refactoring. Here's something that could be a start of
this work:

One area is the extension SPI, on our architecture diagram [1] thats the
Extensions box at the top right. In the past we've had a lot of problems
with extensions continually getting broken as the SPIs keep changing. This
has made maintaining extensions to be in a working state a big job and it
has led to some donated extension just being abandoned.

One of the reasons for this is that the SPIs more reflect the requirements
of the Tuscany runtime than the requirements of the extension being
contributed. For example, a container extension needs to enable creating,
initializing, invoking and destroying a component, along with exposing the
component's services, references and properties. Those things have remained
pretty constant even though the SCA specs and Tuscany runtime and SPI have
undergone significant changes.

I think we should be able to create SPIs for these types of functions which
clearly represent the requirements of a particular extension type, and that
doing this would go a long way to making things more stable. All this code is
there in the current SPI so it's mainly just a matter of refactoring parts
out into separate bits with clearly defined function and adding adapter code
so the runtime can use them.

You can even envisage that if this is successful it could define a runtime
extension SPI for all the extensible areas of SCA assembly which could
eventually be standardized to provide portable SCA extensions in a way
similar to JBI.

What do people think, is this worth looking at? If so I'd like to make an
attempt at doing it for bindings and components using the Axis2 binding and
script container as guinea pigs. This should be pretty transparent to the
rest of the kernel as other than the adapter code it could all be separate
modules.



I mentioned on the release discussion thread  that I'd bring this thread up
again.

The new trunk code has made things better in the SPI area but I think
there's still a lot that could be improved (IMHO). The sort of thing I was
thinking about was coming up with runtime support and an SPI package for
each extension type that makes it clear what methods need to be
implemented for at least minimum functionality.

For example, for an implementation extension, minimum functionality would
include supporting services, references and properties (at least simple-typed
properties anyway), correctly merging introspected and sidefile-defined
component type info, component instance lifecycle and scope, and the correct
invocation semantics for things like pass-by-value support. And it should do
all that in a way where the majority of code is done generically in the
runtime, instead of the extension either not supporting some of those things
or just copying chunks of code from other extensions to get the support.
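
To make the idea concrete, here is one purely hypothetical shape such an implementation-extension SPI could take; none of these type or method names exist in Tuscany, they only illustrate expressing the SPI in terms of what an extension must provide rather than what the runtime happens to need.

// Hypothetical sketch only - not the actual Tuscany SPI.
public interface ImplementationProvider<I> {

    // Placeholder for the introspected component type
    // (services, references, properties).
    interface ComponentTypeInfo { }

    // Introspect the implementation artifact and merge in any
    // sidefile-defined component type information.
    ComponentTypeInfo introspect(I implementation);

    // Create a component instance, honouring the declared scope.
    Object createInstance(I implementation) throws Exception;

    // Invoke an operation with the correct semantics (e.g. pass-by-value).
    Object invoke(Object instance, String operation, Object[] args) throws Exception;

    // Release an instance at the end of its lifecycle.
    void destroyInstance(Object instance) throws Exception;
}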

Do others agree this is something we should try to do for the next release?
If so I thought about starting with new modules for implementation-spi,
binding-spi etc to avoid changing the existing runtime code for now. And I'd
like to start on the implementation-spi one with the goal being to
eventually move all the implementation extensions to use it - so crud, java
and script.

WDYT?

  ...ant


Processing on Intents and PolicySets

2007-04-20 Thread Mark I. Dinges
I would like to start work on the ability to process Intents and 
PolicySets in interceptors. Currently there is no link from the 
core WireImpl object, or from any of the objects in the core WireImpl, 
back to the Assembly model that contains the Intents and PolicySets. 
First question: does the community feel that being able to work with and 
process Intents and PolicySets from interceptors is the right approach? If 
so, does it seem reasonable to put links back to the assembly model from 
various points in the runtime model?
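
To make the question concrete, here is a purely illustrative sketch of what a "link back to the assembly model" from an interceptor might look like; none of these types are the actual Tuscany SPI, and the intent names are made up.

import java.util.List;

// Illustrative only: an interceptor handed the intents and policy set names
// that the assembly model attached to the wire it sits on.
public class PolicyAwareInterceptor {

    private final List<String> intents;      // e.g. "confidentiality"
    private final List<String> policySets;   // names of applicable policy sets

    public PolicyAwareInterceptor(List<String> intents, List<String> policySets) {
        this.intents = intents;
        this.policySets = policySets;
    }

    public Object invoke(Object message) {
        if (intents.contains("confidentiality")) {
            // apply or verify the message protection the intent requires
        }
        for (String policySet : policySets) {
            // look up the concrete policy configuration by name and apply it
        }
        return message; // would be next.invoke(message) in a real chain
    }
}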


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



svn co message

2007-04-20 Thread Fuhwei Lwo
Did anyone get the following message asking to accept the server 
certificate when running "svn co 
http://svn.apache.org/repos/asf/incubator/tuscany/java"? Is it safe to accept 
it?

U    java\sca\modules\assembly-xml\src\main\java\org\apache\tuscany\assembly\xml\ComponentTypeProcessor.java

Fetching external item into 'java\distribution\sca\tsss-demo\kernel'
Error validating server certificate for 'https://svn.apache.org:443':
 - The certificate is not issued by a trusted authority. Use the
   fingerprint to validate the certificate manually!
Certificate information:
 - Hostname: svn.apache.org
 - Valid: from Jan 26 14:18:55 2007 GMT until Jan 26 14:18:55 2009 GMT
 - Issuer: http://www.starfieldtech.com/repository, Starfield Technologies, Inc.
, Scottsdale, Arizona, US
 - Fingerprint: a7:a5:3f:1a:ae:bb:98:b2:f3:ec:91:1b:63:29:2d:e8:58:b6:53:28
(R)eject, accept (t)emporarily or accept (p)ermanently?

Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Raymond Feng

Hi,

Considering that we want to achieve this in about 3 weeks, I agree that we 
focus on the stability and consumability for the core functions.


Other additional features are welcome. We can decide if they will be part of 
the release based on the readiness.


Are any of you going to volunteer to be the release manager? If not, I can 
give a try.


Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Wednesday, April 18, 2007 6:07 PM
Subject: Re: [DISCUSS] Next version - What should be in it



Davanum Srinivas wrote:

Folks,

Let's keep the ball rolling...Can someone please come up with a master
list of extensions, bindings, services, samples which can then help
decide what's going to get into the next release. Please start a wiki
page to document the master list. Once we are done documenting the
list. We can figure out which ones are MUST, which ones are nice to
have, which ones are out of scope. Then we can work backwards to
figure out How tightly or loosely coupled each piece is/should be and
how we could decouple them if necessary using
interfaces/spi/whatever...

Quote from Bert Lamb:
I think there should be a voted upon core set of extensions,
bindings, services, samples, whatever that should be part of a
monolithic build.
http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html

Quote from Ant Elder:
The specifics of what extensions are included in this release is left out 
of
this vote and can be decided in the release plan discussion. All this 
vote

is saying is that all the modules that are to be included in this next
release will have the same version and that a top level pom.xml will 
exist

to enable building all those modules at once.
http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html

Thanks,
dims




Hi all,

I think we have made good progress since we initially started this 
discussion. We have a simpler structure in trunk with a working top-down 
build. Samples and integration tests from the integration branch have been 
integrated back in trunk and most are now working.


We have a more modular runtime with a simpler extension mechanism. For 
example we have separate modules for the various models, the core runtime 
and the Java component support. SPIs between the models and the rest of 
the runtime have been refactored and should become more stable. We need to 
do more work to further simplify the core runtime SPIs and improve the 
core runtime but I think this is going in the right direction.


I'm also happy to see better support for the SCA 1.0 spec, with support 
for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It 
looks like extensions are starting to work again in the trunk, including 
Web Services, Java and scripting components. It shouldn't be too difficult 
to port some of the other extensions - Spring, JMS, JSON-RPC -  to the 
latest code base as well.


So, the JavaOne conference is in three weeks, would it make sense to try 
to have a Tuscany release by then?


We could integrate in that release what we already have working in trunk, 
mature and stabilize our SPIs and our extensibility story, and this would 
be a good foundation for people to use, embed or extend.


On top of that, I think it would be really cool to do some work to:
- Make it easier to assemble a distributed SCA domain with components 
running on different runtimes / machines.
- Improve our scripting and JSON-RPC support a little and show how to 
build Web 2.0 applications with Tuscany.
- Improve our integration story with Tomcat and also start looking at an 
integration with Geronimo.
- Improve our Spring-based core variant implementation, as I think it's a 
good example to show how to integrate Tuscany with other IoC containers.

- Maybe start looking at the equivalent using Google Guice.
- Start looking again at some of the extensions that we have in contrib or 
sandboxes (OSGI, ServiceMix, I think there's a Fractal extension in 
sandbox, more databindings etc).

- ...

I'm not sure we can do all of that in the next few weeks :) but I'd like 
to get your thoughts and see what people in the community would like to 
have in that next release...


--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when classes don't inherit from TestCase

2007-04-20 Thread Andy Grove

I would certainly prefer to continue with junit.

There are frameworks such as Cactus that allow JUnit tests to be run in
J2EE environments, and if vendors need the ability to run the tests in
some other environment that is not supported by JUnit or Cactus, then
they always have the option of developing their own test runners or
tweaking the JUnit code to fit their requirements. This does seem like
an edge case and it would seem appropriate for those users to invest the
effort to solve the problem rather than putting an extra burden on
developing the general purpose CTS.

Thanks,

Andy. 

-Original Message-
From: kelvin goodson [mailto:[EMAIL PROTECTED] 
Sent: 20 April 2007 17:19
To: tuscany-dev@ws.apache.org
Subject: Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when
classes don't inherit from TestCase

The Junit tooling is so useful I'd be loath to drop it as the harness
that the Tuscany implementation uses for exercising the tests. I'm going
to do a bit of playing to see what solutions are practical,  but I'm
concerned that we may be considering putting significant effort into a
goal that's rather too theoretical, as junit seems so ubiquitous.

Regards, Kelvin.

On 20/04/07, Andy Grove  [EMAIL PROTECTED] wrote:
snip/

One option is to stop using junit completely and replicate the useful
 features in a minimal test framework that supports parameterized tests

 e.g. we could introduce a CTSTestCase interface:


 snip/

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Scoping SDO metadata, was: How to access a composite's data model scope in an application?

2007-04-20 Thread Raymond Feng

Hi,

Please see my comments inline.

Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Friday, April 20, 2007 9:23 AM
Subject: Re: Scoping SDO metadata, was: How to access a composite's data 
model scope in an application?




Fuhwei Lwo wrote:

Hi Sebastien,

Here is my understanding of requirements about getting rid of import.sdo 
and switching to contribution -


1) A contribution will be created by contribution processor for each 
application. - Contribution processor has been done for Jar and file 
system.





Yes

2) The contribution processor will create a SDO scope (HelperContext 
instance) to associate with the contribution. Currently calling 
SDOUtil.createHelperContext() is enough.




That's what I was poking at in my previous email. Creating our own 
context, different from the default SDO context forces SCA to introduce a 
new API to get to that context, and forces all SDO users to use that new 
API. So I'm wondering if it wouldn't be better to play more nicely with 
SDO, and have the SCA runtime just populate the default SDO context in use 
in a particular application in the server environment.




I have a slightly different view here. IMHO, SDO should provide the 
scoping mechanism and the pluggability of scoping schemes. I assume the 
HelperContext is provided by SDO for scoping metadata. What's missing from 
SDO is the pluggability of the scoping schemes. Currently, the default 
HelperContext is based on the TCCL and it's not replaceable. I agree SDO cannot 
define scoping schemes for all environments, so pluggability is desirable.


3) Tuscany SCA needs to provide a way for the application to get hold of 
the HelperContext in association with the contribution in step 2 above. 
Currently the  application is forced to use SDO API - 
HelperProvider.getDefaultContext() which is using TCCL.




I'm not getting this one :) Is it bad for an SDO user to be forced to 
use an SDO API to get an SDO context? It seems better to me than forcing 
an SDO user to use an SCA API, simply because his code may be used at some 
point in an SCA environment... and then his code wouldn't work in a JSP, a 
servlet, or any other non-SCA environment...


If the fact that HelperProvider.getDefaultContext() is using the TCCL to 
find the correct SDO context is a problem, then we just need to fix that. 
We went through the same discussion with SCA CompositeContext about a year 
ago. Associating context with the TCCL is not always convenient in a 
server environment, and it may be better to associate context with the 
current Thread (using a threadlocal or an inheritable thread local for 
example). This is what we did for SCA CompositeContext. Maybe SDO could 
provide a way to associate an SDO context with the current thread instead 
or in addition to associating the SDO context with the TCCL?


I agree that we should try to use the SDO API to retrieve the current 
context. But I think in the SCA application, the default context should be 
associated with the Contribution. Then it would be a win-win situation if we 
can do the following:


1) SDO defines the pluggability to supply the default HelperContext.
2) SCA plugs its own scoping scheme to the SDO default HelperContext. The 
HelperContext will be populated based on the Contribution.
3) Application code will use HelperProvider.getDefaultContext() to retrieve 
the default HelperContext.
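
A minimal sketch of what the pluggability in steps 1-3 above could look like; the provider SPI and its name are hypothetical, not part of the current SDO API - only HelperContext and HelperProvider.getDefaultContext() are existing SDO types mentioned in this thread.

import commonj.sdo.helper.HelperContext;

// Hypothetical SPI that SDO could expose so a host runtime (such as SCA)
// supplies the "default" HelperContext; names are illustrative only.
public interface DefaultHelperContextProvider {

    // If a provider is registered, HelperProvider.getDefaultContext() would
    // delegate here instead of doing the built-in TCCL-based lookup. An SCA
    // runtime implementation would return the HelperContext it created for
    // the current Contribution.
    HelperContext getDefaultContext();
}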




This would seem a good thing to have anyway since these contexts are not 
thread safe as far as I know :)


Thoughts?
I am not sure my understanding above is correct so please bear with me. 
Based on my understanding above, currently there is no additional 
requirement from SDO.


I wouldn't reach that conclusion so fast :) I think that there is a 
requirement to provide a way  to get to an SDO context independent of TCCL 
if people don't like that association with TCCL.


In the future, if we decided to support contribution import/export that 
may require SDO scoping hierarchy support. But I think we should start 
using contribution and getting rid of import.sdo as the first step.





Yes I'd like to get rid of import.sdo, as I indicated earlier in this 
discussion thread.


I would like to support contribution import/export at some point. I'm not 
sure that we'll be able to use SDO scope hierarchy support as an SCA 
contribution import does not necessarily import the whole scope of another 
SCA contribution, but I guess we'll know more when we start to look at the 
details.


I'm thinking of the following approach to discover SDO metadata from an SCA 
contribution.

When the Contribution is processed, the generated SDO factories (the class 
name and the namespace) are recognized. Other models such as WSDL/XSD are 
handled as well. We don't have to convert all of them into the SDO model up 
front, as the conversion can be performed on demand when a particular 
namespace is queried.





What do 

[jira] Created: (TUSCANY-1218) java.net.ConnectException: Connection refused: connect when building binding-ws-axis2

2007-04-20 Thread Simon Nash (JIRA)
java.net.ConnectException: Connection refused: connect when building 
binding-ws-axis2
-

 Key: TUSCANY-1218
 URL: https://issues.apache.org/jira/browse/TUSCANY-1218
 Project: Tuscany
  Issue Type: Bug
  Components: Java SCA Axis Binding
Affects Versions: Java-SCA-Next
 Environment: Windows XP
Reporter: Simon Nash
 Assigned To: Simon Nash
 Fix For: Java-SCA-Next


When building java/sca from the trunk, the following error occurs:

---
 T E S T S
---
Running org.apache.tuscany.binding.axis2.itests.HelloWorldTestCase
log4j:WARN No appenders could be found for logger 
(org.apache.axiom.om.util.StAXUtils).
log4j:WARN Please initialize the log4j system properly.
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.52 sec
Running org.apache.tuscany.binding.axis2.Axis2ServiceTestCase
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 sec
Running 
org.apache.tuscany.binding.axis2.itests.endpoints.WSDLRelativeURITestCase
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.975 sec
Running org.apache.tuscany.binding.axis2.itests.HelloWorldOMTestCase
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.307 sec <<< FAILURE!
testCalculator(org.apache.tuscany.binding.axis2.itests.HelloWorldOMTestCase)  Time elapsed: 4.247 sec  <<< ERROR!
java.lang.reflect.UndeclaredThrowableException
at $Proxy8.getGreetings(Unknown Source)
at 
org.apache.tuscany.binding.axis2.itests.HelloWorldOMComponent.getGreetings(HelloWorldOMComponent.java:31)
at 
org.apache.tuscany.binding.axis2.itests.HelloWorldOMTestCase.testCalculator(HelloWorldOMTestCase.java:43)
Caused by: org.apache.axis2.AxisFault: Connection refused: connect; nested 
exception is: 
java.net.ConnectException: Connection refused: connect; nested 
exception is: 
org.apache.axis2.AxisFault: Connection refused: connect; nested 
exception is: 
java.net.ConnectException: Connection refused: connect
at 
org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:227)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:674)
at 
org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:237)
at 
org.apache.axis2.description.OutInAxisOperationClient.execute(OutInAxisOperation.java:202)
at 
org.apache.tuscany.binding.axis2.Axis2TargetInvoker.invokeTarget(Axis2TargetInvoker.java:77)
at 
org.apache.tuscany.spi.extension.TargetInvokerExtension.invoke(TargetInvokerExtension.java:52)
at 
org.apache.tuscany.core.wire.InvokerInterceptor.invoke(InvokerInterceptor.java:45)
at 
org.apache.tuscany.spi.wire.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:91)
at 
org.apache.tuscany.implementation.java.proxy.JDKInvocationHandler.invoke(JDKInvocationHandler.java:150)
... 29 more

On my machine, this occurs every time when building from the root.  It does not 
occur if I only build the binding-ws-axis2 module.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: svn co message

2007-04-20 Thread Raymond Feng

Hi,

It's safe to accept it.

Thanks,
Raymond

- Original Message - 
From: Fuhwei Lwo [EMAIL PROTECTED]

To: Tuscany-Dev tuscany-dev@ws.apache.org
Sent: Friday, April 20, 2007 9:58 AM
Subject: svn co message


Did anyone get the following message asking for accepting the server 
certificate when running svn co 
http://svn.apache.org/repos/asf/incubator/tuscany/java;? Is it safe to 
accept it?


U 
java\sca\modules\assembly-xml\src\main\java\org\apache\tuscany\assembly\xml

\ComponentTypeProcessor.java

Fetching external item into 'java\distribution\sca\tsss-demo\kernel'
Error validating server certificate for 'https://svn.apache.org:443':
- The certificate is not issued by a trusted authority. Use the
  fingerprint to validate the certificate manually!
Certificate information:
- Hostname: svn.apache.org
- Valid: from Jan 26 14:18:55 2007 GMT until Jan 26 14:18:55 2009 GMT
- Issuer: http://www.starfieldtech.com/repository, Starfield Technologies, 
Inc.

, Scottsdale, Arizona, US
- Fingerprint: a7:a5:3f:1a:ae:bb:98:b2:f3:ec:91:1b:63:29:2d:e8:58:b6:53:28
(R)eject, accept (t)emporarily or accept (p)ermanently? 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [Java SDO CTS] Junit 4.1 pattern for calling setUp when classes don't inherit from TestCase

2007-04-20 Thread Frank Budinsky
I agree.

Frank.

Andy Grove [EMAIL PROTECTED] wrote on 04/20/2007 01:14:51 PM:

 
 I would certainly prefer to continue with junit.
 
 There are frameworks such as cactus, that allow junit tests to be run in
 J2EE environments, and if vendors need the ability to run the tests in
 some other environment that is not supported by junit or cactus then
 they always have the option of developing their own test runners or
 tweaking the junit code to fit their requirements. This does seem like
 an edge case and it would seem appropriate for those users to invest the
 effort to solve the problem rather than putting an extra burden on
 developing the general purpose CTS.
 
 Thanks,
 
 Andy. 
 
 -Original Message-
 From: kelvin goodson [mailto:[EMAIL PROTECTED] 
 Sent: 20 April 2007 17:19
 To: tuscany-dev@ws.apache.org
 Subject: Re: [Java SDO CTS] Junit 4.1 pattern for calling setUp when
 classes don't inherit from TestCase
 
 The Junit tooling is so useful I'd be loath to drop it as the harness
 that the Tuscany implementation uses for exercising the tests. I'm going
 to do a bit of playing to see what solutions are practical,  but I'm
 concerned that we may be considering putting significant effort into a
 goal that's rather too theoretical, as junit seems so ubiquitous.
 
 Regards, Kelvin.
 
 On 20/04/07, Andy Grove  [EMAIL PROTECTED] wrote:
 snip/
 
 One option is to stop using junit completely and replicate the useful
  features in a minimal test framework that supports parameterized tests
 
  e.g. we could introduce a CTSTestCase interface:
 
 
  snip/
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Luciano Resende

+1 on focusing on the stability and consumability of the core functions.
Other than helping to simplify the runtime further and working on a Domain
concept, I also want to contribute to better integration with app servers,
basically starting by bringing back the WAR plugin and TC integration.

+1 on Raymond as Release Manager

On 4/20/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

Considering that we want to achieve this in about 3 weeks, I agree that we
focus on the stability and consumability for the core functions.

Other additional features are welcome. We can decide if they will be part
of
the release based on the readiness.

Are any of you going to volunteer to be the release manager? If not, I can
give a try.

Thanks,
Raymond

- Original Message -
From: Jean-Sebastien Delfino [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Wednesday, April 18, 2007 6:07 PM
Subject: Re: [DISCUSS] Next version - What should be in it


 Davanum Srinivas wrote:
 Folks,

 Let's keep the ball rolling...Can someone please come up with a master
 list of extensions, bindings, services, samples which can then help
 decide what's going to get into the next release. Please start a wiki
 page to document the master list. Once we are done documenting the
 list. We can figure out which ones are MUST, which ones are nice to
 have, which ones are out of scope. Then we can work backwards to
 figure out How tightly or loosely coupled each piece is/should be and
 how we could decouple them if necessary using
 interfaces/spi/whatever...

 Quote from Bert Lamb:
 I think there should be a voted upon core set of extensions,
 bindings, services, samples, whatever that should be part of a
 monolithic build.
 http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html

 Quote from Ant Elder:
 The specifics of what extensions are included in this release is left
out
 of
 this vote and can be decided in the release plan discussion. All this
 vote
 is saying is that all the modules that are to be included in this next
 release will have the same version and that a top level pom.xml will
 exist
 to enable building all those modules at once.
 http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html

 Thanks,
 dims



 Hi all,

 I think we have made good progress since we initially started this
 discussion. We have a simpler structure in trunk with a working top-down
 build. Samples and integration tests from the integration branch have
been
 integrated back in trunk and most are now working.

 We have a more modular runtime with a simpler extension mechanism. For
 example we have separate modules for the various models, the core
runtime
 and the Java component support. SPIs between the models and the rest of
 the runtime have been refactored and should become more stable. We need
to
 do more work to further simplify the core runtime SPIs and improve the
 core runtime but I think this is going in the right direction.

 I'm also happy to see better support for the SCA 1.0 spec, with support
 for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It
 looks like extensions are starting to work again in the trunk, including
 Web Services, Java and scripting components. It shouldn't be too
difficult
 to port some of the other extensions - Spring, JMS, JSON-RPC -  to the
 latest code base as well.

 So, the JavaOne conference is in three weeks, would it make sense to try
 to have a Tuscany release by then?

 We could integrate in that release what we already have working in
trunk,
 mature and stabilize our SPIs and our extensibility story, and this
would
 be a good foundation for people to use, embed or extend.

 On top of that, I think it would be really cool to do some work to:
 - Make it easier to assemble a distributed SCA domain with components
 running on different runtimes / machines.
 - Improve our scripting and JSON-RPC support a little and show how to
 build Web 2.0 applications with Tuscany.
 - Improve our integration story with Tomcat and also start looking at an
 integration with Geronimo.
 - Improve our Spring-based core variant implementation, as I think it's
a
 good example to show how to integrate Tuscany with other IoC containers.
 - Maybe start looking at the equivalent using Google Guice.
 - Start looking again at some of the extensions that we have in contrib
or
 sandboxes (OSGI, ServiceMix, I think there's a Fractal extension in
 sandbox, more databindings etc).
 - ...

 I'm not sure we can do all of that in the next few weeks :) but I'd like
 to get your thoughts and see what people in the community would like to
 have in that next release...

 --
 Jean-Sebastien


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: 

Re: Scoping SDO metadata, was: How to access a composite's data model scope in an application?

2007-04-20 Thread Fuhwei Lwo
Raymond,

I agree with your suggestion below. In addition, I think SCA still needs to 
provide an option (injection or API) for the applications to explicitly 
retrieve the data model scope from the Contribution. Other databinding 
technology APIs besides SDO may not have a default context helper concept.

1) SDO defines the pluggability to supply the default HelperContext.
2) SCA plugs its own scoping scheme to the SDO default HelperContext. 
The 
HelperContext will be populated based on the Contribution.
3) Application code will use HelperProvider.getDefaultContext() to 
retrieve 
the default HelperContext.


Raymond Feng [EMAIL PROTECTED] wrote: Hi,

Please see my comments inline.

Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino 
To: 
Sent: Friday, April 20, 2007 9:23 AM
Subject: Re: Scoping SDO metadata, was: How to access a composite's data 
model scope in an application?


 Fuhwei Lwo wrote:
 Hi Sebastien,

 Here is my understanding of requirements about getting rid of import.sdo 
 and switching to contribution -

 1) A contribution will be created by contribution processor for each 
 application. - Contribution processor has been done for Jar and file 
 system.



 Yes

 2) The contribution processor will create a SDO scope (HelperContext 
 instance) to associate with the contribution. Currently calling 
 SDOUtil.createHelperContext() is enough.


 That's what I was poking at in my previous email. Creating our own 
 context, different from the default SDO context forces SCA to introduce a 
 new API to get to that context, and forces all SDO users to use that new 
 API. So I'm wondering if it wouldn't be better to play more nicely with 
 SDO, and have the SCA runtime just populate the default SDO context in use 
 in a particular application in the server environment.


I have a slightly different view here. IMHO, the SDO should provide the 
scoping mechanism and the pluggability of scoping schemes. I assume the 
HelperContext is provided by SDO for scoping metadata. What's missing from 
SDO is the pluggability of the scoping schemes. Currently, the default 
HelperContext is based on TCCL and it's not replaceable. I agree SDO cannot 
define scoping schemes for all environment so the pluggability is desirable.

 3) Tuscany SCA needs to provide a way for the application to get hold of 
 the HelperContext in association with the contribution in step 2 above. 
 Currently the  application is forced to use SDO API - 
 HelperProvider.getDefaultContext() which is using TCCL.


 I'm not getting this one :) Is it bad for an SDO user to be forced to 
 use an SDO API to get an SDO context? It seems better to me than forcing 
 an SDO user to use an SCA API, simply because his code may be used at some 
 point in an SCA environment... and then his code wouldn't work in a JSP, a 
 servlet, or any other non-SCA environment...

 If the fact that HelperProvider.getDefaultContext() is using the TCCL to 
 find the correct SDO context is a problem, then we just need to fix that. 
 We went through the same discussion with SCA CompositeContext about a year 
 ago. Associating context with the TCCL is not always convenient in a 
 server environment, and it may be better to associate context with the 
 current Thread (using a threadlocal or an inheritable thread local for 
 example). This is what we did for SCA CompositeContext. Maybe SDO could 
 provide a way to associate an SDO context with the current thread instead 
 or in addition to associating the SDO context with the TCCL?

I agree that we should try to use the SDO API to retrieve the current 
context. But I think in the SCA application, the default context should be 
associated with the Contribution. Then it would be a win-win situation if we 
can do the following:

1) SDO defines the pluggability to supply the default HelperContext.
2) SCA plugs its own scoping scheme to the SDO default HelperContext. The 
HelperContext will be populated based on the Contribution.
3) Application code will use HelperProvider.getDefaultContext() to retrieve 
the default HelperContext.


 This would seem a good thing to have anyway since these contexts are not 
 thread safe as far as I know :)

 Thoughts?
 I am not sure my understanding above is correct so please bear with me. 
 Based on my understanding above, currently there is no additional 
 requirement from SDO.

 I wouldn't reach that conclusion so fast :) I think that there is a 
 requirement to provide a way  to get to an SDO context independent of TCCL 
 if people don't like that association with TCCL.

 In the future, if we decided to support contribution import/export that 
 may require SDO scoping hierarchy support. But I think we should start 
 using contribution and getting rid of import.sdo as the first step.



 Yes I'd like to get rid of import.sdo, as I indicated earlier in this 
 discussion thread.

 I would like to support contribution import/export at some point. I'm not 
 sure that we'll be 

Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Simon Laws

On 4/20/07, Luciano Resende [EMAIL PROTECTED] wrote:


+1 on focusing on the stability and consumability for the core functions,
other then helping on simplifying the runtime further and work on a Domain
concept, I also want to contribute around having a better integration with
App Servers, basically start by bringing back WAR plugin and TC
integration.

+1 on Raymond as Release Manager

On 4/20/07, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 Considering that we want to achieve this in about 3 weeks, I agree that
we
 focus on the stability and consumability for the core functions.

 Other additional features are welcome. We can decide if they will be
part
 of
 the release based on the readiness.

 Are any of you going to volunteer to be the release manager? If not, I
can
 give a try.

 Thanks,
 Raymond

 - Original Message -
 From: Jean-Sebastien Delfino [EMAIL PROTECTED]
 To: tuscany-dev@ws.apache.org
 Sent: Wednesday, April 18, 2007 6:07 PM
 Subject: Re: [DISCUSS] Next version - What should be in it


  Davanum Srinivas wrote:
  Folks,
 
  Let's keep the ball rolling...Can someone please come up with a
master
  list of extensions, bindings, services, samples which can then help
  decide what's going to get into the next release. Please start a wiki
  page to document the master list. Once we are done documenting the
  list. We can figure out which ones are MUST, which ones are nice to
  have, which ones are out of scope. Then we can work backwards to
  figure out How tightly or loosely coupled each piece is/should be and
  how we could decouple them if necessary using
  interfaces/spi/whatever...
 
  Quote from Bert Lamb:
  I think there should be a voted upon core set of extensions,
  bindings, services, samples, whatever that should be part of a
  monolithic build.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html
 
  Quote from Ant Elder:
  The specifics of what extensions are included in this release is left
 out
  of
  this vote and can be decided in the release plan discussion. All this
  vote
  is saying is that all the modules that are to be included in this
next
  release will have the same version and that a top level pom.xml will
  exist
  to enable building all those modules at once.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html
 
  Thanks,
  dims
 
 
 
  Hi all,
 
  I think we have made good progress since we initially started this
  discussion. We have a simpler structure in trunk with a working
top-down
  build. Samples and integration tests from the integration branch have
 been
  integrated back in trunk and most are now working.
 
  We have a more modular runtime with a simpler extension mechanism. For
  example we have separate modules for the various models, the core
 runtime
  and the Java component support. SPIs between the models and the rest
of
  the runtime have been refactored and should become more stable. We
need
 to
  do more work to further simplify the core runtime SPIs and improve the
  core runtime but I think this is going in the right direction.
 
  I'm also happy to see better support for the SCA 1.0 spec, with
support
  for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It
  looks like extensions are starting to work again in the trunk,
including
  Web Services, Java and scripting components. It shouldn't be too
 difficult
  to port some of the other extensions - Spring, JMS, JSON-RPC -  to the
  latest code base as well.
 
  So, the JavaOne conference is in three weeks, would it make sense to
try
  to have a Tuscany release by then?
 
  We could integrate in that release what we already have working in
 trunk,
  mature and stabilize our SPIs and our extensibility story, and this
 would
  be a good foundation for people to use, embed or extend.
 
  On top of that, I think it would be really cool to do some work to:
  - Make it easier to assemble a distributed SCA domain with components
  running on different runtimes / machines.
  - Improve our scripting and JSON-RPC support a little and show how to
  build Web 2.0 applications with Tuscany.
  - Improve our integration story with Tomcat and also start looking at
an
  integration with Geronimo.
  - Improve our Spring-based core variant implementation, as I think
it's
 a
  good example to show how to integrate Tuscany with other IoC
containers.
  - Maybe start looking at the equivalent using Google Guice.
  - Start looking again at some of the extensions that we have in
contrib
 or
  sandboxes (OSGI, ServiceMix, I think there's a Fractal extension in
  sandbox, more databindings etc).
  - ...
 
  I'm not sure we can do all of that in the next few weeks :) but I'd
like
  to get your thoughts and see what people in the community would like
to
  have in that next release...
 
  --
  Jean-Sebastien
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: 

Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Raymond Feng

Hi,

After evaluating the features I would like to contribute to this release in 
the short timeframe, I don't think I would have enough time to handle the 
release as I'm new to this process. I would appreciate it if somebody else with 
more experience volunteered to be the release manager. This way, I can learn 
more and get ready for the next time.


Thanks,
Raymond

- Original Message - 
From: Luciano Resende [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Friday, April 20, 2007 10:25 AM
Subject: Re: [DISCUSS] Next version - What should be in it



+1 on focusing on the stability and consumability for the core functions,
other then helping on simplifying the runtime further and work on a Domain
concept, I also want to contribute around having a better integration with
App Servers, basically start by bringing back WAR plugin and TC 
integration.


+1 on Raymond as Release Manager

On 4/20/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

Considering that we want to achieve this in about 3 weeks, I agree that 
we

focus on the stability and consumability for the core functions.

Other additional features are welcome. We can decide if they will be part
of
the release based on the readiness.

Are any of you going to volunteer to be the release manager? If not, I 
can

give a try.

Thanks,
Raymond

- Original Message -
From: Jean-Sebastien Delfino [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Wednesday, April 18, 2007 6:07 PM
Subject: Re: [DISCUSS] Next version - What should be in it


 Davanum Srinivas wrote:
 Folks,

 Let's keep the ball rolling...Can someone please come up with a master
 list of extensions, bindings, services, samples which can then help
 decide what's going to get into the next release. Please start a wiki
 page to document the master list. Once we are done documenting the
 list. We can figure out which ones are MUST, which ones are nice to
 have, which ones are out of scope. Then we can work backwards to
 figure out How tightly or loosely coupled each piece is/should be and
 how we could decouple them if necessary using
 interfaces/spi/whatever...

 Quote from Bert Lamb:
 I think there should be a voted upon core set of extensions,
 bindings, services, samples, whatever that should be part of a
 monolithic build.
 http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html

 Quote from Ant Elder:
 The specifics of what extensions are included in this release is left
out
 of
 this vote and can be decided in the release plan discussion. All this
 vote
 is saying is that all the modules that are to be included in this next
 release will have the same version and that a top level pom.xml will
 exist
 to enable building all those modules at once.
 http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html

 Thanks,
 dims



 Hi all,

 I think we have made good progress since we initially started this
 discussion. We have a simpler structure in trunk with a working 
 top-down

 build. Samples and integration tests from the integration branch have
been
 integrated back in trunk and most are now working.

 We have a more modular runtime with a simpler extension mechanism. For
 example we have separate modules for the various models, the core
runtime
 and the Java component support. SPIs between the models and the rest of
 the runtime have been refactored and should become more stable. We need
to
 do more work to further simplify the core runtime SPIs and improve the
 core runtime but I think this is going in the right direction.

 I'm also happy to see better support for the SCA 1.0 spec, with support
 for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It
 looks like extensions are starting to work again in the trunk, 
 including

 Web Services, Java and scripting components. It shouldn't be too
difficult
 to port some of the other extensions - Spring, JMS, JSON-RPC -  to the
 latest code base as well.

 So, the JavaOne conference is in three weeks, would it make sense to 
 try

 to have a Tuscany release by then?

 We could integrate in that release what we already have working in
trunk,
 mature and stabilize our SPIs and our extensibility story, and this
would
 be a good foundation for people to use, embed or extend.

 On top of that, I think it would be really cool to do some work to:
 - Make it easier to assemble a distributed SCA domain with components
 running on different runtimes / machines.
 - Improve our scripting and JSON-RPC support a little and show how to
 build Web 2.0 applications with Tuscany.
 - Improve our integration story with Tomcat and also start looking at 
 an

 integration with Geronimo.
 - Improve our Spring-based core variant implementation, as I think it's
a
 good example to show how to integrate Tuscany with other IoC 
 containers.

 - Maybe start looking at the equivalent using Google Guice.
 - Start looking again at some of the extensions that we have in contrib
or
 sandboxes (OSGI, ServiceMix, I 

Re: DataFactory::addType problem

2007-04-20 Thread Adriano Crestani

Thanks, I will try it out ; )

Adriano Crestani

On 4/20/07, Pete Robbins [EMAIL PROTECTED] wrote:


Interesting!

There are 2 methods:

   1. addType(const string, const string, ...etc.)
   2. addType(const char*, const char*, ...etc.)

the first variation calls the second. Where you pass char* it works. I've
seen similar behaviour when the string is being passed across different MS
C++ runtime libraries, e.g. if your program is built Debug and SDO is
Release. The bin distro in M3 is Release. You could try rebuilding SDO
as Debug.

Cheers,


On 20/04/07, Adriano Crestani [EMAIL PROTECTED] wrote:

 I'm using the next SDO M3 RC4 and getting the
SDOInvalidArgumentException
 when trying to use DataFactory::addType or
DataFactory::addPropertyToType
 when passing a std::string argument


 std::string tableName = item;
 dataFactory->addType(dasnamespace, tableName); // doesn't work
 dataFactory->addType(dasnamespace, tableName.c_str() ); // works

 Has it something to do with some character codification option on my VC
 project?

 Adriano Crestani




--
Pete



SCA 1.0 compliance

2007-04-20 Thread N Williams
Hi all. I've been tracking your work and playing around with Tuscany for some 
time now, mainly using the M2 release. I see that you've recently released an 
integration/Alpha release with hybrid SCA 0.96/1.0 support. When do you think a 
more SCA 1.0 compliant release will be available? Thank you.

   

cross-composite locate service

2007-04-20 Thread Kevin Williams

I am interested in a way to dynamically find and invoke a service within the
Domain without having access to a pre-defined reference.  This is not called
out in the 1.0 specification but it would be a very useful capability.  It
also seems that many of the pieces required to implement this may soon be in
place; especially with the work around runtime simplification and Domain
proposed by Raymond in this thread:

http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg16792.html

This also seems related to Scott's recent query regarding default bindings
across top level composites:

*http://tinyurl.com/2xslxp*

Any thoughts on this?  I would appreciate any pointers.

Thanks!

--Kevin


Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Jean-Sebastien Delfino

Simon Laws wrote:

On 4/19/07, ant elder [EMAIL PROTECTED] wrote:


On 4/19/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Davanum Srinivas wrote:
  Folks,
 
  Let's keep the ball rolling...Can someone please come up with a 
master
  list of extensions, bindings, services, samples which can then 
help
  decide what's going to get into the next release. Please start a 
wiki

  page to document the master list. Once we are done documenting the
  list. We can figure out which ones are MUST, which ones are nice to
  have, which ones are out of scope. Then we can work backwards to
  figure out How tightly or loosely coupled each piece is/should be 
and

  how we could decouple them if necessary using
  interfaces/spi/whatever...
 
  Quote from Bert Lamb:
  I think there should be a voted upon core set of extensions,
  bindings, services, samples, whatever that should be part of a
  monolithic build.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html
 
  Quote from Ant Elder:
  The specifics of what extensions are included in this release is 
left

  out of
  this vote and can be decided in the release plan discussion. All 
this

  vote
  is saying is that all the modules that are to be included in this 
next

  release will have the same version and that a top level pom.xml will
  exist
  to enable building all those modules at once.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html
 
  Thanks,
  dims
 
 

 Hi all,

 I think we have made good progress since we initially started this
 discussion. We have a simpler structure in trunk with a working 
top-down

 build. Samples and integration tests from the integration branch have
 been integrated back in trunk and most are now working.

 We have a more modular runtime with a simpler extension mechanism. For
 example we have separate modules for the various models, the core
 runtime and the Java component support. SPIs between the models and 
the
 rest of the runtime have been refactored and should become more 
stable.

 We need to do more work to further simplify the core runtime SPIs and
 improve the core runtime but I think this is going in the right
direction.

 I'm also happy to see better support for the SCA 1.0 spec, with 
support

 for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It
 looks like extensions are starting to work again in the trunk, 
including

 Web Services, Java and scripting components. It shouldn't be too
 difficult to port some of the other extensions - Spring, JMS, JSON-RPC
 -  to the latest code base as well.

 So, the JavaOne conference is in three weeks, would it make sense 
to try

 to have a Tuscany release by then?

 We could integrate in that release what we already have working in
 trunk, mature and stabilize our SPIs and our extensibility story, and
 this would be a good foundation for people to use, embed or extend.

 On top of that, I think it would be really cool to do some work to:
 - Make it easier to assemble a distributed SCA domain with components
 running on different runtimes / machines.
 - Improve our scripting and JSON-RPC support a little and show how to
 build Web 2.0 applications with Tuscany.
 - Improve our integration story with Tomcat and also start looking 
at an

 integration with Geronimo.
 - Improve our Spring-based core variant implementation, as I think 
it's

 a good example to show how to integrate Tuscany with other IoC
containers.
 - Maybe start looking at the equivalent using Google Guice.
 - Start looking again at some of the extensions that we have in 
contrib

 or sandboxes (OSGI, ServiceMix, I think there's a Fractal extension in
 sandbox, more databindings etc).
 - ...

 I'm not sure we can do all of that in the next few weeks :) but I'd 
like
 to get your thoughts and see what people in the community would 
like to

 have in that next release...


I'm not sure we could do all that in three weeks either :)

+1 to a release soon, but to be honest, attempting all the above seems
rushed to me, I think it would be good to focus on a small core of 
things
and getting them working and tested and documented, and then use that 
as a

stable base to build on and to attract others in the community to come
help
us with all the above work.

The website is starting to look much better these days, but there's still
a lot we can do to give each bit of supported function clear user
documentation. So as one example, for each feature we support - Tomcat,
Jetty, Java components, scripting, Axis2 etc - a page about what it does
and
how to use it. ServiceMix does this quite well I think, eg:
http://incubator.apache.org/servicemix/servicemix-http.html. Once we 
have

some good doc and an obvious website structure in place it will be much
easier for people adding new function  to Tuscany to also add doc to the
website instead of leaving things undocumented.

There's been a ton of work on the runtime code of the last few weeks and
some of it was done 

Re: cross-composite locate service

2007-04-20 Thread Raymond Feng

Hi, Kevin,

When one or more deployable composites from a contribution are added to the 
SCA domain, all the components in those composites become direct children 
of the SCA domain composite (the include semantics).

Then code similar to the following would fit your case. Am I right?

ComponentContext context = 
    SCARuntime.getComponentContext("CalculatorServiceComponent");
ServiceReference<CalculatorService> service = 
    context.createSelfReference(CalculatorService.class);

CalculatorService calculatorService = service.getService();

Thanks,
Raymond

- Original Message - 
From: Kevin Williams [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Friday, April 20, 2007 2:03 PM
Subject: cross-composite locate service


I am interested in a way to dynamically find and invoke a service within the
Domain without having access to a pre-defined reference.  This is not called
out in the 1.0 specification but it would be a very useful capability.  It
also seems that many of the pieces required to implement this may soon be in
place; especially with the work around runtime simplification and Domain
proposed by Raymond in this thread:

http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg16792.html

This also seems related to Scott's recent query regarding default bindings
across top level composites:

http://tinyurl.com/2xslxp

Any thoughts on this?  I would appreciate any pointers.

Thanks!

--Kevin




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Represent the recursive composition in runtime

2007-04-20 Thread Jean-Sebastien Delfino

Comments inline.

Raymond Feng wrote:

Hi,

In the current code base, we use the builder framework to create peer 
objects for the runtime corresponding to the model and use the runtime 
metadata to drive the component interactions. This approach adds 
complexity and redundancy. I think it should now be possible to take 
advantage of the fully-resolved/configured model directly.


To achieve this, we need to normalize the model to represent the 
recursive composition at runtime. There are some cases:


1) For include, we need to merge all 
components/services/references/properties from the included composite 
into the enclosing composite.


We already do that (in CompositeUtil) but I'll review this code again to 
make sure that we're not missing anything.




2) Two components use the same non-composite implementation, for 
example, two components are implemented by the same Java class. In 
this case, we should have two component model instances and one 
implementation model to avoid duplicate introspection.




Yes, isn't that how it already works? Or are we still introspecting the 
Java class twice? If it's not already working as you describe then 
we'll need to use the ArtifactResolver to resolve the implementation and 
make sure that the two components point to it.




3) Two components are implemented by the same composite

This is a more interesting case. Please see the diagram @ 
http://cwiki.apache.org/confluence/display/TUSCANY/Java+SCA+Runtime+Component+Hierarchy. 



Path a: Composite1.ComponentB is implemented by Composite3
Path b: Composite2.ComponentD is implemented by Composite3

The service/reference can be promoted to different things:

a: the final target for the ComponentF.Reference1 is 
Composite1.ComponentA
b: the final target for the ComponentF.Reference1 is 
Composite1.Reference1 (pointing to an external service)


The property can be set to different values following different 
composition paths:

a: Composite3.ComponentE.Property1 is overridden by 
Composite1.ComponentB.Property1 (say value=ABC)
b: Composite3.ComponentE.Property1 is overridden by 
Composite2.ComponentD.Property1 (say value=XYZ)


To represent the fully-configured components, we need to clone the 
model for Composite3 for Paths a and b so that it can be used to hold 
different resolved values. With the flattened structure, we should be 
able to fully configure the components at the model level.


That looks like the best approach to me. I had started to add code to 
the model to clone model instances, and this already works for includes. 
I can help go over the various model classes and make sure that we 
have the correct support for this kind of cloning. Then we'll need to 
implement the logic to propagate service/reference/property 
configuration from the top to the bottom of the composition hierarchy.
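
(To make the cloning idea concrete, here is a rough, hypothetical sketch; 
the ComponentModel/CompositeModel classes below are invented for 
illustration and are not the actual Tuscany model classes. The point is 
just that each composition path gets its own deep copy of the composite 
model, so the two paths can hold different resolved property values.)

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified model classes - for illustration only.
class ComponentModel implements Cloneable {
    String name;
    Map<String, String> propertyValues = new HashMap<String, String>();

    public ComponentModel clone() throws CloneNotSupportedException {
        ComponentModel copy = (ComponentModel) super.clone();
        // Deep-copy mutable state so an override on one path doesn't leak into the other
        copy.propertyValues = new HashMap<String, String>(propertyValues);
        return copy;
    }
}

class CompositeModel implements Cloneable {
    String name;
    List<ComponentModel> components = new ArrayList<ComponentModel>();

    public CompositeModel clone() throws CloneNotSupportedException {
        CompositeModel copy = (CompositeModel) super.clone();
        copy.components = new ArrayList<ComponentModel>();
        for (ComponentModel c : components) {
            copy.components.add(c.clone());
        }
        return copy;
    }
}

public class ClonePerPath {
    public static void main(String[] args) throws Exception {
        CompositeModel composite3 = new CompositeModel();
        composite3.name = "Composite3";
        ComponentModel componentE = new ComponentModel();
        componentE.name = "ComponentE";
        composite3.components.add(componentE);

        // Path a: Composite1.ComponentB is implemented by one clone of Composite3
        CompositeModel pathA = composite3.clone();
        pathA.components.get(0).propertyValues.put("Property1", "ABC");

        // Path b: Composite2.ComponentD is implemented by another clone
        CompositeModel pathB = composite3.clone();
        pathB.components.get(0).propertyValues.put("Property1", "XYZ");

        System.out.println(pathA.components.get(0).propertyValues); // {Property1=ABC}
        System.out.println(pathB.components.get(0).propertyValues); // {Property1=XYZ}
    }
}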




Am I on the right track? Feedback is welcome.

Once we agree on this, I'll continue to bring up discussions on 
additional runtime behaviors, beyond the model, that need to be 
provided by the component implementation or binding extensions to work 
with the core invocation framework.


Thanks,
Raymond

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Scope management for SCA components

2007-04-20 Thread Raymond Feng

Hi,

We have code in core-spi/core today to provide scope management for SCA 
components. At the spec level, it seems that only the Java spec defines the 
scope for Java implementations. Do we need to generalize the scope concept 
for all component implementation types, or should we refactor the code into 
implementation-java-runtime to support Java components only?
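
(For reference, a minimal sketch of what the Java-level scope concept looks 
like to an application developer, assuming the org.osoa.sca.annotations 
package from the SCA 1.0 Java spec; the CounterService example itself is 
made up.)

import org.osoa.sca.annotations.Scope;
import org.osoa.sca.annotations.Service;

interface CounterService {
    int increment();
}

// Made-up component implementation, for illustration only.
// COMPOSITE scope means a single shared instance for the lifetime of the
// composite; the default (STATELESS) would create a new instance per invocation.
@Service(CounterService.class)
@Scope("COMPOSITE")
public class CounterServiceImpl implements CounterService {
    private int count;

    public int increment() {
        return ++count;
    }
}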


Ant, I was under the impression that the scripting component types may 
require scope management. Can you clarify?


Thanks,
Raymond 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: SCA 1.0 compliance

2007-04-20 Thread Jean-Sebastien Delfino

N Williams wrote:

Hi all. I've been tracking your work and playing around with Tuscany for some 
time now, mainly using the M2 release. I see that you've recently released an 
integration/Alpha release with hybrid SCA 0.96/1.0 support. When do you think a 
more SCA 1.0 compliant release will be available? Thank you.

   


Hi,

We have made good progress towards support for the SCA 1.0 spec over the 
last few weeks. The latest code in trunk now supports almost all of the SCA 
1.0 assembly XML syntax. Most SCA Java annotations are supported as well. We 
have a partial implementation of the SCA 1.0 Java API. We have started 
to implement the new ComponentContext API. RequestContext, 
ServiceReference and the Conversational API and annotations are not 
there yet, but I would like to have at least RequestContext and 
ServiceReference very soon. The Conversational support will probably take a 
little longer, unless people in the Tuscany community are interested in 
helping to expedite its implementation.
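
(As a quick illustration of the kind of SCA 1.0 Java annotations already 
supported, here is a small made-up component; the GreetingService and 
NameService interfaces are invented for the example and the annotations are 
assumed to come from org.osoa.sca.annotations.)

import org.osoa.sca.annotations.Property;
import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

interface GreetingService {
    String greet(String id);
}

interface NameService {
    String lookupName(String id);
}

// Made-up example component wired with SCA 1.0 annotations.
@Service(GreetingService.class)
public class GreetingServiceImpl implements GreetingService {

    // Injected from a <reference> declared in the composite file
    @Reference
    protected NameService nameService;

    // Set from a <property> value declared in the composite file
    @Property
    protected String greeting = "Hello";

    public String greet(String id) {
        return greeting + ", " + nameService.lookupName(id);
    }
}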


We are planning to have a release around the JavaOne timeframe, in the 
next few weeks, with good support for the SCA 1.0 assembly XML and most 
of these SCA 1.0 Java annotations and APIs.


If you are interested in any specific features, feel free to ask and we 
can tell you if they are in the current code base, or try to have them 
in that release :)


--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Problems with deploying bigbank sample

2007-04-20 Thread Mahi
After installing tuscany-sca-1.0-incubator-M2-samples and running the mvn 
command, when I deploy sample-bigbank-account.war and 
sample-bigbank-webclient.war, I see the following problems while it is 
deploying. Any clue what I might be missing?

INFO: Deploying web application archive sample-bigbank-account.war
[WARNING] Unable to get resource from repository http___repo1.maven.org_maven2 
(http://repo1.maven.org/maven2)
[WARNING] Unable to get resource from repository 
http___people.apache.org_repo_m2-incubating-repository (http://people.a
pache.org/repo/m2-incubating-repository)
[WARNING] Unable to get resource from repository http___repo1.maven.org_maven 
(http://repo1.maven.org/maven)
[WARNING] Unable to get resource from repository 
http___people.apache.org_repo_m2-snapshot-repository (http://people.apa
che.org/repo/m2-snapshot-repository)
[WARNING] Unable to get resource from repository http___repo1.maven.org_maven2 
(http://repo1.maven.org/maven2)
[WARNING] Unable to get resource from repository 
http___people.apache.org_repo_m2-incubating-repository (http://people.a
pache.org/repo/m2-incubating-repository)
[WARNING] Unable to get resource from repository http___repo1.maven.org_maven 
(http://repo1.maven.org/maven)
[WARNING] Unable to get resource from repository 
http___people.apache.org_repo_m2-snapshot-repository (http://people.apa
che.org/repo/m2-snapshot-repository)
[INFO] snapshot org.apache.ws.commons.axiom:axiom-api:SNAPSHOT: checking for 
updates from http___repo1.maven.org_maven2
[INFO] snapshot org.apache.ws.commons.axiom:axiom-api:SNAPSHOT: checking for 
updates from http___people.apache.org_repo_
m2-incubating-repository
[INFO] snapshot org.apache.ws.commons.axiom:axiom-api:SNAPSHOT: checking for 
updates from http___repo1.maven.org_maven
[INFO] snapshot org.apache.ws.commons.axiom:axiom-api:SNAPSHOT: checking for 
updates from http___people.apache.org_repo_
m2-snapshot-repository
[INFO] snapshot org.apache.ws.commons.axiom:axiom-parent:SNAPSHOT: checking for 
updates from http___repo1.maven.org_mave
n2
[INFO] snapshot org.apache.ws.commons.axiom:axiom-parent:SNAPSHOT: checking for 
updates from http___people.apache.org_re
po_m2-incubating-repository
[INFO] snapshot org.apache.ws.commons.axiom:axiom-parent:SNAPSHOT: checking for 
updates from http___repo1.maven.org_mave
n
[INFO] snapshot org.apache.ws.commons.axiom:axiom-parent:SNAPSHOT: checking for 
updates from http___people.apache.org_re
po_m2-snapshot-repository
[WARNING] POM for 'org.apache.axis2:axis2-kernel:pom:1.1:runtime' is invalid. 
It will be ignored for artifact resolution
. Reason: Failed to validate POM
[WARNING] Unable to get resource from repository http___repo1.maven.org_maven2 
(http://repo1.maven.org/maven2)
[WARNING] Unable to get resource from repository 
http___people.apache.org_repo_m2-incubating-repository (http://people.a
pache.org/repo/m2-incubating-repository)
[WARNING] Unable to get resource from repository http___repo1.maven.org_maven 
(http://repo1.maven.org/maven)
[WARNING] Unable to get resource from repository 
http___people.apache.org_repo_m2-snapshot-repository (http://people.apa
che.org/repo/m2-snapshot-repository)
org.apache.tuscany.runtime.webapp.ServletLauncherInitException: 
org.apache.tuscany.spi.component.TargetException: Error
initializing component instance [extender]
at 
org.apache.tuscany.runtime.webapp.WebappRuntimeImpl.initialize(WebappRuntimeImpl.java:147)
at 
org.apache.tuscany.runtime.webapp.TuscanyContextListener.contextInitialized(TuscanyContextListener.java:74)
at 
org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3729)
at 
org.apache.catalina.core.StandardContext.start(StandardContext.java:4187)
at 
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:759)
at 
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:739)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:524)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:809)
at 
org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:698)
at 
org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:472)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1122)
at 
org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:310)
at 
org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1021)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:718)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1013)
at 
org.apache.catalina.core.StandardEngine.start(StandardEngine.java:442)
at 
org.apache.catalina.core.StandardService.start(StandardService.java:450)
at 

Re: Problems with deploying bigbank sample

2007-04-20 Thread Mahi
Is there a link where I can get self-contained 
sample-bigbank-webclient.war and sample-bigbank-account.war files that I can 
use?

Thanks

Mahi


Re: Databinding itests

2007-04-20 Thread Jean-Sebastien Delfino

Simon Laws wrote:
I believe the basic databinding itests work now against the code currently
in the trunk. I have a minimum of types plugged in at the moment and only
test SDO and JAXB, so I have a background task to enhance the number of
types tested and also extend the number of bindings tested. It would be
good if someone else could give it a spin and see if it works as it stands.

Regards

Simon


Simon,

The databinding tests work. I added them to the main build.
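
(For readers unfamiliar with what these itests exercise, here is a tiny, 
self-contained example of the kind of JAXB round-trip a databinding test 
performs; it is not taken from the itest code itself.)

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;

// Minimal JAXB round-trip: Java object -> XML -> Java object.
public class JaxbRoundTrip {

    @XmlRootElement
    public static class Greeting {
        public String message;
    }

    public static void main(String[] args) throws Exception {
        Greeting in = new Greeting();
        in.message = "hello";

        JAXBContext ctx = JAXBContext.newInstance(Greeting.class);

        // Marshal the object to XML
        StringWriter xml = new StringWriter();
        ctx.createMarshaller().marshal(in, xml);

        // Unmarshal it back and check the value survived the round trip
        Greeting out = (Greeting) ctx.createUnmarshaller()
                .unmarshal(new StringReader(xml.toString()));

        System.out.println(out.message); // prints "hello"
    }
}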

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Created: (TUSCANY-1219) Modification in Connection and Statement classes. Creation of new classes : SqlException, PreparedStatement

2007-04-20 Thread Douglas Siqueira Leite (JIRA)
Modification in Connection and Statement classes. Creation of new classes : 
SqlException, PreparedStatement
---

 Key: TUSCANY-1219
 URL: https://issues.apache.org/jira/browse/TUSCANY-1219
 Project: Tuscany
  Issue Type: Improvement
  Components: C++ DAS
Affects Versions: Wish list
 Environment: Platform Windows with Microsoft Visual C++ 2005 Express 
Edition.
Reporter: Douglas Siqueira Leite
 Fix For: Wish list


There are some modifications in the Connection class, mainly in the 
constructor. Now the constructor's arguments are the DSN, user, and password, 
instead of the SQLHENV and SQLHDBC.
The SqlException class was created to indicate the occurrence of an exception 
related to a SQL error.
The main modifications in the Statement class are the calls to some ODBC 
functions, and the change of some variable SQL types that were deprecated, 
like HSTMT to SQLHSTMT.
The PreparedStatement class definition was created.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Updated: (TUSCANY-1219) Modification in Connection and Statement classes. Creation of new classes : SqlException, PreparedStatement

2007-04-20 Thread Adriano Crestani (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adriano Crestani updated TUSCANY-1219:
--

Environment: WIN32  (was: Platform Windows with Microsoft Visual C++ 2005 
Express Edition.)

 Modification in Connection and Statement classes. Creation of new classes : 
 SqlException, PreparedStatement
 ---

 Key: TUSCANY-1219
 URL: https://issues.apache.org/jira/browse/TUSCANY-1219
 Project: Tuscany
  Issue Type: Improvement
  Components: C++ DAS
Affects Versions: Wish list
 Environment: WIN32
Reporter: Douglas Siqueira Leite
 Fix For: Wish list


 There are some modifications in the Connection class, mainly in the 
 constructor. Now the constructor's arguments are the DSN, user, and password, 
 instead of the SQLHENV and SQLHDBC.
 The SqlException class was created to indicate the occurrence of an exception 
 related to a SQL error.
 The main modifications in the Statement class are the calls to some ODBC 
 functions, and the change of some variable SQL types that were deprecated, 
 like HSTMT to SQLHSTMT.
 The PreparedStatement class definition was created.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Updated: (TUSCANY-1219) Modification in Connection and Statement classes. Creation of new classes : SqlException, PreparedStatement

2007-04-20 Thread Douglas Siqueira Leite (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Douglas Siqueira Leite updated TUSCANY-1219:


Attachment: tuscany-1219-douglassleite.04212007

 Modification in Connection and Statement classes. Creation of new classes : 
 SqlException, PreparedStatement
 ---

 Key: TUSCANY-1219
 URL: https://issues.apache.org/jira/browse/TUSCANY-1219
 Project: Tuscany
  Issue Type: Improvement
  Components: C++ DAS
Affects Versions: Wish list
 Environment: WIN32
Reporter: Douglas Siqueira Leite
 Fix For: Wish list

 Attachments: tuscany-1219-douglassleite.04212007


 There are some modifications in the Connection class, mainly in the 
 constructor. Now the constructor's arguments are the DSN, user, and password, 
 instead of the SQLHENV and SQLHDBC.
 The SqlException class was created to indicate the occurrence of an exception 
 related to a SQL error.
 The main modifications in the Statement class are the calls to some ODBC 
 functions, and the change of some variable SQL types that were deprecated, 
 like HSTMT to SQLHSTMT.
 The PreparedStatement class definition was created.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Processing on Intents and PolicySets

2007-04-20 Thread Raymond Feng

Hi, Mark.

Thank you for looking into this area.

The current WireImpl doesn't provide the link to the model. But we're 
starting to change the runtime to be driven by the model and hopefully we 
can achieve that next week. By then, the interceptors (at least the code 
that creates/inserts the interceptors) will have access to the Intents and 
PolicySets.


I'll keep you updated on the ML.

Thanks,
Raymond

- Original Message - 
From: Mark I. Dinges [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Friday, April 20, 2007 9:16 AM
Subject: Processing on Intents and PolicySets


I would like to start work on the ability to process Intents and 
PolicySets in interceptors. Currently there is no link from the core 
WireImpl object, or from any of the objects in the core WireImpl, back to the 
assembly model that contains the Intents and PolicySets. First question: 
does the community feel that being able to work with and process Intents 
and PolicySets from interceptors is the right approach? If so, does it seem 
reasonable to put links back to the assembly model from various points in 
the runtime model?
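
(To make the idea concrete, here is a rough sketch of the kind of 
intent-aware interceptor being described. The Interceptor, Message and 
PolicyContext types below are simplified stand-ins invented for the 
example, not the actual Tuscany SPI interfaces, and getRequiredIntents() 
is an assumed hook back into the assembly model.)

import java.util.List;

// Simplified stand-ins for the wire SPI - for illustration only.
interface Message {
    Object getBody();
}

interface Interceptor {
    Message invoke(Message msg);
    void setNext(Interceptor next);
}

// Hypothetical view onto the assembly model attached to a wire.
interface PolicyContext {
    List<String> getRequiredIntents();   // e.g. "confidentiality", "authentication"
}

// An interceptor that consults the intents declared in the assembly model
// before letting the invocation proceed down the chain.
class IntentAwareInterceptor implements Interceptor {
    private final PolicyContext policyContext;
    private Interceptor next;

    IntentAwareInterceptor(PolicyContext policyContext) {
        this.policyContext = policyContext;
    }

    public Message invoke(Message msg) {
        for (String intent : policyContext.getRequiredIntents()) {
            if ("confidentiality".equals(intent)) {
                // Placeholder: apply whatever enforcement the policy set
                // requires, e.g. verify that the transport is secured.
            }
        }
        return next.invoke(msg);
    }

    public void setNext(Interceptor next) {
        this.next = next;
    }
}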


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]