Re: Implementation.script module is broken due to a change in groovy-all-1.0 POM

2007-10-25 Thread Raymond Feng

Hi,

This looks like a disaster in the open source world.
http://www.nabble.com/-jira--Created:-(MEV-550)-Missing-castor-version-or-incorrect-groovy-openejb-dependencies-t4683384s177.html.


It seems that the Geronimo build is broken too. I have to delay the SCA Java 
1.0.1 RC as I cannot build it any more  :-(.


Thanks,
Raymond

- Original Message - 
From: Raymond Feng [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Wednesday, October 24, 2007 8:11 PM
Subject: Implementation.script module is broken due to a change in 
groovy-all-1.0 POM




Hi,

Starting from today, the implementation.script module is broken due to a 
change of the groovy-all-1.0 POM in the maven repo. Please see the similar 
report at:


http://permalink.gmane.org/gmane.comp.lang.groovy.user/25859

It's unfortunate that they could change the POM so long after the 1.0 release 
was published. I tried to use a newer version of Groovy, but bsf 
is not happy with it.


Thanks,
Raymond 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Resolved: (TUSCANY-1815) das.applyChanges will always do insert instead of update if createDataObject was used

2007-10-25 Thread Amita Vadhavkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amita Vadhavkar resolved TUSCANY-1815.
--

Resolution: Fixed

Changes applied under revision 588159

 das.applyChanges will always do insert instead of update if createDataObject 
 was used
 -

 Key: TUSCANY-1815
 URL: https://issues.apache.org/jira/browse/TUSCANY-1815
 Project: Tuscany
  Issue Type: Bug
  Components: Java DAS RDB
Affects Versions: Java-DAS-beta1, Java-DAS-Next
 Environment: DB2 Iseries
Reporter: Nick Duncan
Assignee: Amita Vadhavkar
 Fix For: Java-DAS-Next

 Attachments: 1815.patch


 If I do something like: 
 ---
   DataObject root = das.getCommand("AllAutos").executeQuery();

   DataObject dao = root.createDataObject("t_test");
   dao.set("NAME", "NICK");
   dao.set("ID", 100);

   das.applyChanges(root);
 ---
 There is already a row in the table with primary key 100.  ID is defined in 
 the config XML as being the primary key, but DAS ignores that ID was set and does 
 an insert statement.  ID is also defined as an auto-generated column.
 Basically I was expecting something like Hibernate's saveOrUpdate...   Maybe 
 if the field that represents the primary key is shown to have been changed in 
 the changeSummary, then DAS could figure out which statement to generate.  
 Where I'm seeing this being a problem is if we get a DataObject that 
 represents a row in the database and then send it off to a service: we get 
 back another object that has been updated, but since it is a different object, 
 will the DAS think it should do an update?  Merge doesn't seem to 
 alleviate this problem either.  Please let me know if I am way off base here. 
  Thanks

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.





Re: [NOTICE] Michael Yoder voted as Tuscany committer

2007-10-25 Thread kelvin goodson
Michael,

   good to have you on board,  Welcome!

Kelvin.

On 24/10/2007, Pete Robbins [EMAIL PROTECTED] wrote:
 The Tuscany PPMC and Incubator PMC have voted for Michael to become a
 Tuscany committer.

 Congratulations and welcome!

 I look forward to your continued excellent contributions to Tuscany.

 Cheers,

 --
 Pete







Re: Implementation.script module is broken due to a change in groovy-all-1.0 POM

2007-10-25 Thread ant elder
It simply needs an exclusion of the openejb dependency, which will prevent
castor from being dragged in:

<exclusions>
    <exclusion>
        <groupId>openejb</groupId>
        <artifactId>openejb-loader</artifactId>
    </exclusion>
</exclusions>
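For context, a sketch of where that exclusion sits in a POM. The Groovy dependency coordinates below are assumptions for illustration; check the actual implementation.script POM for the real groupId/artifactId/version:

```xml
<!-- Illustrative only: the Groovy coordinates are assumed, not taken
     from the actual implementation.script POM. -->
<dependency>
    <groupId>groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>1.0</version>
    <exclusions>
        <!-- keeps openejb-loader (and hence castor) off the classpath -->
        <exclusion>
            <groupId>openejb</groupId>
            <artifactId>openejb-loader</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```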

We could still make an October release if we can get an RC out for review
ASAP.

   ...ant

On 10/25/07, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 This looks like a disaster in the open source world.

 http://www.nabble.com/-jira--Created:-(MEV-550)-Missing-castor-version-or-incorrect-groovy-openejb-dependencies-t4683384s177.html
 .

 It seems that Geronimo build is broken too. I have to delay the SCA Java
 1.0.1 RC as I cannot build it any more  :-(.

 Thanks,
 Raymond

 - Original Message -
 From: Raymond Feng [EMAIL PROTECTED]
 To: tuscany-dev@ws.apache.org
 Sent: Wednesday, October 24, 2007 8:11 PM
 Subject: Implementation.script module is broken due to a change in
 groovy-all-1.0 POM


  Hi,
 
  Starting from today, the implementation.script module is broken due to a
  change of groovy-all-1.0 POM in the maven repo. Please see the similar
  report at:
 
  http://permalink.gmane.org/gmane.comp.lang.groovy.user/25859
 
  It's unfortunate that they could change the POM after the 1.0 release
 was
  published for a long time. I tried to use newer version of groovy but
 bsf
  is not happy.
 
  Thanks,
  Raymond






Re: Classloading in Tuscany

2007-10-25 Thread Rajini Sivaram
Sebastien,

Comments inline.


On 10/25/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Some comments and use cases.

 Rajini Sivaram wrote:
  The bundles that I have at the moment are:
 
 1. org.apache.tuscany.sca.api.jar
 14,942 bytes
 2. org.apache.tuscany.sca.tuscany.corespi.jar
  370,228 bytes
 3. org.apache.tuscany.sca.tuscany.runtime.jar
   571,159 bytes
 4. org.apache.tuscany.sca.tuscany.extensions.jar
 996,238
 bytes
 5. org.apache.tuscany.sca.tuscany.dependencies.jar
  43,928,926 bytes
 

 The dependencies bundle seems pretty big :)

  From a packaging point of view, it doesn't make sense to split Tuscany,

 Why doesn't it make sense? (more below about scenarios and requirements)


I was talking in relative terms, and purely in terms of the size of the
bundles. The whole of Tuscany is only about 2MB, while the 3rd party code is
nearly 44MB, so it kind of makes sense to concentrate on splitting the 3rd
party code rather than Tuscany. I think the value in splitting Tuscany is
more in terms of running multiple versions of Tuscany bundles (under OSGi)
rather than reducing the total size of the bundles required in any
particular scenario. Unless there are use cases which require only a very
small part of the 44MB of 3rd party code, making 900k of extensions look
big (your last use case may be one, so I take my statement back).

 and
  it makes a lot more sense to split the 3rd party code.
 
 
 Did you mean split the 3rd party code from the rest of Tuscany as one
 monster 40Mb bundle? or split that monster bundle in a set of smaller
 bundles?


I would like to split the monster bundle into as many chunks as is
manageable.


  I don't want to sound like I am doing a sales pitch for OSGi, but I am
 not
  sure there is a cleaner or easier way to sort out versioning issues that
  Tuscany would face if different versions of 3rd party libraries were
  used, compared to running Tuscany under OSGi. An OSGi runtime would
 enable
  different versions of 3rd party libraries, different versions of Tuscany
  runtime and different versions of Tuscany extensions to coexist in a
 single
  VM with very little extra effort. Implementing something similar with a
  classloader architecture outside of OSGi would be significantly more
 complex
  (and will eventually reinvent OSGi).
 

 OK, I like OSGi too :) My previous question about requirements was an
 attempt to step back from the technology details and generate some
 discussion around the bundle use cases.

 Let me try to help and list the use cases I can think of, and how they
 relate to the bundles you've listed.

 - I develop a business application, and I'm using the SCA APIs, I don't
 want any dependency on a particular runtime or whatever version of it.
 -- happy with oat.sca.api!

 - I'm developing an extension, using the Tuscany SPI, and running with the
 Tuscany runtime
 -- happy with oat.corespi
 -- would be happy too if oat.core.spi and oat.runtime were a single
 bundle as long as the runtime packages are not exported. I guess any
 single project that wants to run something is going to have a dependency
 on oat.runtime anyway to be able to start the runtime, right?


As an extension developer, you should only import packages that are
published as SPIs. It shouldn't matter whether the runtime bundle exported
packages (for instance to split the runtime itself into multiple bundles).

- I'm developing an extension which depends on a 3rd party dependency
 from oat.dependencies
 -- not happy with a 40Mb bundle
 -- what'll happen if I have two copies of Axis2's Axiom in 2 bundles?
 can I still pass an axiom element between 2 extensions using these 2
 copies? I guess not...


Now I think you are asking for too much.

Presumably you have two different versions of Axiom in the two bundles. You
can have extensionA (Axiom v1) and extensionB (Axiom v2) coexist as long as
they don't want to pass Axiom elements between them. And you can have
extensionA (Axiom v1) and extensionB (Axiom v1) coexist and pass Axiom
elements between them, and these can coexist with extensionC (Axiom v2) and
extensionD (Axiom v2), which pass Axiom elements between C and D.
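Under OSGi this isolation falls out of package-level version ranges in the bundle manifests. A minimal sketch (the bundle names and version ranges here are hypothetical, purely to illustrate the mechanism, not actual Tuscany bundles):

```
# Hypothetical MANIFEST.MF fragments -- names and versions are illustrative.
# extensionA and extensionB would both resolve against an Axiom 1.x bundle:
Bundle-SymbolicName: extensionA
Import-Package: org.apache.axiom.om;version="[1.0,2.0)"

# extensionC and extensionD would resolve against an Axiom 2.x bundle,
# in the same VM, without seeing the 1.x classes:
Bundle-SymbolicName: extensionC
Import-Package: org.apache.axiom.om;version="[2.0,3.0)"
```

The framework wires each importer to a compatible exporter, which is why A and B can exchange Axiom objects with each other but not with C and D.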


 - I just developed version v4.0.3 of implementation-java, which requires
 xml-api v47.12, unfortunately incompatible with the xml-api version in
 Tuscany v4.0.2.
 -- can we walk through the steps to upgrade my bundle-ized Tuscany
 v4.0.2 working with my implementation-java v4.0.3?


At the moment, I doubt this will be possible (I think you are deliberately
trying to make my life difficult).

I can give you an alternative scenario that is similar, but is more
achievable. If you have developed v4.0.3 of binding-ws-axis2, which requires
a new version of Axis2, which is incompatible with the version of Axis2 used
in Tuscany v4.0.2, you should be able to upgrade Tuscany 4.0.2 to work with
binding-ws-axis2 v4.0.3.

I think you will run into problems if you upgraded implementation-java to a

Re: Classloading in Tuscany

2007-10-25 Thread ant elder
On 10/25/07, Rajini Sivaram [EMAIL PROTECTED] wrote:

snip

 This does imply splitting both Tuscany extension bundle
 and the big 3rd party bundle, into smaller chunks. Because of its size, I
 am
 more inclined to split the 3rd party bundle into smaller bundles first
 (though I have no idea where to start with this huge big list of jar
 files).


I can help with that; after doing lots of releases I've a good understanding
of what each jar is for and what uses it. How about starting with whatever
bundle make-up is easiest for you, and then we can juggle things around to
get to something everyone is happy with.

   ...ant


Re: Classloading in Tuscany

2007-10-25 Thread Rajini Sivaram
Thank you, Ant. That will be very helpful.

Let me finish off the classloading changes first, and I will get back to you
(hopefully sometime next week).


Thank you...

Regards,

Rajini


On 10/25/07, ant elder [EMAIL PROTECTED] wrote:

 On 10/25/07, Rajini Sivaram [EMAIL PROTECTED] wrote:

 snip

 This does imply splitting both Tuscany extension bundle
  and the big 3rd party bundle, into smaller chunks. Because of its size,
 I
  am
  more inclined to split the 3rd party bundle into smaller bundles first
  (though I have no idea where to start with this huge big list of jar
  files).


 I can help with that, after doing lots of releases i've a good
 understanding
 of what each jar is for and what uses it. How about starting with whatever
 bundle make up is easiest for you and then we can juggle things around to
 get to something everyone is happy with.

   ...ant



Re: [SDO+DAS] order of changed data objects in change summary

2007-10-25 Thread Amita Vadhavkar
I am trying to see if the below is a potential problem or a rare case.
Please give your comments.

Let us take an example - customer doing bulk purchase.

When DAS does query to Customer+Order it gets back -
Customer1-Order1...Order50
(DAS starts ChangeSummary logging after query is transformed into DataGraph)

At first time,

Customer deletes Order1...Order10
He then modifies Order11...Order20
He then creates Order1...Order10 (say there are no auto-gen keys etc., and the
Customer
just preserves the deleted Orders' IDs in the new ones.)

Customer finally wants to push the changes to the DB through SDO+DAS.

DAS has a DataGraph which says: 10 deletes, 10 updates and 10 inserts.

As DAS has no way to know in which sequence to fire the DB statements, as per the
current code DAS buckets all deletes first, then all inserts and then all
updates, forms the required DB statements, and all goes well.

-
at another time,

Customer creates 10 new orders - Order51...Order60
updates same - Order51...Order60
and then due to some reason (say no funds :)), deletes some from those
Order51...Order55

Customer finally wants to push the changes to the DB through SDO+DAS.

DAS again has a DataGraph which says: 5 deletes, 10 updates and 10 inserts.

DAS has no way to know that now the inserts should happen first, then the
updates and then the deletes, as there is no time tracked. So DAS follows the
previous logic:
(1) delete Order51...Order55, (2) insert Order51...Order60, (3) update
Order51...Order60

So now, as (1) is trying to operate on data which is not in the DB yet,
there is an SQLException and the further operations will not complete.
-
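The bucketing problem above can be sketched in a few lines. This is a toy model (hypothetical classes, not the DAS/SDO API) showing why replaying the buckets as delete, insert, update fails for the second scenario, where the user created a row and then deleted it:

```java
import java.util.ArrayList;
import java.util.List;

public class ChangeOrdering {
    enum Op { DELETE, INSERT, UPDATE }

    // A logged change: the operation plus the order id it touches.
    record Change(Op op, int orderId) {}

    // DAS-style bucketing: replay all deletes, then all inserts, then all
    // updates, ignoring the order in which the user made the changes.
    static List<Change> bucketed(List<Change> log) {
        List<Change> out = new ArrayList<>();
        for (Op op : List.of(Op.DELETE, Op.INSERT, Op.UPDATE))
            for (Change c : log)
                if (c.op() == op) out.add(c);
        return out;
    }

    public static void main(String[] args) {
        // Second scenario above: the user creates order 51, updates it,
        // then deletes it, all before anything reaches the database.
        List<Change> log = List.of(
                new Change(Op.INSERT, 51),
                new Change(Op.UPDATE, 51),
                new Change(Op.DELETE, 51));
        // Bucketing replays the DELETE first, against a row that does not
        // exist in the database yet -- the SQLException scenario above.
        System.out.println(bucketed(log).get(0).op());
    }
}
```

A sequence-preserving replay (or tracking change time in the summary) would avoid this, which is exactly the question posed to the SDO side below.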


Regards,
Amita

On 10/2/07, kelvin goodson [EMAIL PROTECTED] wrote:

 Comments inline ...
 Kelvin.

 On 02/10/2007, Amita Vadhavkar [EMAIL PROTECTED] wrote:
  Hi,
 
  Question in SDO -
  ChangeSummary changeSummary = root.getDataGraph().getChangeSummary();
  List changedDOs =  commonj.sdo.ChangeSummary.getChangedDataObjects();
 
  is changedDOs an ordered list with order based on time of the change (
 i.e.
  if a user does a particular sequence of create, delete etc.
  is the sequence preserved in the changedDOs?)

 A change summary describes the net effect of changes on the data
 objects in the scope of the change summary since the point that
 logging began.  It doesn't reveal a full temporally sequenced change
 history.

 
  Question in DAS -
  when das.applyChanges(root) happens with multiple changes, in
  Changes.execute() - DAS uses buckets of 1) insert, 2)update, 3) delete
 to
  fire the SQL statements in
  that order. Thus DAS does not attempt to preserve the order from
 changedDOs.
  As a result, even if a user has done delete(customer.id=1) and then
 create(
  customer.id=1), DAS
  attempts INSERT first and then DELETE and give SQL Integrity exception.
 
  If SDO change summary is preserving the order of changes, will it be
 better
  to propagate the same order in Changes.execute()?
 
  {Please refer to JIRA-1815 for some history.}
 
  Regards,
  Amita
 





Re: JSONRPC Enhancements

2007-10-25 Thread Simon Nash

OK, this member of the community will bite :-)

Now that we have released 1.0, we should not break compatibility with
user applications or user extensions without a very good reason.
We should always try to deprecate previously supported APIs and keep
them working rather than disabling them.  Is there any way to
keep the old applications using SCADomain.js running, while supporting
and recommending the new approach using jsonrpc.js?

  Simon

ant elder wrote:


On 10/22/07, Luciano Resende [EMAIL PROTECTED] wrote:


The upgrade process is very harmless, two lines of code: one for the js
reference and another for the reference declaration. Also, it seems like
there are some bugs in the scaDomain.js [1] that would not happen
while using the manual reference. I also think that having two
very similar bindings will create confusion and other maintenance
headaches. Having said that, and as I'm working towards getting the
web2.0 References working soon, I'd like to keep this as one binding,
but I'm open if the community feels otherwise.




Doesn't look like Mr Community is answering... Trying to maintain backward
compatibility where possible is important. It doesn't matter that it's a
harmless two-line change: if some guy upgrades from Tuscany 1.0 to 1.1 and
his applications don't work any more, then that is a bad user experience
which we should try hard to avoid. Is there a reason the scaDomain.js can't
work any more? If there is a reason, then a separate new binding seems better
to me, just so we can avoid breaking anyone.

   ...ant








Re: JSONRPC Enhancements

2007-10-25 Thread ant elder
I've had a quick look and I think it should be possible to support both
approaches; I'll go give it a try. Once the sca.js approach is working and
the old scaDomain.js is used, we could log a warning message saying that
scaDomain.js is deprecated.

   ...ant

On 10/25/07, Simon Nash [EMAIL PROTECTED] wrote:

 OK, this member of the community will bite :-)

 Now that we have released 1.0, we should not break compatibility with
 user applications or user extensions without a very good reason.
 We should always try to deprecate previously supported APIs and keep
 them working rather than disabling them.  Is there any way to
 keep the old applications using SCADomain.js running, while supporting
 and recommending the new approach using jsonrpc.js?

Simon

 ant elder wrote:

  On 10/22/07, Luciano Resende [EMAIL PROTECTED] wrote:
 
 The upgrade process is very harmless, two line of code, one for the js
 reference and another for the reference declaration. Also, seems like
 there are some bugs on the scaDomain.js [1] that would not happen
 while using the manual reference. I also think that, having the two
 very similar bindings will make confusion and other maintenance
 headaches. Have said that, and as I'm working towards getting the
 web2.0 References working soon, I'd like to keep this as one binding,
 but I'm open if the community feels otherwise.
 
 
 
  Doesn't look like Mr Community is answering... Trying to maintain
 backward
  compatibility where possible is important. It doesn't matter that it's a
  harmless two-line change, if some guy upgrades from Tuscany 1.0 to 1.1 and
  his applications don't work any more then that is a bad user experience
  which we should try hard to avoid. Is there a reason the scaDomain.js can't
  work anymore? If there is a reason then a separate new binding seems
 better
  to me just so we can avoid breaking anyone.
 
 ...ant
 







Re: Implementation.script module is broken due to a change in groovy-all-1.0 POM

2007-10-25 Thread Raymond Feng

Hi,

I did try that last night and, for some reason, I saw the same problem. But now 
it works. Thanks. I'll merge it into the 1.0.1 branch now.


Raymond


- Original Message - 
From: ant elder [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Thursday, October 25, 2007 1:42 AM
Subject: Re: Implementation.script module is broken due to a change in 
groovy-all-1.0 POM




It simply needs an exclusion of the openejb dependency, which will prevent
castor from being dragged in:

   <exclusions>
       <exclusion>
           <groupId>openejb</groupId>
           <artifactId>openejb-loader</artifactId>
       </exclusion>
   </exclusions>

We could still make an October release if we can get an RC out for review
ASAP.

  ...ant

On 10/25/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

This looks like a disaster in the open source world.

http://www.nabble.com/-jira--Created:-(MEV-550)-Missing-castor-version-or-incorrect-groovy-openejb-dependencies-t4683384s177.html
.

It seems that Geronimo build is broken too. I have to delay the SCA Java
1.0.1 RC as I cannot build it any more  :-(.

Thanks,
Raymond

- Original Message -
From: Raymond Feng [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Wednesday, October 24, 2007 8:11 PM
Subject: Implementation.script module is broken due to a change in
groovy-all-1.0 POM


 Hi,

 Starting from today, the implementation.script module is broken due to 
 a

 change of groovy-all-1.0 POM in the maven repo. Please see the similar
 report at:

 http://permalink.gmane.org/gmane.comp.lang.groovy.user/25859

 It's unfortunate that they could change the POM after the 1.0 release
was
 published for a long time. I tried to use newer version of groovy but
bsf
 is not happy.

 Thanks,
 Raymond












Resolving Component type files

2007-10-25 Thread Rajini Sivaram
Hello,

Is there any reason why unlike CompositeModelResolver and
ConstrainingTypeModelResolver, ComponentTypeModelResolver does not look at
imported namespaces for resolving component type files?

My test case contains:
   ContributionA: contains a composite file, with a component C1
   ContributionB: contains the Java implementation classes for C1
(x.y.C1.class), and the componentType file (x.y.C1.componentType)

The model resolver used to resolve the composite is associated with
ContributionA, and when implementation.java looks for the componentType
file using this model resolver, it does not find it, since it doesn't look
anywhere except in ContributionA.

Is this a valid test case, or should the componentType file always be in
ContributionA, along with the composite?

If the componentType file is allowed to be inside ContributionB (since the
componentType file describes an implementation, I would have expected it to
be colocated with the implementation), what type of import/export statement
should be used in ContributionA? ContributionA contains <import.java
package="x.y"/> to find the implementation class x.y.C1. Should that somehow be
used to resolve the componentType file as well, or should there be another
namespace import specifically for the componentType file (<import
namespace="x.y"/>)?
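For illustration, a sketch of what the declarations under discussion might look like in the two contributions' sca-contribution.xml files. The export elements, and whether the namespace import would actually resolve the componentType file, are assumptions here — the latter is exactly the open question:

```xml
<!-- ContributionA/META-INF/sca-contribution.xml (illustrative sketch) -->
<contribution xmlns="http://www.osoa.org/xmlns/sca/1.0">
    <import.java package="x.y"/>   <!-- resolves the class x.y.C1 -->
    <import namespace="x.y"/>      <!-- would this find x.y.C1.componentType? -->
</contribution>

<!-- ContributionB/META-INF/sca-contribution.xml (illustrative sketch) -->
<contribution xmlns="http://www.osoa.org/xmlns/sca/1.0">
    <export.java package="x.y"/>
    <export namespace="x.y"/>
</contribution>
```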


Thank you...

Regards,

Rajini


[jira] Resolved: (TUSCANY-1694) sdotest does not compile

2007-10-25 Thread Adriano Crestani (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adriano Crestani resolved TUSCANY-1694.
---

Resolution: Fixed

 sdotest does not compile
 

 Key: TUSCANY-1694
 URL: https://issues.apache.org/jira/browse/TUSCANY-1694
 Project: Tuscany
  Issue Type: Bug
  Components: C++ SDO
Affects Versions: Cpp-Next
 Environment: Linux Fedora Core  2.6.21-1.3194.fc7
 g++ (GCC) 4.1.2 20070502 (Red Hat 4.1.2-12)
Reporter: Ralf Henschkowski
 Fix For: Cpp-Next

 Attachments: sdotest.patch


 gcc complains that RefCountBase::~RefCountBase() is incorrect. Removing the 
 RefCountBase namespace qualifier from the destructor fixes it.






[jira] Resolved: (TUSCANY-1695) SDOUtils.cpp does not compile due to problematic cast

2007-10-25 Thread Adriano Crestani (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adriano Crestani resolved TUSCANY-1695.
---

Resolution: Fixed

 SDOUtils.cpp does not compile due to problematic cast
 -

 Key: TUSCANY-1695
 URL: https://issues.apache.org/jira/browse/TUSCANY-1695
 Project: Tuscany
  Issue Type: Bug
  Components: C++ Build
Affects Versions: Cpp-Next
 Environment: Linux Fedora Core 2.6.21-1.3194.fc7 
 g++ (GCC) 4.1.2 20070502 (Red Hat 4.1.2-12)
Reporter: Ralf Henschkowski
 Fix For: Cpp-Next


 The compiler sees an ambiguity in the following line:
 ===
 --- sdo/runtime/core/src/commonj/sdo/SDOUtils.cpp   (revision 575198)
 +++ sdo/runtime/core/src/commonj/sdo/SDOUtils.cpp   (working copy)
 @@ -275,7 +275,7 @@
              j != pl.end();
              j++)
         {
 -            PropertyPtr current = *j;
 +            PropertyPtr current = (PropertyPtr)*j;
              out << "\tProperty: " << current->getName()
 The explicit cast helped to compile it, but I'm not sure if it is a fix ...






[jira] Updated: (TUSCANY-1868) Schema imports not working in multiple class-loader environment

2007-10-25 Thread Kelvin Goodson (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kelvin Goodson updated TUSCANY-1868:


Component/s: Java SDO Implementation

 Schema imports not working in multiple class-loader environment
 ---

 Key: TUSCANY-1868
 URL: https://issues.apache.org/jira/browse/TUSCANY-1868
 Project: Tuscany
  Issue Type: Bug
  Components: Java SDO Implementation
Affects Versions: Java-SDO-1.0
 Environment: n/a
Reporter: David T. Adcox
 Fix For: Java-SDO-Next

 Attachments: Test1868.java


 There is a problem with loading schemas that import an additional namespace 
 when in a multiple class-loader environment.  The problem exists due to a 
 design flaw in how the EcoreBuilder and package registries interact under a 
 specific circumstance.  That circumstance exists when the default helper 
 context is used to load a schema model using the XSDHelper.define() method in 
 two different, peer class-loaders.  These peer class-loaders must share the 
 same parentage.  
            ---------------
            |  CL parent  |
            ---------------
              /         \
             /           \
      ----------       ----------
      |  CL1   |       |  CL2   |
      ----------       ----------
 In this scenario, the same EcoreBuilder is used on both passes of 
 XSDHelper.define(), because the EcoreBuilder is associated with the CL 
 Parent.  A unique registry is created for CL1 and another for CL2.  During 
 the processing for CL1, the target namespace and all imported namespaces are 
 properly parsed and the models stored in the associated registry, but during 
 the processing for CL2, because the same EcoreBuilder is used, only the 
 target namespace is processed.  So, any imported models are missing from the 
 CL2 registry.   
 I've attached an example that will demonstrate this issue.






[jira] Resolved: (TUSCANY-1780) [JAVA-SDO] Incorrect generation of class with default value for a list

2007-10-25 Thread Kelvin Goodson (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kelvin Goodson resolved TUSCANY-1780.
-

Resolution: Fixed

Resolved in 588261.

 [JAVA-SDO] Incorrect generation of class with default value for a list
 --

 Key: TUSCANY-1780
 URL: https://issues.apache.org/jira/browse/TUSCANY-1780
 Project: Tuscany
  Issue Type: Bug
  Components: Java SDO Tools
Affects Versions: Java-SDO-1.0, Java-SDO-Next
 Environment: Windows XP, JRE 1.4.2 and JRE 1.5
Reporter: Chris Mildebrandt
Priority: Critical
 Fix For: Java-SDO-Next

 Attachments: Address.xsd, Address2.xsd, SDOClass.java


 Hello,
 There seems to be a problem when generating static classes when lists are 
 involved. I have the following lines in my schema:
 <xsd:attribute name="categoryType" type="address:CategoryType" use="required"
 default="myCat"/>
 <simpleType name="CategoryType">
     <list itemType="category"/>
 </simpleType>
 This generates the following line in the impl class:
 protected static final Object CATEGORY_TYPE_DEFAULT_ =
 ((EFactory)ModelFactory.INSTANCE).createFromString(ModelPackageImpl.eINSTANCE.getObject(),
  "myCat");
 The class ModelPackageImpl doesn't exist.
 I've tried this with the 1.0 version of SDO and a version I built today.
 Let me know if you need any more information. Thanks,
 -Chris






Please review the SCA Java 1.0.1 RC1

2007-10-25 Thread Raymond Feng

Hi,

The SCA Java 1.0.1 RC1 is ready for review.

SVN Tag: 
http://svn.apache.org/repos/asf/incubator/tuscany/tags/java/sca/1.0.1-RC1/


Stage maven repo: http://people.apache.org/~rfeng/tuscany/maven/

RAT report: 
http://people.apache.org/~rfeng/tuscany/1.0.1-RC1/1.0.1-RC1.rat.txt


Distros (zip/gz/asc/md5) : 
http://people.apache.org/~rfeng/tuscany/1.0.1-RC1/


Thanks,
Raymond

FYI:
To build the source distro from the stage maven repo, you can add the 
profiles element below to your maven settings.xml and run "mvn -Pstaging 
clean install".


<settings>
   ...
   <profiles>
       <profile>
           <id>staging</id>
           <activation>
               <activeByDefault>false</activeByDefault>
           </activation>
           <repositories>
               <repository>
                   <id>tuscany.staging</id>
                   <url>http://people.apache.org/~rfeng/tuscany/maven</url>
               </repository>
           </repositories>
           <pluginRepositories>
               <pluginRepository>
                   <id>tuscany.staging</id>
                   <url>http://people.apache.org/~rfeng/tuscany/maven</url>
               </pluginRepository>
           </pluginRepositories>
       </profile>
   </profiles>
   ...
</settings>






Re: Please review the SCA Java 1.0.1 RC1

2007-10-25 Thread Ignacio Silva-Lepe
Quick comment: the notification sample readmes should really use
release-neutral jar and dir names; my bad for not changing them before the
spin of RC1. In case a respin happens, the trivial change is the same as that
of r588330.

On 10/25/07, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 The SCA Java 1.0.1 RC1 is ready for review.

 SVN Tag:
 http://svn.apache.org/repos/asf/incubator/tuscany/tags/java/sca/1.0.1-RC1/

 Stage maven repo: http://people.apache.org/~rfeng/tuscany/maven/

 RAT report:
 http://people.apache.org/~rfeng/tuscany/1.0.1-RC1/1.0.1-RC1.rat.txt

 Distros (zip/gz/asc/md5) :
 http://people.apache.org/~rfeng/tuscany/1.0.1-RC1/

 Thanks,
 Raymond

 FYI:
 To build the source distro from the stage maven repo, you can add the
 profiles element below to your maven settings.xml and run mvn -Pstaging
 clean install.

  <settings>
     ...
     <profiles>
         <profile>
             <id>staging</id>
             <activation>
                 <activeByDefault>false</activeByDefault>
             </activation>
             <repositories>
                 <repository>
                     <id>tuscany.staging</id>
                     <url>http://people.apache.org/~rfeng/tuscany/maven</url>
                 </repository>
             </repositories>
             <pluginRepositories>
                 <pluginRepository>
                     <id>tuscany.staging</id>
                     <url>http://people.apache.org/~rfeng/tuscany/maven</url>
                 </pluginRepository>
             </pluginRepositories>
         </profile>
     </profiles>
     ...
  </settings>






Re: Please review the SCA Java 1.0.1 RC1

2007-10-25 Thread Ignacio Silva-Lepe
Trying out RC1 I am seeing the same problem as in TUSCANY-1791. Not sure why
an EOF occurs in Tomcat's servlet engine and not on Jetty.

On 10/25/07, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 The SCA Java 1.0.1 RC1 is ready for review.

 SVN Tag:
 http://svn.apache.org/repos/asf/incubator/tuscany/tags/java/sca/1.0.1-RC1/

 Stage maven repo: http://people.apache.org/~rfeng/tuscany/maven/

 RAT report:
 http://people.apache.org/~rfeng/tuscany/1.0.1-RC1/1.0.1-RC1.rat.txt

 Distros (zip/gz/asc/md5) :
 http://people.apache.org/~rfeng/tuscany/1.0.1-RC1/

 Thanks,
 Raymond

 FYI:
 To build the source distro from the stage maven repo, you can add the
 profiles element below to your maven settings.xml and run mvn -Pstaging
 clean install.

 <settings>
   ...
   <profiles>
     <profile>
       <id>staging</id>
       <activation>
         <activeByDefault>false</activeByDefault>
       </activation>
       <repositories>
         <repository>
           <id>tuscany.staging</id>
           <url>http://people.apache.org/~rfeng/tuscany/maven</url>
         </repository>
       </repositories>
       <pluginRepositories>
         <pluginRepository>
           <id>tuscany.staging</id>
           <url>http://people.apache.org/~rfeng/tuscany/maven</url>
         </pluginRepository>
       </pluginRepositories>
     </profile>
   </profiles>
   ...
 </settings>


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Registration now open for SCA/SDO events in India

2007-10-25 Thread Doug Tidwell
Friends, the registration page for the SCA/SDO briefings in India is up 
and running.  These are vendor-neutral events in which we talk about the 
joys of SCA and SDO, using Tuscany for the demos. 

The registration page for these events is at 
http://www-07.ibm.com/in/events/soaedge/index.html.

Here is the schedule:

November 6 - Chennai, Taj Coromandel Hotel, 1315-1715
November 7 - Kolkata, Taj Bengal Hotel, 1315-1715
November 13 - Bangalore, Taj West End Hotel, 1315-1715
November 14 - Delhi, Taj Ambassador Hotel, 1315-1715

The event is billed as SOA Edge 2007; the SCA/SDO session is the second 
half of the day.  Feel free to attend the SOA Lifecycle Management seminar 
before lunch if you want.

Cheers, 
-Doug

senior software engineer 
emerging technology evangelism, ibm swg strategy
http://www.ibm.com/developerWorks/

Re: JSONRPC Enhancements

2007-10-25 Thread Luciano Resende
I managed to get the scaDomain working for the JSON-RPC again. I have
also updated the helloworld-jsonrpc-webapp to use that, instead of a
local json.js proxy.


On 10/25/07, ant elder [EMAIL PROTECTED] wrote:
 I've had a quick look and I think it should be possible to support both
 approaches; I'll go give it a try. Once the sca.js approach is working and
 the old scaDomain.js is used, we could log a warning message saying that
 scaDomain.js is deprecated.

...ant

 On 10/25/07, Simon Nash [EMAIL PROTECTED] wrote:
 
  OK, this member of the community will bite :-)
 
  Now that we have released 1.0, we should not break compatibility with
  user applications or user extensions without a very good reason.
  We should always try to deprecate previously supported APIs and keep
  them working rather than disabling them.  Is there any way to
  keep the old applications using SCADomain.js running, while supporting
  and recommending the new approach using jsonrpc.js?
 
 Simon
 
  ant elder wrote:
 
   On 10/22/07, Luciano Resende [EMAIL PROTECTED] wrote:
  
  The upgrade process is very harmless: two lines of code, one for the js
  reference and another for the reference declaration. Also, it seems that
  there are some bugs in scaDomain.js [1] that would not happen
  while using the manual reference. I also think that having two
  very similar bindings will cause confusion and other maintenance
  headaches. Having said that, and as I'm working towards getting the
  web2.0 References working soon, I'd like to keep this as one binding,
  but I'm open if the community feels otherwise.
  
  
  
   Doesn't look like Mr Community is answering... Trying to maintain
  backward
   compatibility where possible is important. It doesn't matter that it's a
   harmless two-line change: if some guy upgrades from Tuscany 1.0 to 1.1 and
   his applications don't work any more, then that is a bad user experience
   which we should try hard to avoid. Is there a reason the scaDomain.js can't
   work anymore? If there is a reason, then a separate new binding seems
  better
   to me just so we can avoid breaking anyone.
  
  ...ant
  
 
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 



-- 
Luciano Resende
Apache Tuscany Committer
http://people.apache.org/~lresende
http://lresende.blogspot.com/

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Created: (TUSCANY-1869) one step further, enlist *ws over jms* and local XAResource in one transaction

2007-10-25 Thread gengshaoguang (JIRA)
one step further, enlist *ws over jms* and local XAResource in one transaction
--

 Key: TUSCANY-1869
 URL: https://issues.apache.org/jira/browse/TUSCANY-1869
 Project: Tuscany
  Issue Type: Improvement
  Components: Java SCA Core Runtime
Affects Versions: Java-SCA-Next
 Environment: svn trunk, jencks, activemq
Reporter: gengshaoguang




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Updated: (TUSCANY-1869) one step further, enlist *ws over jms* and local XAResource in one transaction

2007-10-25 Thread gengshaoguang (JIRA)

 [ 
https://issues.apache.org/jira/browse/TUSCANY-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gengshaoguang updated TUSCANY-1869:
---

Attachment: sca_diagram_2_references_in_1_transaction.gif

Hi everyone,
This diagram shows a further intention: to enlist XAResources into one 
transaction.

Here we have a local component which operates on any XAResources, and a 
reference to a remote service.

Any failure in either of the two will cause both to roll back.

But one problem remains: JCA under a transaction will cause the message not to 
reach the endpoint until the transaction commits. This is much different from a 
database transaction, which could be leveraged by isolation.
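The all-or-nothing behavior described above (a failure in either resource rolls back both) can be sketched as a minimal two-phase-commit coordinator. This is an illustrative toy, not Tuscany or JTA code; the names TwoPhaseDemo, Participant, and Resource are made up for the example:

```java
import java.util.List;

// Toy two-phase commit: phase 1 asks every participant to prepare (vote);
// if any vote is "no", phase 2 rolls back all participants, otherwise all commit.
public class TwoPhaseDemo {

    interface Participant {
        boolean prepare();   // vote: can this resource commit?
        void commit();
        void rollback();
    }

    // Stand-in for an XAResource (e.g. a local database or a ws-over-jms send).
    static class Resource implements Participant {
        private final boolean healthy;
        String state = "ACTIVE";

        Resource(boolean healthy) { this.healthy = healthy; }

        public boolean prepare() { return healthy; }
        public void commit() { state = "COMMITTED"; }
        public void rollback() { state = "ROLLED_BACK"; }
    }

    // Coordinator: all-or-nothing outcome across every enlisted participant.
    static String runTransaction(List<? extends Participant> participants) {
        boolean allPrepared = true;
        for (Participant p : participants) {
            if (!p.prepare()) { allPrepared = false; break; }
        }
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.rollback();
        }
        return allPrepared ? "COMMITTED" : "ROLLED_BACK";
    }

    public static void main(String[] args) {
        Resource db = new Resource(true);    // local XAResource succeeds
        Resource jms = new Resource(false);  // remote ws-over-jms send fails
        System.out.println(runTransaction(List.of(db, jms)));  // both roll back
    }
}
```

Note that the JCA caveat from the comment still applies to a real JMS XAResource: the message only becomes visible to the endpoint after commit, so the remote side cannot observe it mid-transaction.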

 one step further, enlist *ws over jms* and local XAResource in one transaction
 --

 Key: TUSCANY-1869
 URL: https://issues.apache.org/jira/browse/TUSCANY-1869
 Project: Tuscany
  Issue Type: Improvement
  Components: Java SCA Core Runtime
Affects Versions: Java-SCA-Next
 Environment: svn trunk, jencks, activemq
Reporter: gengshaoguang
 Attachments: sca_diagram_2_references_in_1_transaction.gif




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Created: (TUSCANY-1870) XMLStreamHelper should generate xsi:type if the root element of the XMLDocument is not an existing global element

2007-10-25 Thread Raymond Feng (JIRA)
XMLStreamHelper should generate xsi:type if the root element of the XMLDocument 
is not an existing global element
-

 Key: TUSCANY-1870
 URL: https://issues.apache.org/jira/browse/TUSCANY-1870
 Project: Tuscany
  Issue Type: Bug
  Components: Java SDO Implementation
Affects Versions: Java-SDO-Next
Reporter: Raymond Feng


Trying to convert a Customer DataObject into an XMLStreamReader, the root element 
is "dummy", which is not a global element. We would like to see xsi:type as 
follows:

<dummy xmlns="http://dummy" xsi:type="c:Customer" xmlns:xsi="..." 
xmlns:c="http://customer">

</dummy>
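For illustration only (this is not the proposed SDO fix), output of that shape can be produced with the JDK's StAX writer; the class name XsiTypeDemo is hypothetical:

```java
import java.io.StringWriter;

import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

// Writes a non-global <dummy> root element carrying an xsi:type attribute
// so that readers can still resolve the actual type (c:Customer).
public class XsiTypeDemo {

    static final String XSI_NS = "http://www.w3.org/2001/XMLSchema-instance";

    public static String write() {
        try {
            StringWriter out = new StringWriter();
            XMLStreamWriter w =
                    XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            w.writeStartElement("", "dummy", "http://dummy");
            w.writeNamespace("", "http://dummy");   // default namespace
            w.writeNamespace("xsi", XSI_NS);
            w.writeNamespace("c", "http://customer");
            // The xsi:type hint pointing at the real (global) type:
            w.writeAttribute(XSI_NS, "type", "c:Customer");
            w.writeEndElement();
            w.close();
            return out.toString();
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(write());
    }
}
```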

I have a fix. If you agree, I can commit.

Thanks,
Raymond

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]