[jira] Assigned: (TUSCANY-1353) Exception attempting to insert rows using DAS w/DataDirect JDBC driver
[ https://issues.apache.org/jira/browse/TUSCANY-1353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amita Vadhavkar reassigned TUSCANY-1353: Assignee: Amita Vadhavkar

Exception attempting to insert rows using DAS w/DataDirect JDBC driver
--
Key: TUSCANY-1353
URL: https://issues.apache.org/jira/browse/TUSCANY-1353
Project: Tuscany
Issue Type: Bug
Components: Java DAS RDB
Affects Versions: Java-DAS-M2
Environment: Windows XP, WebLogic 8.1 SP6, Sybase 12.5, DataDirect Sybase JDBC driver (embedded within BEA WebLogic)
Reporter: Ron Gavlin
Assignee: Amita Vadhavkar
Priority: Critical

Greetings, I am having problems inserting rows with Tuscany DAS M2 using the BEA WebLogic Sybase JDBC driver (DataDirect Connect for JDBC 3.6, June 2007), which is an embedded version of the popular DataDirect JDBC driver. Although I have not tested it, I suspect this problem appears in non-Sybase versions of the driver as well. The code below generates the listed stack trace. Note: BEA apparently renames the DataDirect Connect for JDBC classes as part of its embedding process.

...
Command insert = das.createCommand("insert into Test (testCol1, testCol2) values (?, ?)");
insert.setParameter(1, str1);
insert.setParameter(2, str2);
insert.execute();

Stacktrace:
Caused by: java.sql.SQLException: [BEA][Sybase JDBC Driver]No rows affected.
at weblogic.jdbc.base.BaseExceptions.createException(Unknown Source)
at weblogic.jdbc.base.BaseException.getException(Unknown Source)
at weblogic.jdbc.base.BaseStatement.executeUpdateInternal(Unknown Source)
at weblogic.jdbc.base.BasePreparedStatement.executeUpdate(Unknown Source)
at weblogic.jdbc.wrapper.PreparedStatement.executeUpdate(PreparedStatement.java:159)
at org.apache.tuscany.das.rdb.impl.Statement.executeUpdate(Statement.java:173)
at org.apache.tuscany.das.rdb.impl.Statement.executeUpdate(Statement.java:133)
at org.apache.tuscany.das.rdb.impl.InsertCommandImpl.execute(InsertCommandImpl.java:44)

While interactively debugging org.apache.tuscany.das.rdb.impl.ConnectionImpl.prepareStatement(String queryString, String[] returnKeys), I noticed that if I manually change the boolean member variable useGetGeneratedKeys to false, no exception is generated and the insert works as designed. The DataDirect Connect for JDBC drivers are either supported by or embedded into numerous commercial application servers, including IBM WebSphere 6.1, JBoss 4.x, and BEA WebLogic. Folks using these platforms are likely to hit this problem quickly if they attempt to use the DAS. - Ron

--
This message is automatically generated by JIRA.
- You can reply to this email to add a comment to the issue online.
- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
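The debugging note above (forcing useGetGeneratedKeys to false makes the insert succeed) suggests gating the generated-keys request on the driver's reported capability. A minimal sketch of that idea; `prepareModeFor` is a hypothetical helper written for illustration, not Tuscany DAS code:

```java
import java.sql.Statement;

public class GeneratedKeysProbe {
    // Hypothetical helper: pick the prepare mode from the capability the
    // driver reports via DatabaseMetaData.supportsGetGeneratedKeys(), so a
    // driver like the embedded DataDirect one is never asked for generated
    // keys and the "No rows affected" failure is avoided.
    static int prepareModeFor(boolean supportsGetGeneratedKeys) {
        return supportsGetGeneratedKeys
                ? Statement.RETURN_GENERATED_KEYS  // driver can return keys
                : Statement.NO_GENERATED_KEYS;     // fall back to a plain insert
    }
}
```

In real code the flag would come from `connection.getMetaData().supportsGetGeneratedKeys()` before calling `connection.prepareStatement(sql, mode)`.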
Merging changes in Release 0.91 branch with the trunk
Hi, now that the 0.91 release is out, I am going to merge the changes that we made to the branch during the release over to the trunk. As of now, here are the areas that I am considering for a merge:
- distribution (since we've made quite a few changes to this during this release... I'll make sure that Luciano's current exclusion of the source build is kept intact to let the nightly builds continue)
- samples
- demos
Please let me know if any of you would like me to take care of something specifically. Thanks - Venkat
Re: Release process guide checklist, was Fwd: [VOTE] Release SDO Java 1.0-incubating
Luciano Resende wrote: Should we start thinking about a formal release guide, merging together a couple of documents we already have today, and also creating a checklist, as it looks like a couple of release candidates are having the same issues? +1 - yes, most definitely. Yours, Mike.
Re: What should be the right Parent pom version?
On 7/23/07, Luciano Resende [EMAIL PROTECTED] wrote: Any more inputs here? Maybe we could stage this in two phases: for now, just make all projects consistent and pointing to <version>2-incubating</version> as suggested on this thread, and then continue discussing the pros/cons of having multiple poms, one for each sub-project. Thoughts? On 7/17/07, Luciano Resende [EMAIL PROTECTED] wrote: My main reason for asking the question around version is that we have learned from past experience that Maven does NOT work well when you have projects with different versions in the reactor. Today, if you have a clean repo and try to build from java, the build fails trying to download some artifacts that are indeed in the reactor to be built, and I suspect that the issue is due to different versions in the reactor. I just wanted to be consistent to avoid strange errors, but now that the discussion jumped in a different direction, I have a couple of questions: What's the user experience with each project having its own parent pom? Are we still going to have a top-down build (e.g. from java)? Or are builds going to be top-down based on sub-projects (e.g. from java/sca)? Currently, the parent pom stores general standard information/configuration for all sub-projects. If we move away from this, and each sub-project has its own parent, aren't we going to be more susceptible to unsynchronized common information, as people will likely update a specific piece of info/config in one of the sub-project parent poms and forget the others? I'm interested to learn more about what the advantages of having multiple poms would be. On 7/17/07, Venkata Krishnan [EMAIL PROTECTED] wrote: +1 for each subproject to have its own... unless we want to consciously tie in some commonality between the subprojects through this.
- Venkat On 7/17/07, ant elder [EMAIL PROTECTED] wrote: On 7/17/07, Simon Laws [EMAIL PROTECTED] wrote: On 7/17/07, Luciano Resende [EMAIL PROTECTED] wrote: Doing a quick search on the code, it looks like we have a combination of parent pom references in our current trunk code.

Searching for: <version>2-incubating</version>
cts\pom.xml(24): <version>2-incubating</version>
sca\pom.xml(25): <version>2-incubating</version>
sca\pom.xml(160): <version>2-incubating</version>
Found 3 occurrence(s) in 2 file(s)

Searching for: <version>2-incubating-SNAPSHOT</version>
buildtools\pom.xml(25): <version>2-incubating-SNAPSHOT</version>
buildtools\pom.xml(30): <version>2-incubating-SNAPSHOT</version>
das\pom.xml(25): <version>2-incubating-SNAPSHOT</version>
pom\parent\pom.xml(32): <version>2-incubating-SNAPSHOT</version>
sdo\pom.xml(25): <version>2-incubating-SNAPSHOT</version>
sdo\sdo-api\pom.xml(25): <version>2-incubating-SNAPSHOT</version>
spec\sdo-api\pom.xml(25): <version>2-incubating-SNAPSHOT</version>
Found 7 occurrence(s) in 6 file(s)

I guess we should be using the SNAPSHOT version in trunk, but I want to ask before I make these changes. -- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/

Hi Luciano, shouldn't we be developing against a released/stable version of the parent pom unless there are changes that are required? I.e. if we develop against a snapshot and rely on its features, then we should release the snapshot parent pom before we release packages that depend on it. Simon

From past discussion that's been the intention, I think - that we'd use the released non-SNAPSHOT version, but that's going back to the days when there were separate releases of all the different modules. Probably a bit late for this now - but do we really even want/need this parent pom? It seems simpler and more flexible to me if each sub-project just has its own.
...ant

-- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/

Luciano, sounds sensible to me. As we have a released artifact that meets our needs, being consistent around this is good. At least then, assuming we move to separate parent poms, we are moving from one consistent state to another. Simon
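The parent-version inconsistency Luciano found by text search could be caught automatically. A hedged sketch (not existing Tuscany tooling; `ParentVersionCheck` and its regex approach are illustrative assumptions) that extracts the parent `<version>` from pom text and collects the distinct values seen across a reactor:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParentVersionCheck {
    // Pulls the first <version> that appears inside a <parent> block.
    // Crude text matching, not full XML parsing -- a real check should
    // use an XML parser or the Maven model API.
    private static final Pattern PARENT_VERSION =
            Pattern.compile("<parent>.*?<version>(.*?)</version>", Pattern.DOTALL);

    static String parentVersion(String pomXml) {
        Matcher m = PARENT_VERSION.matcher(pomXml);
        return m.find() ? m.group(1).trim() : null;
    }

    // Returns the distinct parent versions seen across a set of poms;
    // more than one entry means the reactor is inconsistent.
    static List<String> distinctParentVersions(List<String> poms) {
        List<String> seen = new ArrayList<>();
        for (String pom : poms) {
            String v = parentVersion(pom);
            if (v != null && !seen.contains(v)) seen.add(v);
        }
        return seen;
    }
}
```

Run over the pom.xml files in trunk, a non-singleton result would flag exactly the mixed 2-incubating / 2-incubating-SNAPSHOT state reported above.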
Re: Release process guide checklist, was Fwd: [VOTE] Release SDO Java 1.0-incubating
I agree we could do things to improve our releases. Most ASF releases end up having several RCs; it's a natural part of the process, and I'm not sure it indicates any failing somewhere. We've been restructuring our builds and distributions recently, and with changes like that going on there will be wrinkles. There's already lots of documentation about doing releases in the ASF - on the ASF main dev pages and within the Incubator site etc. If there are omissions from those existing guides, we should get them updated. Tuscany having a 'formal release guide' makes me nervous that it would just be used as a stick to beat people with when some issue is discovered. An issue is that currently making our releases is quite a manual process; fixing this would be more worthwhile than writing more documentation (IMHO). So:
- change builds to use the maven release plugin to avoid most of the manual steps when creating a release
- use maven to automate, as much as possible, the adding of dependency information to the LICENSE and NOTICE files
- update the RAT tool to validate the legal aspects (LICENSE/NOTICE/DISCLAIMER exist) of things like artifacts in the temp maven repository
- update RAT to validate the signatures of all downloadable artifacts
All those do require some effort and time though. ...ant On 7/24/07, Luciano Resende [EMAIL PROTECTED] wrote: Should we start thinking about a formal release guide, merging together a couple of documents we already have today, and also creating a checklist, as it looks like a couple of release candidates are having the same issues?
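The legal-files check in the list above could start as something very small. A sketch under stated assumptions (`LegalFileCheck` is hypothetical, not part of RAT; in practice the entry names would come from `ZipFile.entries()` on a distribution archive):

```java
import java.util.List;

public class LegalFileCheck {
    // Returns true when the archive entry list carries all three incubator
    // legal files (LICENSE, NOTICE, DISCLAIMER) at any path depth.
    // Sketch only: a real check would also validate the files' contents.
    static boolean hasLegalFiles(List<String> entryNames) {
        boolean license = false, notice = false, disclaimer = false;
        for (String name : entryNames) {
            license |= name.endsWith("/LICENSE") || name.equals("LICENSE");
            notice |= name.endsWith("/NOTICE") || name.equals("NOTICE");
            disclaimer |= name.endsWith("/DISCLAIMER") || name.equals("DISCLAIMER");
        }
        return license && notice && disclaimer;
    }
}
```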
-- Forwarded message --
From: ant elder [EMAIL PROTECTED]
Date: Jul 23, 2007 2:48 AM
Subject: Re: [VOTE] Release SDO Java 1.0-incubating
To: tuscany-dev@ws.apache.org, [EMAIL PROTECTED]

On 7/21/07, kelvin goodson [EMAIL PROTECTED] wrote: Please vote to release the 1.0-incubating distribution of Tuscany SDO for Java. The release candidate RC2 for Tuscany Java SDO archive distribution files are posted at [1]. The release audit tool (rat) files and associated exceptions are posted at [1] also. The maven repository artifacts are posted in a staging repository [2] http://people.apache.org/%7Ekelvingoodson/sdo_java/M3/RC2/ The tag for the source code is at [3].
[1] http://people.apache.org/~kelvingoodson/sdo_java/1.0-incubating/RC2/
[2] http://people.apache.org/~kelvingoodson/repo/org/apache/tuscany/sdo/
[3] http://svn.apache.org/repos/asf/incubator/tuscany/tags/java/sdo/1.0-incubating/
Changes in this release are attached below. Kelvin.

What's New in SDO Java 1.0-incubating

Apache Tuscany's SDO Java Release 1.0-incubating is the first such release with full coverage of the SDO 2.1 specification. In addition to adding the few remaining SDO 2.1 features not included in the 1.0-incubating-beta1 release and fixing a number of bugs (see below for detail), there are a number of new features relating to XML serialization, and new support for handling dynamic derivation from static classes. For previous revision history, take a look at http://svn.apache.org/viewvc/incubator/tuscany/tags/java/sdo/1.0-incubating-beta1/sdo/distribution/RELEASE_NOTES.txt

SDO Java 1.0-incubating is a superset of the previous SDO 1.0-incubating-beta1 release. Anything in 1.0-incubating-beta1 is also in 1.0-incubating, but 1.0-incubating contains features and bugfixes not present in the 1.0-incubating-beta1 release.
Downloading
===
Please visit http://incubator.apache.org/tuscany/sdo-java-releases.html

Binary Artifact Changes
===
PLEASE NOTE that since the 1.0-incubating-beta release the following binary artifacts have been renamed. The maven groupId of the SDO API binary artifact has changed from commonj to org.apache.tuscany.sdo. The maven artifactId for the SDO API binary artifact has changed from sdo-api-r2.1 to tuscany-sdo-api-r2.1. The jar file containing the SDO API has a new tuscany- prefix, so what was sdo-api-r2.1-1.0-incubating-beta1.jar in the beta1 release becomes tuscany-sdo-api-r2.1-1.0-incubating.jar in this release. In addition, a new maven artifact and jar has appeared:
maven groupId=org.apache.tuscany.sdo
maven artifactId=tuscany-sdo-lib
jar archive=tuscany-sdo-lib-1.0-incubating
This artifact provides a clear distinction between the Tuscany SDO implementation and the Tuscany API which extends the SDO API. See the javadoc contained in the binary release for details of the function provided by this artifact.

New Features and Fixes
==
For more detail on these fixes and features please see https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&pid=12310210&fixfor=12312521&resolution=1&sorter/field=issuekey&sorter/order=DESC&sorter/field=issuetype&sorter/order=DESC

New Feature
TUSCANY-1213 SDO 2.1 feature: DataHelper.convert()
TUSCANY-1212 SDO 2.1 feature: Property.isNullable() and Property.isOpenContent()
[jira] Commented: (TUSCANY-1355) DAS-RDB does not support Oracle or SqlServer well
[ https://issues.apache.org/jira/browse/TUSCANY-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12514922 ] wangful commented on TUSCANY-1355: -- Hi, the binary distribution at http://people.apache.org/~lresende/tuscany/das-beta1-distribution/ worked well for this problem. Thanks.

DAS-RDB does not support Oracle or SqlServer well
-
Key: TUSCANY-1355
URL: https://issues.apache.org/jira/browse/TUSCANY-1355
Project: Tuscany
Issue Type: Bug
Components: Java DAS RDB
Affects Versions: Java-DAS-M2
Environment: DAS-RDB to access an Oracle database
Reporter: wangful

I have used the following simple code to use DAS to access an Oracle database:

//String url = "jdbc:db2j:D:/RAD6/runtimes/base_v6/cloudscape/DAS";
String url = "jdbc:oracle:thin:wcs/wcs1@//raptor08:1521/g10";
String query = "select * from MYCUSTOMER";
String query_result = "";
Connection conn = null;
// Class.forName("com.ibm.db2j.jdbc.DB2jDriver").newInstance();
DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
conn = DriverManager.getConnection(url);
conn.setAutoCommit(false);
DAS das = DAS.FACTORY.createDAS(conn);
Command readStores = das.createCommand(query);
DataObject root = (DataObject) readStores.executeQuery();
DataObject cus1 = root.getDataObject("MYCUSTOMER[1]");
System.out.println(root.getInt("MYCUSTOMER[1]/ID"));
System.out.println(root.getString("MYCUSTOMER[1]/NAME"));

It causes the following error:

Exception in thread "main" java.lang.IllegalArgumentException: Class 'DataGraphRoot' does not have a feature named 'MYCUSTOMER'
at org.apache.tuscany.sdo.util.DataObjectUtil.getOpenFeature(DataObjectUtil.java:1804)
at org.apache.tuscany.sdo.util.DataObjectUtil.getProperty(DataObjectUtil.java:2367)
at org.apache.tuscany.sdo.impl.DataObjectImpl.getProperty(DataObjectImpl.java:1287)
at org.apache.tuscany.sdo.util.DataObjectUtil$Accessor.setFeatureName(DataObjectUtil.java:2054)
at org.apache.tuscany.sdo.util.DataObjectUtil$Accessor.process(DataObjectUtil.java:2161)
at
org.apache.tuscany.sdo.util.DataObjectUtil$Accessor.init(DataObjectUtil.java:1940)
at org.apache.tuscany.sdo.util.DataObjectUtil$Accessor.create(DataObjectUtil.java:1860)
at org.apache.tuscany.sdo.util.DataObjectUtil.get(DataObjectUtil.java:744)
at org.apache.tuscany.sdo.impl.DataObjectImpl.get(DataObjectImpl.java:216)
at org.apache.tuscany.sdo.impl.DataObjectImpl.getDataObject(DataObjectImpl.java:326)
at TestDAS.main(TestDAS.java:47)

But the same code and the same config work well with the Cloudscape database. There are also some other problems with Oracle; it seems DAS can't work with Oracle. Will someone look into this problem? Thanks.
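One plausible (unconfirmed) contributor to cross-database differences like this is identifier case: Oracle folds unquoted identifiers to upper case, so a feature lookup that compares names exactly can miss names built from driver metadata. A hedged sketch of a case-insensitive fallback lookup; `FeatureLookup.findFeature` is hypothetical and is not the SDO `DataObjectUtil` API:

```java
import java.util.List;

public class FeatureLookup {
    // Exact match first, then a case-insensitive fallback, so a graph built
    // from Oracle metadata (upper-cased table names) could still serve a
    // mixed-case path such as "MyCustomer[1]". Returns null when nothing
    // matches at all.
    static String findFeature(List<String> featureNames, String requested) {
        for (String n : featureNames) if (n.equals(requested)) return n;
        for (String n : featureNames) if (n.equalsIgnoreCase(requested)) return n;
        return null;
    }
}
```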
change build method to incrementalBuild, throws assembly problem
Hi all, I checked out all the code and found that the composite building method has been changed to the method named incrementalBuild, instead of the method named build. The purpose is to build every composite separately, instead of the old implementation which merged all included composites first. This makes the build processing for an inner composite run several times when a component is implemented by an inner composite, the inner composite is deployable, and the inner composite is built earlier than the outer composite. If the inner composite is built twice, this produces the assembly problem: Composite assembly problem: Service not found for component service: ComponentOne/$promoted$.Service_One. Thanks, Wang Feng
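The double-build symptom described above is typically avoided by having the builder remember which composites it has already processed. A minimal sketch of such a guard (an assumption for illustration, not the actual Tuscany builder API):

```java
import java.util.HashSet;
import java.util.Set;

public class BuildOnceGuard {
    private final Set<String> built = new HashSet<>();

    // Returns true the first time a composite name is seen, false afterwards,
    // so a caller can skip re-building an inner composite that was already
    // processed as a deployable before the outer composite reached it.
    boolean markBuilt(String compositeName) {
        return built.add(compositeName);
    }
}
```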
Re: SOAP over JMS?
Simon Laws wrote: Has anyone in Tuscany made a binding that ships SOAP messages over JMS instead of HTTP? Looking at the current code base and at the old code in the sandbox I don't see anything. Simon

Simon, shouldn't this be a simple extension of the Web services binding? The interesting question is how to indicate that a JMS transport should be used instead of HTTP. The spec only allows for this to be done via WSDL at the moment - not so good if you didn't want to create the WSDL yourself. How about the idea of adding an intent for the Web services binding which can be used to indicate the transport? e.g.:
transport.http = use the HTTP transport (default)
transport.jms = use the JMS transport
transport.foo = use the foo transport
The Web services binding can indicate which of these intents it supports - since that depends on the support being available in the Web services stack that you are using. Yours, Mike.
Re: Release process guide checklist, was Fwd: [VOTE] Release SDO Java 1.0-incubating
snip.. There's already lots of doc about doing releases in the ASF - on the ASF main dev pages and within the Incubator site etc. The problem with there being lots of docs is that there are, ahem, lots of docs. Where is the definitive set of guides that provide the detail required to release Tuscany for someone, like me, who hasn't done it before? Possibly an impossible question to answer, as you don't know what I don't know and I don't know what you do know, so our views of what is an acceptable level of detail probably differ. Here are the top-level guides I found: http://www.apache.org/dev/#rreleases http://incubator.apache.org/guides/releasemanagement.html I can't say whether the above links are satisfactory as I haven't been through the process, but I agree that we should propose updates if they are found to be wanting. For example, a discussion of RAT. For consistency I would expect to see all the keystrokes recorded that are required to produce and distribute a release. The fewer the better, so yes, more automation would be good. I expect automation does not completely remove the responsibility to check the release against release criteria, e.g. legal, but gives a good indication of what is required to make the release happen. Again, anything we can do to automate these checks is good. I don't expect that release is a process that should involve a lot of imagination on our part, other than in providing more automation of the required steps. To put it another way: is there anything specific we have to do for Tuscany that would not be included in the general guide? I note that many projects do have release guides. Why is this the case? http://httpd.apache.org/dev/release.html http://cayenne.apache.org/release-guide.html http://incubator.apache.org/servicemix/release-guide.html http://activemq.apache.org/release-guide.html I do note that the Incubator release guide states "Different options or opinions are encouraged."
If options are offered (and I can't say that there are without reading the detail) then we need a place to document which are chosen for Tuscany releases. Simon
Re: SOAP over JMS?
On 7/24/07, Mike Edwards [EMAIL PROTECTED] wrote: [snip] How about the idea of adding an intent for the Web services binding which can be used to indicate the transport? [snip]

Couldn't this just use the existing Axis2 facilities, the soap/jms uri format, and be done with the scdl binding uri attribute, e.g.:

<binding.ws uri="jms:/dynamicTopics/something.TestTopic?transport.jms.ConnectionFactoryJNDIName=TopicConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=tcp://localhost:61616&amp;java.naming.security.principal=system&amp;java.naming.security.credentials=manager"/>

...ant
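The soap/jms URI above packs the JNDI configuration into query parameters. A small sketch of splitting such a URI into a property map, for inspection or debugging (`JmsUriParams` is illustrative parsing only, not how Axis2 actually consumes the URI):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JmsUriParams {
    // Splits "jms:/destination?k1=v1&k2=v2" into its parameter map.
    // No URL-decoding or validation -- just enough to inspect a binding URI.
    static Map<String, String> queryParams(String uri) {
        Map<String, String> params = new LinkedHashMap<>();
        int q = uri.indexOf('?');
        if (q < 0) return params;            // no query part at all
        for (String pair : uri.substring(q + 1).split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) params.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return params;
    }
}
```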
Re: SOAP over JMS?
On 7/24/07, ant elder [EMAIL PROTECTED] wrote: [snip] Couldn't this just use the existing Axis2 facilities, the soap/jms uri format, and be done with the scdl binding uri attribute? [snip] ...ant

This provides a usable optional way of providing this information and a convenient way to start. It would seem a little odd, though, that the JMS binding lays all this out:

<binding.jms correlationScheme="string"?
             initialContextFactory="xs:anyURI"?
             jndiURL="xs:anyURI"?
             requestConnection="QName"?
             responseConnection="QName"?
             operationProperties="QName"?
             ...>

    <destination name="xs:anyURI" type="string"? create="string"?>
        <property name="NMTOKEN" type="NMTOKEN">*
    </destination>?

    <connectionFactory name="xs:anyURI" create="string"?>
        <property name="NMTOKEN" type="NMTOKEN">*
    </connectionFactory>?

    <activationSpec name="xs:anyURI" create="string"?>
        <property name="NMTOKEN" type="NMTOKEN">*
    </activationSpec>?

    <response>
        <destination name="xs:anyURI" type="string"? create="string"?>
            <property name="NMTOKEN" type="NMTOKEN">*
        </destination>?

        <connectionFactory name="xs:anyURI" create="string"?>
            <property name="NMTOKEN" type="NMTOKEN">*
        </connectionFactory>?

        <activationSpec name="xs:anyURI" create="string"?>
            <property name="NMTOKEN" type="NMTOKEN">*
        </activationSpec>?
    </response>?

    <resourceAdapter name="NMTOKEN"?>
        <property name="NMTOKEN" type="NMTOKEN">*
    </resourceAdapter>?

    <headers JMSType="string"?
             JMSCorrelationId="string"?
             JMSDeliveryMode="string"?
             JMSTimeToLive="int"?
             JMSPriority="string"?>
        <property name="NMTOKEN" type="NMTOKEN">*
    </headers>?

    <operationProperties name="string" nativeOperation="string"?>
        <property name="NMTOKEN" type="NMTOKEN">*
        <headers JMSType="string"?
                 JMSCorrelationId="string"?
                 JMSDeliveryMode="string"?
                 JMSTimeToLive="int"?
                 JMSPriority="string"?>
            <property name="NMTOKEN" type="NMTOKEN">*
        </headers>?
    </operationProperties>*

</binding.jms>

if the only way to provide it for SOAP/JMS is through the URI. Simon
Re: SOAP over JMS?
On 7/24/07, Mike Edwards [EMAIL PROTECTED] wrote: [snip] How about the idea of adding an intent for the Web services binding which can be used to indicate the transport? [snip] Yours, Mike.

So you would end up with

<binding.ws>
    <transport.???>
        some transport config
    </transport.???>
</binding.ws>

Is that right?
Re: SOAP over JMS?
The SCA JMS binding spec also supports defining all that with the uri attribute, doesn't it? The ws soap/jms uri format has been an evolving technique used for quite a while, and there is an attempt to standardize it; see: http://mail-archives.apache.org/mod_mbox/ws-axis-dev/200701.mbox/[EMAIL PROTECTED] ...ant

On 7/24/07, Simon Laws [EMAIL PROTECTED] wrote: [snip] This provides a usable optional way of providing this information and a convenient way to start. It would seem a little odd, though, that the JMS binding lays all this out: [snip binding.jms pseudo-schema] if the only way to provide it for SOAP/JMS is through the URI. Simon
Re: SOAP over JMS?
ant elder wrote: Couldn't this just use the existing Axis2 facilities, the soap/jms uri format, and be done with the scdl binding uri attribute? [snip] ...ant

Simple that ain't ;-) Agreed that if you get into requiring all those details, that is one way to supply them. As Simon notes elsewhere, the JMS binding provides a more structured way of providing all that gorp. I was hoping for something simple, but perhaps I'm being too simplistic. Yours, Mike.
Re: SOAP over JMS?
Simon Laws wrote: On 7/24/07, Mike Edwards [EMAIL PROTECTED] wrote: Simon Laws wrote: Has anyone in Tuscany made a binding that ships SOAP messages over JMS instead of HTTP? Looking at the current code base and at the old code in the sandbox I don't see anything. Simon Simon, Shouldn't this be a simple extension of the Web services binding? The interesting question is how to indicate that a JMS transport should be used instead of HTTP. The spec only allows for this to be done via WSDL at the moment - not so good if you didn't want to create the WSDL yourself. How about the idea of adding an intent for the Web services binding which can be used to indicate the transport?? eg: transport.http = use the HTTP transport (default) transport.jms = use the JMS transport transport.foo = use the foo transport The Web services binding can indicate which of these intents it supports - since that depends on the support being available in the Web services stack that you are using. Yours, Mike. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] So you would end up with binding.ws transport.??? some transport config /transport.??? binding.ws Is that right? Nope. That's not what I mean. Sorry but I assumed too much knowledge of the SCA Policy spec... What you would get is this: binding.ws requires=transport.jms/ or binding.ws requires=transport.http/ If you require specific configuration details for the given transport, then this would have to be supplied by additional attributes or by additional child elements. The URI attribute is one possible approach, but it can get to look very messy very quickly. Whether you need a load of configuration really depends on whether you are going external to the SCA Domain. If you are going external, then detailed config is probably necessary. However, some good sensible defaulting can probably give a usable service with minimal information. 
References are a different matter since the target endpoint is what it is. Using SOAP over HTTP should in principle simplify things to some extent since the message format is known and hence there is less configuration required. Hope this helps, Yours, Mike.
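Mike's intent proposal above would surface in an SCDL fragment something like the following. This is a hedged sketch only: the service name is illustrative, and the transport.jms intent would still need to be declared and mapped to concrete policy elsewhere.

```xml
<!-- Hypothetical sketch: a WS-bound service asking for the JMS transport via an intent. -->
<service name="SomeService">
  <binding.ws requires="transport.jms"/>
</service>
```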
[jira] Created: (TUSCANY-1477) CompositeActivatorImpl.deactivate() is empty
CompositeActivatorImpl.deactivate() is empty Key: TUSCANY-1477 URL: https://issues.apache.org/jira/browse/TUSCANY-1477 Project: Tuscany Issue Type: Bug Affects Versions: Java-SCA-0.91 Reporter: Vamsavardhana Reddy CompositeActivatorImpl.deactivate() is empty. Should there be some code in there? I am seeing a problem with removing components and composites from EmbeddedSCADomain. I have called EmbeddedSCADomain.DomainCompositeHelper().stopComponent() with all the component names in my composite and then EmbeddedSCADomain.DomainCompositeHelper().removeComposite(). I am noticing that the components are not getting removed from EmbeddedSCADomain.domainComposite. EmbeddedSCADomain.DomainCompositeHelper.removeComposite() is calling compositeActivator.deactivate(), but CompositeActivatorImpl.deactivate() is empty.
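The behavior the reporter seems to expect from a non-empty deactivate() can be sketched with a toy model. All class and field names below are hypothetical stand-ins, not the actual Tuscany assembly internals; this only illustrates the idea that deactivation should detach components from the domain composite so removeComposite() leaves no stale entries.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the Tuscany assembly model classes.
class Component {
    final String name;
    boolean started = true;
    Component(String name) { this.name = name; }
}

class Composite {
    final List<Component> components = new ArrayList<>();
}

class CompositeActivator {
    // Sketch of what a non-empty deactivate() might do: stop each component
    // and detach it from the composite, so a later removeComposite() call
    // does not leave stale entries behind in the domain composite.
    void deactivate(Composite composite) {
        for (Component c : composite.components) {
            c.started = false; // release per-component runtime resources
        }
        composite.components.clear(); // detach from the domain composite
    }
}
```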
Re: Release process guide checklist, was Fwd: [VOTE] Release SDO Java 1.0-incubating
On 7/24/07, ant elder [EMAIL PROTECTED] wrote: I agree we could do things to improve our releases. Most ASF releases end up having several RCs, it's a natural part of the process, I'm not sure it indicates any failing somewhere. IMHO when you use the RC method, having multiple RCs does not indicate a failing. it really indicates that committers are carefully vetting the candidates. There's already lots of doc about doing releases in the ASF - on the ASF main dev pages and within the Incubator site etc. If there are omissions from those existing guides we should get them updated. Tuscany having a 'formal release guide' makes me nervous it would just be used as a stick to beat people with when some issue is discovered. An issue is that currently making our releases is quite a manual process; fixing this would be more worthwhile than writing more documentation (IMHO). IMO there's a balance to be struck. each project develops its own house style for releases. recording this house style allows more developers to act as release managers. IMHO automation is difficult to perfect. recording the house style helps to manage the automation process. though a worthy investment, it is best to adopt an incremental approach: automate more, but do not put off automation or releases to wait for the other. the incubator release guide is the next document on my personal hit list. i'd like to see a menu of ways that releases are done at apache, allowing projects to pick and choose their house style by combining a number of well documented alternatives. it'd be great if the tuscany team would consider feeding any release documentation they develop back into the release guide and create a style guide linked to the details. - robert
Re: Release process guide checklist, was Fwd: [VOTE] Release SDO Java 1.0-incubating
On 7/24/07, ant elder [EMAIL PROTECTED] wrote: snip - change builds to use the maven release plugin to avoid most of the manual steps when creating a release - use maven to automate, as much as possible, the adding of dependency information to the LICENSE and NOTICE files - update the RAT tool to validate the legal aspects (LICENSE/NOTICE/DISCLAIMER exist) of things like artifacts in the temp maven repository - update RAT to validate the signatures of all downloadable artifacts i've talked to brett before about integrating parts of RAT into the maven release plugin. jochen has developed a maven plugin, but being able to pass or fail relies on more function being added to RAT. i had hoped to be able to find more cycles now but IMAP is tough and a personal priority (since i use it to read my mail) so i'm not sure when i'll be able to find the cycles. SO any help on RAT would be gratefully accepted ;-) - robert
Re: SOAP over JMS?
On 7/24/07, Mike Edwards [EMAIL PROTECTED] wrote: snip
Ah, thanks Mike. I didn't latch onto the implication of the word intent. The choice we are talking about here seems more like a concrete decision than an intent. Does this match well with the, erm, intention of intents? Simon
Re: SOAP over JMS?
Simon Laws wrote: snip Ah, thanks Mike. I didn't latch onto the implication of the word intent. The choice we are talking about here seems more like a concrete decision than an intent. Does this match well with the, erm, intention of intents? Simon It is one use of intents - and, in my opinion, it is a reasonable match. It tells the binding to apply a particular policy - the policy of using a specific transport. Yours, Mike.
Re: Flexibility in supporting JDBC's Statement.RETURN_GENERATED_KEYS in RDB DAS (JIRA-1417)
Hi, Below are some details about the solution for JIRA-1353. Please review the patch. http://issues.apache.org/jira/browse/DERBY-242 indicates that for 10.1.1.0, DatabaseMetaData.supportsGetGeneratedKeys() returns false. Also, I checked that the same happens for the current Maven repo's Derby version (10.1.2.1). DatabaseMetaData.supportsGetGeneratedKeys() is not available in JDBC 2.0. (We can catch the exception if it is thrown in the supports...() call, but we cannot detect cases like the above - Derby.) So, using DatabaseMetaData.supportsGetGeneratedKeys() (when the config attribute is not set) may not be reliable in all cases. To keep the fix simple and also not break existing test cases (which assume default TRUE), the patch changes the following:
1) New Config attribute: <xsd:attribute name="useGetGeneratedKeys" type="xsd:boolean" default="true"/>
2) Default to TRUE - so old test cases and old configs continue to work
3) Remove the vendor name hardcoding logic that set the useGetGeneratedKeys flag
So, in effect, with this patch (JIRA-1353) the user gets an option to pass FALSE when they are sure that the DBMS driver in use does not support this feature. Thus, instead of hardcoding vendor names (without driver versions), the responsibility is given to the user to pass FALSE when needed. Have tested this fix on Derby, DB2, MySQL and PostgreSQL. Also, 6 new test cases added - CheckSupportGeneratedKeys. An example Config XML using the above attribute (say for PostgreSQL) will look as below:
<Config xmlns="http:///org.apache.tuscany.das.rdb/config.xsd" useGetGeneratedKeys="false"></Config>
The user will need to pass the Config during creation of the DAS instance: DAS.FACTORY.createDAS(config, getConnection()) or DAS.FACTORY.createDAS(config) or DAS.FACTORY.createDAS(InputStream configStream). The value of the attribute can be true/false, and the driver may or may not support generated keys. 
Based on this, the following situations can arise:

A) Driver supports GeneratedKeys
1] Table is created with one column having a GENERATED ALWAYS AS IDENTITY clause. Irrespective of the value of the useGetGeneratedKeys flag, the insert command will succeed.
- true flag value: insert.getGeneratedKey() will return the key value
- false flag value: insert.getGeneratedKey() will throw RuntimeException - Could not obtain generated key!
2] Table is created with no column having a GENERATED ALWAYS AS IDENTITY clause. Irrespective of the value of the useGetGeneratedKeys flag, the insert command will succeed.
- true flag value: insert.getGeneratedKey() - how should it behave? In the case of Derby it is returning wrong results.
- false flag value: insert.getGeneratedKey() will throw RuntimeException - Could not obtain generated key!

B) Driver does not support GeneratedKeys (say PostgreSQL) - tested with a test client:
1] Table can be created with no column having a GENERATED ALWAYS AS IDENTITY clause.
- When the value of the useGetGeneratedKeys flag is false, the insert command will succeed; insert.getGeneratedKey() will throw RuntimeException - Could not obtain generated key!
- When the value of the useGetGeneratedKeys flag is true, the insert command will fail.

C) setConnection(java.sql.Connection) is called (and not setConnection(java.sql.Connection, Config)): the default TRUE is assumed. When the DBMS driver does not support getGeneratedKeys, the user needs to pass a Config with useGetGeneratedKeys FALSE.

After resolution of JIRA-1353, can we link JIRA-1417 to it? Regards, Amita

On 7/10/07, Luciano Resende [EMAIL PROTECTED] wrote: It would be great if you could force a different exception in your investigation (e.g. jdbc driver supports returning the generated keys, but the call gives back a different exception), and see what is the resulting behavior of your proposal. 
On 7/10/07, Amita Vadhavkar [EMAIL PROTECTED] wrote: That is right, I will also give it a try with some known rdbms drivers and versions and list the results in JIRA-1417, and we will also analyze further for alternatives. I also saw some relevant links related to JIRA-1416 (PostgreSQL...generated keys). [1] http://gborg.postgresql.org/project/pgjdbc/bugs/bugupdate.php?984 [2] http://archives.postgresql.org/pgsql-jdbc/2007-02/msg00074.php Based on [2], it looks like server >= 8.2 has some support for auto gen keys in PostgreSQL. So, the validity of JIRA-1416 will be based on the exact version of PostgreSQL. Regards, Amita On 7/10/07, Luciano Resende [EMAIL PROTECTED] wrote: Hi Amita Indeed we need a better way to handle this, my only concern with this approach are the unknown side effects we can get if the exception returned when you first pass the Statement.RETURN_GENERATED_KEYS is not related to the JDBC driver supporting generated keys or not. On 7/9/07, Amita Vadhavkar [EMAIL PROTECTED] wrote: Hi, We are at present hardcoding some
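The useGetGeneratedKeys behavior described in this thread can be condensed into a small sketch. This is illustrative only, not the actual patch: `GeneratedKeysConfig` is a hypothetical class, while `Statement.RETURN_GENERATED_KEYS` and `Statement.NO_GENERATED_KEYS` are the standard JDBC 3.0 constants a prepared insert would be created with.

```java
import java.sql.Statement;

class GeneratedKeysConfig {
    // Sketch of the patch's rule: the config attribute (default true) decides
    // whether prepared inserts ask the driver for generated keys at all.
    static int keysFlag(boolean useGetGeneratedKeys) {
        return useGetGeneratedKeys
                ? Statement.RETURN_GENERATED_KEYS  // requires driver support
                : Statement.NO_GENERATED_KEYS;     // safe for e.g. PostgreSQL
    }

    // Sketch of getGeneratedKey() under the behavior matrix above: with the
    // flag off, or with no key available, callers get a RuntimeException.
    static Integer generatedKey(boolean useGetGeneratedKeys, Integer driverKey) {
        if (!useGetGeneratedKeys || driverKey == null) {
            throw new RuntimeException("Could not obtain generated key!");
        }
        return driverKey;
    }
}
```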
[jira] Updated: (TUSCANY-1353) Exception attempting to insert rows using DAS w/DataDirect JDBC driver
[ https://issues.apache.org/jira/browse/TUSCANY-1353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amita Vadhavkar updated TUSCANY-1353: Attachment: 1353.patch Please see details posted on the thread Exception attempting to insert rows using DAS w/DataDirect JDBC driver at http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg19752.html Key: TUSCANY-1353 URL: https://issues.apache.org/jira/browse/TUSCANY-1353 Project: Tuscany Issue Type: Bug Components: Java DAS RDB Affects Versions: Java-DAS-M2 Environment: Windows XP, WebLogic 8.1SP6, Sybase 12.5, DataDirect Sybase JDBC driver (embedded within BEA WebLogic) Reporter: Ron Gavlin Assignee: Amita Vadhavkar Priority: Critical Attachments: 1353.patch
RE: [SCA Native] preliminary ant build
That was the only drawback that I could see too. Each depot ought to be basically stand-alone. As for a top-level build.xml for all 3 projects, that would be very simple and would not require any of the ant infrastructure used by the individual projects. It would be very similar to the root build.xml for TuscanySCA. Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -Original Message- From: Pete Robbins [mailto:[EMAIL PROTECTED] Sent: Monday, July 23, 2007 11:30 PM To: tuscany-dev@ws.apache.org Subject: Re: [SCA Native] preliminary ant build A top level build in tuscany/cpp should be easy to do. I'm not sure we should move (as Brady suggested) the common ant scripts up into cpp/etc though. I think it's important that I can extract tuscany/cpp/sdo, for example, and build it without using anything outside of that tree. Cheers, On 24/07/07, Adriano Crestani [EMAIL PROTECTED] wrote: Great idea, soon I will try to apply this idea to Native DAS and see how it works. I think the idea could also be easily applied to Native SDO, as it does not have as many dependencies or as much code generation as Native SCA does. A folder ant-core could be created under the tuscany/cpp/ folder to place the ant scripts shared by the projects. Also, we could add a build.xml under tuscany/cpp/ that builds all 3 subprojects at once, if the 3 agree to implement this ant build process. What do you think? 
Regards, Adriano Crestani On 7/23/07, Brady Johnson [EMAIL PROTECTED] wrote: Correction, it should be like this: <target name="compile.core"> <cpp-compile srcdir="${core.abs.dir}" objdir="${lib.dir}" infiles="${core.cpp.files}"> <custom-cc-elements> <defineset if="windows" define="SCA_EXPORTS"/> </custom-cc-elements> </cpp-compile> </target> Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -Original Message- From: Brady Johnson [mailto:[EMAIL PROTECTED] Sent: Monday, July 23, 2007 5:05 PM To: tuscany-dev@ws.apache.org Subject: RE: [SCA Native] preliminary ant build Pete, Good catch. That's an easy fix. I'll submit it with the next patch tomorrow. Basically it involves removing SCA_EXPORTS from the Tuscany-BaseCompiler and adding it to the runtime/core/src targets: compile.core compile.extension compile.model compile.util Like this: <target name="compile.core"> <cpp-compile srcdir="${core.abs.dir}" objdir="${lib.dir}" infiles="${core.cpp.files}"/> <custom-cc-element> <defineset if="windows" define="SCA_EXPORTS"/> </custom-cc-element> </target> Tomorrow I'll have the python, ruby, rest, and maybe php extensions complete. Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -Original Message- From: Pete Robbins [mailto:[EMAIL PROTECTED] Sent: Monday, July 23, 2007 2:41 PM To: tuscany-dev@ws.apache.org Subject: Re: [SCA Native] preliminary ant build I think there is a problem in the extension compilations. The SCA_EXPORTS directive should only be set when compiling the runtime/core. When compiling for dlls on windows which use the core dll, SCA_EXPORTS must not be set. I guess this means we have to move the setting of this directive from the definition of the Tuscany-BaseCompiler. Cheers, On 23/07/07, Pete Robbins [EMAIL PROTECTED] wrote: I've applied the patch. How are you creating the patches? I had trouble applying it on Windows using TortoiseSVN. I've included the changes in the patch to the tools/TuscanyDriver build. 
I haven't tested this and I'm not sure if it works with the system.xml etc. Can you do a clean extract as a base for future patches? Cheers, On 23/07/07, Pete Robbins [EMAIL PROTECTED] wrote: I'll give this a go. I should be able to run it on Mac as well. Cheers, On 23/07/07, Brady Johnson [EMAIL PROTECTED] wrote: I updated the jira1438 with update 3, which includes the following: https://issues.apache.org/jira/browse/TUSCANY-1438 - added build.xml for the following dirs: runtime/extensions/build.xml runtime/extensions/cpp/build.xml runtime/extensions/sca/build.xml runtime/extensions/ws/build.xml - changed system.xml to check for necessary axis, php, python, rest, and ruby env vars. If they're not set in the env, look for them in platform.properties - changed compile-targets.xml targets: <cpp-install-headers/> to <cpp-install-files/>, <cpp-install-lib/> to <cpp-install-file/> - added compile-targets.xml target: <cpp-symlink/> - added library versioning and the platform.tuscanySCA.library.version
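The top-level driver build discussed above might look something like the fragment below. This is a hedged sketch under the thread's assumptions (each subproject keeps a self-contained build.xml in its own tree); the project name and subdirectory names are illustrative.

```xml
<!-- Hypothetical tuscany/cpp/build.xml: delegates to each subproject's own
     build, using none of the subprojects' shared ant infrastructure. -->
<project name="tuscany-cpp" default="all">
  <target name="all">
    <ant dir="sdo"/>
    <ant dir="das"/>
    <ant dir="sca"/>
  </target>
</project>
```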
Synapse using SCA assembly model for configuration
I recently read Dan's blog entry on the SCA assembly model: http://netzooid.com/blog/2007/07/22/sca-assembly-vs-spring-cxf/ That and some other discussions I've had made me think about maybe offering the SCA assembly model to configure Synapse. So it seems to me that you can draw a direct correlation between: Synapse Proxy and SCA Service; Synapse Endpoint and SCA Reference; Synapse Mediator and a specific type of SCA Component; Synapse Property and SCA Property. If we were to make the XMLConfigurationBuilder pluggable then we could just use this as an alternative configuration language. We did talk about this in the beginning of Synapse [we discussed having a LEX/YACC style config language - which I would still LOVE if someone wants to do that - it would make a great Computer Science project] Anyway back to SCA, it seems to me that this would be a pretty nice alternative config model, using an independent third party language. I'm guessing that there is plenty of Tuscany code that could help us implement this. Maybe we might do it jointly? So I'm imagining the existing runtime being *exactly* the same as today, but being able to use a subset of the SCA Assembly model to configure it. Maybe some of the SCA wizards on tusc-dev can jump in and let me know if this is feasible? Paul PS If someone is looking at http://www.infoq.com/news/2007/07/scaproblem and wondering where this is coming from I offer a few thoughts. Firstly, I'm always open to being proved wrong! Secondly, this would not be adding any layers of indirection... I'm mapping directly from SCA concepts into the Synapse runtime with this idea. Finally, I see nothing wrong with holding several inconsistent viewpoints at the same time :) -- Paul Fremantle Co-Founder and VP of Technical Sales, WSO2 OASIS WS-RX TC Co-chair blog: http://pzf.fremantle.org [EMAIL PROTECTED] Oxygenating the Web Service Platform, www.wso2.com
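Paul's proxy/endpoint/mediator/property correlation could be sketched as an SCDL fragment like the one below. This is purely hypothetical: the component, service, and property names are invented for illustration, and the mapping is only the rough correspondence named in the message, not a worked-out design.

```xml
<!-- Hypothetical sketch of the Synapse-to-SCA mapping; all names invented. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0" name="SynapseConfig">
  <!-- Synapse Proxy as SCA Service -->
  <service name="StockQuoteProxy" promote="LogMediator"/>
  <!-- Synapse Mediator as a specific type of SCA Component -->
  <component name="LogMediator">
    <!-- Synapse Property as SCA Property -->
    <property name="logLevel">full</property>
    <!-- Synapse Endpoint as SCA Reference -->
    <reference name="backend" target="StockQuoteEndpoint"/>
  </component>
</composite>
```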
Re: How is the xsd:ID property type distinguished from xsd:string
Hi Pinaki, They can't be distinguished in the current version of SDO metadata, you need to look at the original XSD. The next version of SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata. In Tuscany, you can currently determine this by downcasting to the EMF implementation class, although we don't recommend people do that: System.out.println("Property isID: " + (property.getType().isDataType() && ((EAttribute)property).isID())); Frank. Pinaki Poddar [EMAIL PROTECTED] wrote on 07/24/2007 01:00:03 AM: Hi, A newbie question: How can two Properties, one defined as xsd:string and the other as xsd:ID, be distinguished? Assume: 1. we have a simple XML schema defining a Person SDO Type with two properties as follows: <xsd:complexType name="Person"> <xsd:attribute name="firstName" type="xsd:string"/> <xsd:attribute name="id" type="xsd:ID"/> </xsd:complexType> 2. TypeHelper.INSTANCE.define() defines an SDO Type with two commonj.sdo.Property instances, p1 (for firstName) and p2 (for id) 3. both p1.getType().getInstanceClass() and p2.getType().getInstanceClass() return java.lang.String; both p1.getType().isDataType() and p2.getType().isDataType() return true The question is, what can be done to identify p2 as a property that was defined as xsd:ID? Thanks for your help -- Pinaki Poddar 972.834.2865 Notice: This email message, together with any attachments, may contain information of BEA Systems, Inc., its subsidiaries and affiliated entities, that may be confidential, proprietary, copyrighted and/or legally privileged, and is intended solely for the use of the individual or entity named in this message. If you are not the intended recipient, and have received this message in error, please immediately return this by email and then delete it.
RE: How is the xsd:ID property type distinguished from xsd:string
Hi Frank, Thanks. "SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata" That is good news. However, identity mechanics should appear more distinctly on the API surface, e.g. boolean Property.isIdentifier(); List<Property> Type.getIdentifiers(); I would call the absence of any identity semantics from SDO a major drawback, especially when it comes to any persistence operation on an SDO DataObject/DataGraph. Hopefully some of the SDO spec writers will notice this omission and add it to a future spec version. After a quick peek in the current DAS implementation, it appeared that 'primary key' identification is based on an existing database column name ID (yes, hardcoded) -- but I may be wrong and am ready to learn how DAS is handling the identity issue. "SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata" That is a good decision. Wrapping should always provide access to what is being wrapped. "downcasting to the EMF implementation class" Thanks for this info. I will do this for now. But I heed your advice and already have a scheme in place that programs against *only* the commonj.sdo API but can access the underlying implementation, if available, without any compile-time binding. Slightly costly -- but it works for, say, extracting package names from Types. Pinaki Poddar 972.834.2865 -Original Message- From: Frank Budinsky [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 24, 2007 9:16 AM To: tuscany-dev@ws.apache.org Subject: Re: How is the xsd:ID property type distinguished from xsd:string Hi Pinaki, They can't be distinguished in the current version of SDO metadata, you need to look at the original XSD. The next version of SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata. In Tuscany, you can currently determine this by downcasting to the EMF implementation class, although we don't recommend people do that: System.out.println("Property isID: " + (property.getType().isDataType() && ((EAttribute)property).isID())); Frank. 
snip
Re: [LDAP DAS] 1.0.0 Just about Done + Question
Luciano Resende wrote: Are we trying to make it much more complex? :-) I'm a big believer in "as simple as possible, but no simpler". Are users going to get confused to think they have a local copy of the XML config file, but the LDAP DAS is really using the one stored on the server? You are right. Having the whole configuration file in the DIT does not make much sense. Initially I was thinking that the config file would contain a list of all the xsd namespaces representing the schemas that are written to the server. But now I'm thinking that this list should be contained on the server, but will remain independent of the config file. The DAS then goes through the following sequence before writing a graph:
- Lookup the supported schemas (list of xsd namespaces) in the DIT (just creates a List<String> of xsd namespaces)
- See whether the list contains the xsd namespace for the graph that is about to be written
- If it does, write the graph
- If it does not, write the schema
- Add the xsd namespace string to the supported schema list
- Update this list on the server
Sound OK? SNIP BTW, it would be great if you could add some overview design doc on the Wiki, also some sample code, or pointers to sample code, etc... Sure - I'm just implementing the things we are going over right now, and then I'm going to write a users guide, followed by updates to the design guide. The remaining (for a working LDAP DAS) task list looks approximately like this:
- Finish the DAS interface / object (main CRUD interface (LdapDAS.write(EDataGraph), )
- Test the DAS interface / object
- Finish and test JNDI connection pooling configuration (low priority)
- Update the EDataGraphCreator to ignore Transient properties (right now it will write all the properties)
- Add support for multiplicity-many EAttributes (right now it just assumes that they are singular)
- Complete users guide
- Complete design guide
- Formal Apache review
Thoughts? 
SNIP
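The write sequence described above can be sketched as follows. This is a hedged illustration only: the real LDAP DAS would consult the DIT over JNDI, whereas here the server-side supported-schema list is just an in-memory set, and all names (`LdapDasSketch`, the counters) are invented.

```java
import java.util.HashSet;
import java.util.Set;

class LdapDasSketch {
    // Stand-in for the list of xsd namespaces stored on the server (DIT).
    final Set<String> supportedSchemas = new HashSet<>();
    int schemasWritten = 0;
    int graphsWritten = 0;

    // Sequence from the thread: look up the supported schemas, write the
    // schema first if the graph's namespace is unknown, update the list
    // on the server, then write the graph itself.
    void write(String graphNamespace) {
        if (!supportedSchemas.contains(graphNamespace)) {
            schemasWritten++;                     // write the schema to the DIT
            supportedSchemas.add(graphNamespace); // update the server-side list
        }
        graphsWritten++;                          // write the graph
    }
}
```

A second write for the same namespace skips the schema step, which is the point of keeping the list on the server rather than in the config file.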
Re: Flexibility in supporting JDBC's Statement.RETURN_GENERATED_KEYS in RDB DAS (JIRA-1417)
Hi Amita, Since DAS has JDK 1.4 as a requirement and the JDBC 3.0 APIs are built into JDK 1.4, isn't it sufficient to interpret a JDBC 2.0 driver throwing an exception from supportsGetGeneratedKeys() as false? Also, since the DAS is currently a pre-1.0 release, I don't think our solution needs to be driven by backwards-compatibility or whether test cases get broken or not. From my perspective, the default case (the absence of the attribute) should be driven by the JDBC driver's DatabaseMetaData.supportsGetGeneratedKeys() value. The useGetGeneratedKeys attribute could be explicitly set to true for cases like Derby where the driver's partial support for this feature is sufficient for the needs of the DAS. In the case of Oracle, they just started supporting getGeneratedKeys() in Oracle 10g R2. It is not supported in earlier versions of the database or drivers. So, using the DatabaseMetaData-driven approach, the DAS should be able to support all Oracle versions out of the box with no special config attribute. In the future, hopefully Derby will implement full getGeneratedKeys() support and thus would not require special configuration within the DAS. My two cents... - Ron - Original Message From: Amita Vadhavkar [EMAIL PROTECTED] To: [EMAIL PROTECTED]; tuscany-dev@ws.apache.org Sent: Tuesday, July 24, 2007 8:56:36 AM Subject: Re: Flexibility in supporting JDBC's Statement.RETURN_GENERATED_KEYS in RDB DAS (JIRA-1417) Hi, Below are some details about the solution for JIRA-1353. Please review the patch. http://issues.apache.org/jira/browse/DERBY-242 - indicates that for 10.1.1.0, DatabaseMetadata.supportsGetGeneratedKeys() returns false. Also, checked that for the current Maven Repo's Derby version (10.1.2.1) same is happening. DatabaseMetadata.supportsGetGeneratedKeys() is not available in JDBC 2.0. 
(We can catch exception if it is thrown in the supports...() , but we can not detect cases like above - Derby) So, using DatabaseMetadata.supportsGetGeneratedKeys() (when config attribute is not set) may not be reliable in all cases. To keep the fix simple and also not to break existing test cases (which assume default TRUE), the following is changed in patch 1) New Config attribute xsd:attribute name=useGetGeneratedKeys type=xsd:boolean default=true/ 2) Default to TRUE - so old test cases and old configs continue to work 3) Remove vendor name hardcoding logic to set flag useGetGeneratedKeys So, in effect, with this patch (JIRA-1353) user will get an option to pass FALSE, when it is sure that the dbms driver in use does not support this feature. Thus, instead of hardcoding vendor names (without driver versions), the responsibility is given to user to pass FALSE when needed. Have tested this fix on Derby, DB2, MySQL and PostgreSQL. Also, new testcases (6) added - CheckSupportGeneratedKeys example Config XML using the above attribute (say for PostgreSQL), the XML will look as below Config xmlns=http:///org.apache.tuscany.das.rdb/config.xsd;; useGetGeneratedKeys=false /Config -- User will need to pass the Config during creation of DAS instance. DAS.FACTORY.createDAS(config, getConnection()) or DAS.FACTORY.createDAS(config) or DAS.FACTORY.createDAS(InputStream configStream) The value of the attrib can be true/false. And Driver may/may not support GeneratedKeys. Based on this, following situations can arrive- A Driver supports GeneratedKeys 1]Table is created with one column having GENERATED ALWAYS AS IDENTITY clause, Irrespective of value of useGetGeneratedKeys flag, insert command will succeed true flag value - insert.getGeneratedKey() will return key value false flag value - insert.getGeneratedKey() will throw RuntimeException - Could not obtain generated key! 
2] Table is created with no column having a GENERATED ALWAYS AS IDENTITY clause. Irrespective of the value of the useGetGeneratedKeys flag, the insert command will succeed.
true flag value - insert.getGeneratedKey() - how should it behave? In the case of Derby it is returning wrong results.
false flag value - insert.getGeneratedKey() will throw RuntimeException - Could not obtain generated key!
B) Driver does not support generated keys (say PostgreSQL) - tested with a test client -
1] Table can be created with no column having a GENERATED ALWAYS AS IDENTITY clause. When the value of the useGetGeneratedKeys flag is false, the insert command will succeed and insert.getGeneratedKey() will throw RuntimeException - Could not obtain generated key! When the value of the useGetGeneratedKeys flag is true, the insert command will fail.
C) When setConnection(java.sql.Connection) is called (and not setConnection(java.sql.Connection, Config)), the default TRUE is assumed. When the DBMS driver does not support getGeneratedKeys, the user needs to pass a Config with useGetGeneratedKeys=false.
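The default logic suggested in this thread - an explicit config attribute wins, otherwise ask the driver's metadata, and treat a failure from a pre-JDBC-3.0 driver as "not supported" - can be sketched as below. The helper name and the Boolean-override convention are illustrative only, not the actual DAS code:

```java
import java.sql.DatabaseMetaData;

public class GeneratedKeysSupport {

    // Hypothetical helper, not the DAS implementation: an explicit
    // useGetGeneratedKeys config value wins; otherwise ask the driver's
    // metadata, treating any failure (e.g. an AbstractMethodError from a
    // JDBC 2.0 driver lacking the 3.0 method) as "not supported".
    public static boolean useGetGeneratedKeys(Boolean configValue, DatabaseMetaData meta) {
        if (configValue != null) {
            return configValue.booleanValue(); // explicit config attribute wins
        }
        try {
            return meta.supportsGetGeneratedKeys();
        } catch (Throwable t) { // pre-JDBC-3.0 driver, or metadata unavailable
            return false;
        }
    }

    public static void main(String[] args) {
        // Explicit config always wins, as with the proposed XSD attribute.
        System.out.println(useGetGeneratedKeys(Boolean.FALSE, null)); // false
        // No config and no usable metadata falls back to false.
        System.out.println(useGetGeneratedKeys(null, null)); // false
    }
}
```

With this shape, Derby (whose DatabaseMetaData reports false even though its partial support is sufficient for the DAS) would be handled by explicitly setting the config attribute to true, as suggested above.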
Notification binding breaking continuum build, was: svn commit: r558796 - in /incubator/tuscany/java/sca: modules/pom.xml samples/pom.xml
This change breaks the continuum nightly build (even though a local build is successful). Could you please investigate and help fix it? Thanks. [EMAIL PROTECTED] wrote: Author: isilval Date: Mon Jul 23 09:50:14 2007 New Revision: 558796 URL: http://svn.apache.org/viewvc?view=rev&rev=558796 Log: Add notification to main build Modified: incubator/tuscany/java/sca/modules/pom.xml incubator/tuscany/java/sca/samples/pom.xml
Modified: incubator/tuscany/java/sca/modules/pom.xml URL: http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/pom.xml?view=diff&rev=558796&r1=558795&r2=558796
--- incubator/tuscany/java/sca/modules/pom.xml (original)
+++ incubator/tuscany/java/sca/modules/pom.xml Mon Jul 23 09:50:14 2007
@@ -44,6 +44,7 @@
 <module>binding-ejb</module>
 <module>binding-feed</module>
 <module>binding-jsonrpc</module>
+<module>binding-notification</module>
 <module>binding-rmi</module>
 <module>binding-sca</module>
 <module>binding-ws</module>
@@ -87,6 +88,7 @@
 <module>implementation-java</module>
 <module>implementation-java-xml</module>
 <module>implementation-java-runtime</module>
+<module>implementation-notification</module>
 <module>implementation-osgi</module>
 <module>implementation-resource</module>
 <module>implementation-script</module>
Modified: incubator/tuscany/java/sca/samples/pom.xml URL: http://svn.apache.org/viewvc/incubator/tuscany/java/sca/samples/pom.xml?view=diff&rev=558796&r1=558795&r2=558796
--- incubator/tuscany/java/sca/samples/pom.xml (original)
+++ incubator/tuscany/java/sca/samples/pom.xml Mon Jul 23 09:50:14 2007
@@ -39,6 +39,9 @@
 <module>binding-echo</module>
 <module>binding-echo2-extension</module>
 <module>binding-echo-extension</module>
+<module>binding-notification-broker</module>
+<module>binding-notification-consumer</module>
+<module>binding-notification-producer</module>
 <module>calculator</module>
 <!--
 <module>calculator-distributed</module>
@@ -59,6 +62,7 @@
 <module>implementation-crud</module>
 <module>implementation-crud2-extension</module>
 <module>implementation-crud-extension</module>
+<module>implementation-notification</module>
 <module>implementation-pojo2-extension</module>
 <!--
 <module>loanapplication</module>
- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] -- Jean-Sebastien
Re: Notification binding breaking continuum build, was: svn commit: r558796 - in /incubator/tuscany/java/sca: modules/pom.xml samples/pom.xml
There was an issue with the sca-api group id, I have fixed it under revision #559106 and I have started a new build in continuum. On 7/24/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote: This change breaks the continuum nightly build (even though a local build is successful). Could you please investigate and help fix it? Thanks. [quoted commit r558796 diff snipped; see the message above] -- Jean-Sebastien -- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/
[jira] Updated: (TUSCANY-1438) Change TuscanySCA Native build system to use ant
[ https://issues.apache.org/jira/browse/TUSCANY-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brady Johnson updated TUSCANY-1438: --- Attachment: tuscany_patch_update4_jira1438 Attaching update4 which includes the following new build.xml files: - runtime/extensions/python - runtime/extensions/rest - runtime/extensions/ruby I'm working on php (which is the only one left for the source code) now, but it's giving me problems, so it might take a while. Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] Change TuscanySCA Native build system to use ant Key: TUSCANY-1438 URL: https://issues.apache.org/jira/browse/TUSCANY-1438 Project: Tuscany Issue Type: Improvement Components: C++ SCA Affects Versions: Cpp-Next Environment: all platforms Reporter: Brady Johnson Priority: Minor Fix For: Cpp-Next Attachments: tuscany_patch_update2_jira1438, tuscany_patch_update3_jira1438, tuscany_patch_update4_jira1438, tuscanySCAnative_ant.tar.gz, tuscanySCAnative_ant_update1.tar.gz In an effort to simplify the build process, I would like to propose switching over to using ant instead of automake. It will be much easier to maintain, and is used by many more developers today than automake. Per a request by Pete Robbins to show what the build scripts would look like for cpp/sca/runtime/core, I will attach a patch with the build infrastructure to build, link, and install said library. Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
Re: Is there a problem in EmbeddedSCADomain.activateDomain() ?
Hi Luciano, I have created TUSCANY-1477 for the issue. Here is another situation I am running into.
Step 0: Create an EmbeddedSCADomain.
Step 1: Add a contribution from contribution1.jar (which provides, say, Service1) to the EmbeddedSCADomain from Step 0.
Step 2: Call EmbeddedSCADomain.DomainCompositeHelper.activateDomain()
Step 3: Call EmbeddedSCADomain.DomainCompositeHelper.startComponent() on all components from contribution1.jar.
Step 4: Add a contribution from contribution2.jar (which provides, say, Service2) to the EmbeddedSCADomain from Step 0.
Step 5: Call EmbeddedSCADomain.DomainCompositeHelper.activateDomain()
Step 6: Call EmbeddedSCADomain.DomainCompositeHelper.startComponent() on all components from contribution2.jar.
After Step 3, I am able to run Service1 successfully. At Step 5, I get a message like the following: Composite assembly problem: Service not found for component service: CalculatorServiceComponent/$promoted$.CalculatorService After Step 6, Service1 no longer runs, whereas Service2 runs successfully. Am I doing anything wrong in using EmbeddedSCADomain? Can additional contributions be added to an EmbeddedSCADomain after it has been activated without disrupting the existing services? contribution1.jar and contribution2.jar are similar except that I add a 2 to each component name and the composite name. Thanks and regards, Vamsi On 7/24/07, Luciano Resende [EMAIL PROTECTED] wrote: Yes, you are right, could you please create a JIRA. If you are comfortable, it would be great if you could also help on the fix by producing a patch, otherwise I'll investigate the issue. On 7/23/07, Vamsavardhana Reddy [EMAIL PROTECTED] wrote: Hi Luciano, Thank you for your reply. It was very helpful. I am seeing another problem, and this is with removing components and composites from EmbeddedSCADomain.
I have called eScaDomain.getDomainCompositeHelper().stopComponent(eScaDomain.getDomainCompositeHelper().getComponent(component.getName())) with all the component names in my composite and then eScaDomain.getDomainCompositeHelper().removeComposite(). I am noticing that the components are not getting removed from EmbeddedSCADomain.domainComposite. EmbeddedSCADomain.DomainCompositeHelper.removeComposite() is calling compositeActivator.deactivate(), but CompositeActivatorImpl.deactivate() is empty. Is there any other method to remove the components and composite added to an EmbeddedSCADomain? Your input will be very helpful. Thank you for your time. Best regards, Vamsi On 7/23/07, Luciano Resende [EMAIL PROTECTED] wrote: When using EmbeddedSCADomain, this is the expected behavior; activate would include it in the domain, and then you would need to start/stop specific components. See the following as an example [1] [1] https://svn.apache.org/repos/asf/incubator/tuscany/java/sca/itest/contribution-import-export/test-import-composite/src/test/java/helloworld/HelloWorldServerTestCase.java On 7/23/07, Vamsavardhana Reddy [EMAIL PROTECTED] wrote: Hi, I have the following piece of code to add a contribution to an EmbeddedSCADomain:
<snip>
EmbeddedSCADomain eScaDomain = new EmbeddedSCADomain(classLoader, domainUri);
ModelResolverImpl modelResolver = new ModelResolverImpl(classLoader);
Contribution contribution = eScaDomain.getContributionService().contribute(contributionURI, new URL(contributionRoot), modelResolver, false);
for (DeployedArtifact artifact : contribution.getArtifacts()) {
    if (artifact.getModel() instanceof Composite) {
        eScaDomain.getDomainCompositeHelper().addComposite((Composite)artifact.getModel());
    }
}
eScaDomain.getDomainCompositeHelper().activateDomain();
</snip>
Service lookup is fine. But service invocation is throwing a NullPointerException. Upon debugging I notice that the references inside the composite are not wired.
If I add a call to compositeActivator.start(domainComposite) inside the EmbeddedSCADomain.activateDomain() method, I get my code to run as expected. I am wondering if there is a problem in the EmbeddedSCADomain.activateDomain() method. Thanks and regards, Vamsi -- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/
Re: Notification binding breaking continuum build, was: svn commit: r558796 - in /incubator/tuscany/java/sca: modules/pom.xml samples/pom.xml
Thanks Luciano, let me know if there are any other issues. On 7/24/07, Luciano Resende [EMAIL PROTECTED] wrote: There was an issue with the sca-api group id, I have fixed under revision #559106 and I have started a new build in continuum. [remainder of quoted thread snipped; see the messages above] -- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/
Re: Notification binding breaking continuum build, was: svn commit: r558796 - in /incubator/tuscany/java/sca: modules/pom.xml samples/pom.xml
All set, the build ran successfully. On 7/24/07, Ignacio Silva-Lepe [EMAIL PROTECTED] wrote: Thanks Luciano, let me know if there are any other issues. On 7/24/07, Luciano Resende [EMAIL PROTECTED] wrote: There was an issue with the sca-api group id, I have fixed under revision #559106 and I have started a new build in continuum. [remainder of quoted thread snipped; see the messages above] -- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/
Re: Is there a problem in EmbeddedSCADomain.activateDomain() ?
Hi Luciano, If I call DomainCompositeHelper.startComponent() on each of the components from the first contribution after deploying the second (that is, after Step 6 in my earlier message), it results in an exception, which I ignore and continue. But towards the end I have services from both contributions running successfully. I am not able to figure out an explanation. Thanks and regards, Vamsi On 7/24/07, Vamsavardhana Reddy [EMAIL PROTECTED] wrote: [earlier message quoted in full; snipped - see above] -- Luciano Resende Apache Tuscany Committer
Distributed domain support in 0.92 was: SCA 0.92 release?
To get the distributed domain support up to a level that is suitable for inclusion in the next release, I think we need to make the node configuration and management more dynamic.
Scenarios -- The current scenario being used to test distributed support is the calculator-distributed sample, where the CalculatorComponent runs on NodeA and the AddComponent and SubtractComponent run on NodeB and NodeC respectively. This is a simple standalone application and I think we should continue with it. There has also been conversation on the list about how the distributed domain can help when working in a web app environment. What are the salient points here we need to consider?
SCA Binding -- Currently the code uses JMS to implement the default remote SCA binding. The remote SCA binding is used when the system finds that two components that are wired together locally are deployed to separate nodes. As an alternative it would be good to support web services here also, and have this fit in with the new SCA binding mechanism that Simon Nash has been working on. To make a web services SCA binding work we need an EndpointLookup interface so that components out there in the distributed domain can locate other components that they are wired to.
Node Management --- Currently each node runs in isolation and starts a local SCA domain configured from .topology and .composite files. It would be good to define NodeManagement interfaces so that this information can be provided remotely and so that the node can expose remotely accessible management interfaces, for example:
- Join a domain
- Start/stop domains and components in domains
- Retrieve domain topology and topology changes relevant to the node
- Retrieve default domain URIs for this node
- Record any events that occur in the domain (could be offered as a feed)
The domain management interface Ant has recently added may help us shape this.
Also, Sebastien's work to allow local domains to be modified more dynamically should help make this work.
Distributed Domain Management --- The notion of a distributed domain running across a series of nodes gives us the opportunity to provide some centralized control, for example:
- Accept configuration changes
- Notify interested nodes/domains that configuration changes are available
- Record the endpoints of services offered by each node/domain
- Collect together events that occur in nodes (again, this could be offered as a feed)
For both NodeManagement and DistributedDomainManagement, SCA itself seems to provide a good foundation for implementing the various management services that are required. This is how the implementation to date implements its component registry. Defining such components allows us to provide different implementations; for example, we could retain the file-based management we have now for batch operation and create network-based management components for dynamic runtime environments. Anyhow, if anyone has any thoughts about what is required or wants to get involved in moving this forward then you are most welcome. Simon
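As a rough illustration of the EndpointLookup idea mentioned above, a minimal in-memory registry that a remote SCA binding could consult might look like the sketch below. All the names here (EndpointLookup, register, lookup, the node URLs) are invented for illustration; a real version would presumably be a remotable service backed by the domain's component registry rather than a local map:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an EndpointLookup interface; not a Tuscany API.
interface EndpointLookup {
    void register(String componentServiceName, String endpointUri);
    String lookup(String componentServiceName); // null if unknown
}

// In-memory stand-in for what would really be a network-accessible registry.
class InMemoryEndpointLookup implements EndpointLookup {
    private final Map<String, String> endpoints = new ConcurrentHashMap<>();

    public void register(String componentServiceName, String endpointUri) {
        endpoints.put(componentServiceName, endpointUri);
    }

    public String lookup(String componentServiceName) {
        return endpoints.get(componentServiceName);
    }
}

public class EndpointLookupDemo {
    public static void main(String[] args) {
        EndpointLookup registry = new InMemoryEndpointLookup();
        // NodeB registers the endpoint of the AddComponent it hosts.
        registry.register("AddComponent/AddService", "http://nodeB:8080/AddService");
        // NodeA, which hosts CalculatorComponent, resolves the wire target.
        System.out.println(registry.lookup("AddComponent/AddService"));
    }
}
```

The point of the interface split is that the same lookup contract could sit in front of the current file-based configuration or a dynamic network registry.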
Re: Synapse using SCA assembly model for configuration
On 7/24/07, Paul Fremantle [EMAIL PROTECTED] wrote: I recently read Dan's blog entry on the SCA assembly model: http://netzooid.com/blog/2007/07/22/sca-assembly-vs-spring-cxf/ That and some other discussions I've had made me think about maybe offering the SCA assembly model to configure Synapse. So it seems to me that you can draw a direct correlation between: Synapse Proxy and SCA Service; Synapse Endpoint and SCA Reference; Synapse Mediator - a specific type of SCA Component; Synapse Property - SCA Property. If we were to make the XMLConfigurationBuilder pluggable then we could just use this as an alternative configuration language. We did talk about this in the beginning of Synapse [we discussed having a LEX/YACC-style config language - which I would still LOVE if someone wants to do that - it would make a great Computer Science project]. Anyway, back to SCA: it seems to me that this would be a pretty nice alternative config model, using an independent third-party language. I'm guessing that there is plenty of Tuscany code that could help us implement this. Maybe we might do it jointly? So I'm imagining the existing runtime being *exactly* the same as today, but being able to use a subset of the SCA Assembly model to configure it. Maybe some of the SCA wizards on tusc-dev can jump in and let me know if this is feasible? Paul PS If someone is looking at http://www.infoq.com/news/2007/07/scaproblem and wondering where this is coming from, I offer a few thoughts. Firstly, I'm always open to being proved wrong! Secondly, this would not be adding any layers of indirection... I'm mapping directly from SCA concepts into the Synapse runtime with this idea. Finally, I see nothing wrong with holding several inconsistent viewpoints at the same time :) Great idea. This is definitely feasible, and I also think it would be really useful - so good for Synapse and good for Tuscany.
You're right, we do have plenty of code in Tuscany that we can use; a big part of recent Tuscany releases has been around modularizing the code base to make exactly this type of thing easy to do. So I'd like to take you up on the suggestion to do this jointly; as it turns out, I can even spend a bit of time helping make this happen. Let me go pull some things together and I'll post more about it tomorrow. ...ant
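To make the proposed mapping concrete, a Synapse configuration expressed as an SCA composite might look something like the sketch below. Everything here other than the standard SCA assembly elements (composite, component, service, reference, property) is invented for illustration; in particular, implementation.synapseMediator does not exist today and would have to be defined as a new implementation type:

```xml
<!-- Hypothetical sketch only: maps the correlation above
     (Proxy -> service, Endpoint -> reference, Mediator -> component,
     Property -> property) onto the standard SCA assembly XML. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="SynapseConfig">

  <!-- A Synapse Proxy exposed as an SCA service -->
  <service name="StockQuoteProxy" promote="LogMediatorComponent"/>

  <!-- A Synapse Mediator as an SCA component; implementation.synapseMediator
       is an invented implementation type for this sketch -->
  <component name="LogMediatorComponent">
    <implementation.synapseMediator class="org.apache.synapse.mediators.builtin.LogMediator"/>
    <!-- A Synapse Property as an SCA property -->
    <property name="logLevel">full</property>
    <!-- A Synapse Endpoint as an SCA reference -->
    <reference name="backend" target="StockQuoteBackend"/>
  </component>
</composite>
```

A pluggable XMLConfigurationBuilder would then translate this model into the existing Synapse runtime objects, leaving the runtime itself unchanged.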
Resolving WSDL/XSD import/include for SCA contributions
Hi, I'm working on the artifact processing of WSDL/XSD from SCA contributions, especially for the import/include directives. I would like to share what I have so far to get your feedback. Let's assume we have the following artifacts ([1][2]):
* helloworld-service.wsdl (definition) imports helloworld-interface.wsdl
* helloworld-interface.wsdl (inline schema) imports greeting.xsd
* greeting.xsd includes name.xsd
For the import/include, we could have different ways to use the location attribute for a WSDL import. Please note the SCA spec says the explicit location attribute should be honored. If it's not present, then we use the namespace-based resolution defined by SCA.
1. location="helloworld-interface.wsdl" (relative to the base document where the import is defined)
2. location="/wsdl/helloworld-interface.wsdl" (relative to a SCA contribution)
3. location="http://example.com/helloworld-interface.wsdl" (absolute URL pointing to an external resource)
4. location="" or location is not present: use the namespace to resolve the imported definition
We have two options here:
a) Plug in a Tuscany-specific resolver for WSDL4J (javax.wsdl.xml.WSDLLocator) and XmlSchema (org.apache.ws.commons.schema.resolver.URIResolver). This option can handle location cases 1, 2 and 3. For 2, we probably need some context from the contribution service. The difficulty is that both resolvers expect to take an InputSource. For location case 4 (empty or not present), we don't have a corresponding InputSource. To make WSDL4J happy, we might be able to provide a dummy InputSource pointing to a byte array which contains an empty definition (AFAIK, a null InputSource won't work) and then resolve the imported definition by QName during the resolve() phase.
b) Disable the import/include resolving feature and re-link the related artifacts in Tuscany. There are two challenges: how to disable the aggressive resolving of import/include, and how to re-link the artifacts after the fragments are loaded.
WSDL4J: We can disable the import processing by WSDL4J and then resolve the imported artifacts in a separate step. Some of the elements are undefined, and we have to navigate the WSDL4J model and resolve them. During this procedure, we can use the location as a key to constrain the scope of resolution. XmlSchema doesn't seem to have a way to disable the aggressive resolving. What do you guys think? Any opinions are welcome. Thanks, Raymond [1] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/wsdl [2] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/xsd
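For cases 1-3, the explicit location attribute reduces to plain URI resolution. The sketch below is an illustration under stated assumptions: the method name, the contribution-root convention, and the use of null to signal "fall back to namespace-based resolution" (case 4) are all invented for this example, not Tuscany code:

```java
import java.net.URI;

public class WsdlLocationResolver {

    // Hypothetical sketch of the precedence described above. Returns null
    // for case 4 (no location), signalling that the caller should fall
    // back to SCA's namespace-based resolution.
    public static URI resolveLocation(String location, URI importingDoc, URI contributionRoot) {
        if (location == null || location.isEmpty()) {
            return null; // case 4: resolve by namespace instead
        }
        URI loc = URI.create(location);
        if (loc.isAbsolute()) {
            return loc; // case 3: absolute URL to an external resource
        }
        if (location.startsWith("/")) {
            // case 2: relative to the SCA contribution root
            return contributionRoot.resolve(location.substring(1));
        }
        // case 1: relative to the document containing the import
        return importingDoc.resolve(location);
    }

    public static void main(String[] args) {
        URI doc = URI.create("file:/contrib/wsdl/helloworld-service.wsdl");
        URI root = URI.create("file:/contrib/");
        System.out.println(resolveLocation("helloworld-interface.wsdl", doc, root));
        // -> file:/contrib/wsdl/helloworld-interface.wsdl
        System.out.println(resolveLocation("/wsdl/helloworld-interface.wsdl", doc, root));
        // -> file:/contrib/wsdl/helloworld-interface.wsdl
        System.out.println(resolveLocation("", doc, root)); // -> null
    }
}
```

A WSDLLocator or URIResolver plugged into WSDL4J/XmlSchema could delegate to logic of this shape, turning the resolved URI into the InputSource those APIs expect.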
Re: Synapse using SCA assembly model for configuration
Hi, I think it's a great opportunity for both Synapse and Tuscany. From Tuscany side, the following features may be helpful. 1) We have the SCA assembly model defined as pure interfaces in Tuscany and factories can be plugged in to provide different implementations. 2) We already have a module in Tuscany to consume the SCA assembly model by the Spring runtime. Please see https://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/core-spring. It's a good example for how the SCA assembly model can be consumed by other runtimes. Thanks, Raymond - Original Message - From: ant elder [EMAIL PROTECTED] To: [EMAIL PROTECTED] Cc: tuscany-dev@ws.apache.org Sent: Tuesday, July 24, 2007 1:32 PM Subject: Re: Synapse using SCA assembly model for configuration On 7/24/07, Paul Fremantle [EMAIL PROTECTED] wrote: I recently read Dan's blog entry on the SCA assembly model: http://netzooid.com/blog/2007/07/22/sca-assembly-vs-spring-cxf/ That and some other discussions I've had made me think about maybe offering the SCA assembly model to configure Synapse. So it seems to me that you can draw a direct correlation between: Synapse Proxy and SCA Service Synapse Endpoint and SCA reference Synapse Mediator - a specific type of SCA Component Synapse Property - SCA Property If we were to make the XMLConfigurationBuilder pluggable then we could just use this as an alternative configuration language. We did talk about this in the beginning of Synapse [we discussed having a LEX/YACC style config language - which I would still LOVE if someone wants to do that - it would make a great Computer Science project] Anyway back to SCA, it seems to me that this would be a pretty nice alternative config model, using an independent third party language. I'm guessing that there is plenty of Tuscany code could help us implement this. Maybe we might do it jointly? 
So I'm imagining the existing runtime being *exactly* the same as today, but being able to use a subset of the SCA Assembly model to configure it. Maybe some of the SCA wizards on tusc-dev can jump in and let me know if this is feasible? Paul PS If someone is looking at http://www.infoq.com/news/2007/07/scaproblem and wondering where this is coming from I offer a few thoughts. Firstly, I'm always open to being proved wrong! Secondly, this would not be adding any layers of indirection... I'm mapping directly from SCA concepts into the Synapse runtime with this idea. Finally, I see nothing wrong with holding several inconsistent viewpoints at the same time :) Great idea. This is definitely feasible, and also I think it would be really useful - so good for Synapse and good for Tuscany. You're right, we do have plenty of code in Tuscany that we can use; a big part of recent Tuscany releases has been around modularizing the code base to make exactly this type of thing easy to do. So I'd like to take you up on the suggestion to do this jointly; as it turns out, I can even spend a bit of time helping make this happen. Let me go pull some things together and I'll post more about it tomorrow. ...ant
Re: Resolving WSDL/XSD import/include for SCA contributions
Hi, I would like to mention that the location attribute is required for wsdl:import by the WSDL 1.1 spec. But schemaLocation is optional for xsd:import and xsd:include. BTW, wsdl:import can also be used to import an XSD in addition to WSDL. I don't think it's a good practice but the WSDL 1.1 spec has an example of that. Thanks, Raymond - Original Message - From: Raymond Feng [EMAIL PROTECTED] To: tuscany-dev@ws.apache.org Sent: Tuesday, July 24, 2007 1:38 PM Subject: Resolving WSDL/XSD import/include for SCA contributions Hi, I'm working on the artifact processing of WSDL/XSD from SCA contributions, especially for the import/include directives. I would like to share what I have so far to get your feedback. Let's assume we have the following artifacts ([1][2]). * helloworld-service.wsdl (definition) imports helloworld-interface.wsdl * helloworld-interface.wsdl (inline schema) imports greeting.xsd * greeting.xsd includes name.xsd For the import/include, we could have different ways to use the location attribute for a WSDL import. Please note the SCA spec says the explicit location attribute should be honored. If it's not present, then we use the namespace-based resolution defined by SCA. 1. location="helloworld-interface.wsdl" (relative to the base document where the import is defined) 2. location="/wsdl/helloworld-interface.wsdl" (relative to a SCA contribution) 3. location="http://example.com/helloworld-interface.wsdl" (absolute URL pointing to an external resource) 4. location="" or location is not present: use the namespace to resolve the imported definition We have two options here: a) Plug in a Tuscany-specific resolver for WSDL4J (javax.wsdl.xml.WSDLLocator) and XmlSchema (org.apache.ws.commons.schema.resolver.URIResolver). This option can handle location cases 1, 2 and 3. For 2, we probably need some context from the contribution service. The difficulty is that both resolvers are expected to return an InputSource.
For location case 4 (empty or not present), we don't have a corresponding InputSource. To make WSDL4J happy, we might be able to provide a dummy InputSource pointing to a byte array which contains an empty definition (AFAIK, a null InputSource won't work) and then resolve the imported definition by QName during the resolve() phase. b) Disable the import/include resolving feature and re-link the related artifacts in Tuscany There are two challenges: How to disable the aggressive resolving of import/include? How to re-link the artifacts after the fragments are loaded? WSDL4J: We can disable the import processing done by WSDL4J and then resolve the imported artifacts in a separate step. Some of the elements will be left undefined, so we have to navigate the WSDL4J model and resolve them ourselves. During this procedure, we can use the location as a key to constrain the scope of resolution. XmlSchema doesn't seem to have a way to disable the aggressive resolving. What do you guys think? Any opinions are welcome. Thanks, Raymond [1] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/wsdl [2] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/xsd
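The dispatch over the four location cases above is simple enough to sketch in plain Java. This is only an illustration of the resolution order; the class and member names are made up, and a real resolver would hand WSDL4J/XmlSchema an InputSource rather than a URI string:

```java
import java.net.URI;
import java.util.List;
import java.util.Map;

// Hypothetical resolver sketch: artifacts are identified by URI strings here,
// where the real WSDL4J/XmlSchema resolvers would open an InputSource.
public class LocationResolver {
    private final URI contributionRoot;                  // base URI of the SCA contribution
    private final Map<String, List<String>> byNamespace; // namespace -> known artifact URIs

    public LocationResolver(URI contributionRoot, Map<String, List<String>> byNamespace) {
        this.contributionRoot = contributionRoot;
        this.byNamespace = byNamespace;
    }

    // Resolve an import per cases 1-4 from the mail above.
    public String resolve(String namespace, String location, URI baseDocument) {
        if (location == null || location.isEmpty()) {
            // Case 4: no location -> namespace-based resolution defined by SCA
            List<String> candidates = byNamespace.get(namespace);
            return (candidates == null || candidates.isEmpty()) ? null : candidates.get(0);
        }
        URI loc = URI.create(location);
        if (loc.isAbsolute()) {
            return loc.toString();                                // case 3: absolute URL
        }
        if (location.startsWith("/")) {
            // Case 2: leading "/" interpreted against the contribution root
            return contributionRoot.resolve(location.substring(1)).toString();
        }
        return baseDocument.resolve(loc).toString();              // case 1: relative to importing doc
    }
}
```

The namespace map in case 4 is exactly the kind of context the contribution service would have to supply.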
[jira] Created: (TUSCANY-1478) For schemas with elementFormDefault=true, serialized instance documents are invalid
For schemas with elementFormDefault=true, serialized instance documents are invalid --- Key: TUSCANY-1478 URL: https://issues.apache.org/jira/browse/TUSCANY-1478 Project: Tuscany Issue Type: Bug Components: C++ SDO Environment: all Reporter: Michael Yoder This appears to be a regression in XML serialization. The SCA CppBigBank example is currently failing to get a response from the StockQuote service due to sending an invalid request. Using the XML Schema embedded in StockQuoteService.wsdl, the following code: DataFactoryPtr mdg = DataFactory::getDataFactory(); XSDHelperPtr xsh = HelperProvider::getXSDHelper(mdg); xsh->defineFile("StockQuoteService.wsdl"); DataObjectPtr doObj = mdg->create("http://swanandmokashi.com", "GetQuotes"); doObj->setCString("QuoteTicker", "IBM"); XMLHelperPtr xmlHelper = HelperProvider::getXMLHelper(mdg); XMLDocumentPtr doc = xmlHelper->createDocument(doObj, "http://swanandmokashi.com", "GetQuotes"); xmlHelper->save(doc, "out.xml"); will produce the invalid instance document: <?xml version="1.0" encoding="UTF-8"?> <tns:GetQuotes xmlns:tns="http://swanandmokashi.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><QuoteTicker>IBM</QuoteTicker></tns:GetQuotes> The element QuoteTicker should be namespace qualified. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
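For reference, the qualified serialization the schema calls for can be reproduced with the JDK's own DOM APIs. This is only an illustration of what a valid instance document should look like (child element created in the target namespace), not the Tuscany C++ SDO code path; the class name is made up:

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class QualifiedChildDemo {
    // Build the GetQuotes document the way a schema with
    // elementFormDefault="qualified" requires: the local element
    // QuoteTicker is created in the target namespace too.
    public static String serialize() throws Exception {
        String ns = "http://swanandmokashi.com";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElementNS(ns, "tns:GetQuotes");
        doc.appendChild(root);
        Element ticker = doc.createElementNS(ns, "tns:QuoteTicker"); // qualified child
        ticker.setTextContent("IBM");
        root.appendChild(ticker);
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serialize());
    }
}
```

An unqualified QuoteTicker, as in the bug report, would fail validation against such a schema.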
[jira] Updated: (TUSCANY-1478) For schemas with elementFormDefault=true, serialized instance documents are invalid
[ https://issues.apache.org/jira/browse/TUSCANY-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Yoder updated TUSCANY-1478: --- Attachment: TUSCANY-1478.txt This patch resolves the issue and adds a unit test. For schemas with elementFormDefault=true, serialized instance documents are invalid --- Key: TUSCANY-1478 URL: https://issues.apache.org/jira/browse/TUSCANY-1478 Project: Tuscany Issue Type: Bug Components: C++ SDO Environment: all Reporter: Michael Yoder Attachments: TUSCANY-1478.txt This appears to be a regression in XML serialization. The SCA CppBigBank example is currently failing to get a response from the StockQuote service due to sending an invalid request. Using the XML Schema embedded in StockQuoteService.wsdl, the following code: DataFactoryPtr mdg = DataFactory::getDataFactory(); XSDHelperPtr xsh = HelperProvider::getXSDHelper(mdg); xsh->defineFile("StockQuoteService.wsdl"); DataObjectPtr doObj = mdg->create("http://swanandmokashi.com", "GetQuotes"); doObj->setCString("QuoteTicker", "IBM"); XMLHelperPtr xmlHelper = HelperProvider::getXMLHelper(mdg); XMLDocumentPtr doc = xmlHelper->createDocument(doObj, "http://swanandmokashi.com", "GetQuotes"); xmlHelper->save(doc, "out.xml"); will produce the invalid instance document: <?xml version="1.0" encoding="UTF-8"?> <tns:GetQuotes xmlns:tns="http://swanandmokashi.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><QuoteTicker>IBM</QuoteTicker></tns:GetQuotes> The element QuoteTicker should be namespace qualified. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
Re: [SCA Native] preliminary ant build
I've applied the patch. I'm having a few problems on Windows with my environment; I'm getting errors like: compile.cpp.osoa: [cc] 2 total files to be compiled. [cc] CompositeContext.cpp [cc] D:\tuscanysvn\cpp\sca\runtime\extensions\cpp\src\osoa\sca\CompositeContext.cpp(44) : warning C4273: 'osoa::sca::CompositeContext::CompositeContext' : inconsistent dll linkage [cc] D:\tuscanysvn\cpp\sca\runtime\extensions\cpp\src\osoa/sca/CompositeContext.h(86) : see previous definition of '{ctor}' I thought that the SCA_EXPORTS definition was causing this... but maybe not. The PHP extension is not complete and was not included in the M3 release. What problems are you seeing getting it to build? Cheers, On 24/07/07, Brady Johnson [EMAIL PROTECTED] wrote: I updated the jira1438 with update 4, which includes the following: https://issues.apache.org/jira/browse/TUSCANY-1438 New build.xml files for the following - runtime/extensions/python - runtime/extensions/rest - runtime/extensions/ruby Moved -DSCA_EXPORT from Tuscany-BaseCompiler in system.xml and added it to the runtime/core compilations I'm working on php (which is the only one left for the source code) now, but it's giving me problems, so it might take a while. Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -Original Message- From: Brady Johnson [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 24, 2007 7:52 AM To: tuscany-dev@ws.apache.org Subject: RE: [SCA Native] preliminary ant build That was the only drawback that I could see too. Each depot ought to be basically stand-alone. As for a top-level build.xml for all 3 projects, that would be very simple and would not require any of the ant infrastructure used by the individual projects. It would be very similar to the root build.xml for TuscanySCA.
Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -Original Message- From: Pete Robbins [mailto:[EMAIL PROTECTED] Sent: Monday, July 23, 2007 11:30 PM To: tuscany-dev@ws.apache.org Subject: Re: [SCA Native] preliminary ant build A top level build in tuscany/cpp should be easy to do. I'm not sure we should move (as Brady suggested) the common ant scripts up into cpp/etc though. I think it's important that I can extract tuscany/cpp/sdo, for example, and build it without using anything outside of that tree. Cheers, On 24/07/07, Adriano Crestani [EMAIL PROTECTED] wrote: Great idea, soon I will try to apply this idea to Native DAS and see how it works. I think the idea could also be easily applied to Native SDO, as it does not have as many dependencies or as much code generation as Native SCA does. A folder ant-core could be created under the tuscany/cpp/ folder to hold the ant scripts shared by the projects. Also, we could add a build.xml under tuscany/cpp/ that builds all 3 subprojects at once, if the 3 projects agree to implement this ant build process. What do you think? Regards, Adriano Crestani On 7/23/07, Brady Johnson [EMAIL PROTECTED] wrote: Correction, it should be like this: <target name="compile.core"> <cpp-compile srcdir="${core.abs.dir}" objdir="${lib.dir}" infiles="${core.cpp.files}"> <custom-cc-elements> <defineset if="windows" define="SCA_EXPORTS"/> </custom-cc-elements> </cpp-compile> </target> Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -Original Message- From: Brady Johnson [mailto:[EMAIL PROTECTED] Sent: Monday, July 23, 2007 5:05 PM To: tuscany-dev@ws.apache.org Subject: RE: [SCA Native] preliminary ant build Pete, Good catch. That's an easy fix. I'll submit it with the next patch tomorrow.
Basically it involves removing SCA_EXPORTS from the Tuscany-BaseCompiler and adding it to the runtime/core/src targets: compile.core compile.extension compile.model compile.util Like this: <target name="compile.core"> <cpp-compile srcdir="${core.abs.dir}" objdir="${lib.dir}" infiles="${core.cpp.files}"/> <custom-cc-element> <defineset if="windows" define="SCA_EXPORTS"/> </custom-cc-element> </target> Tomorrow I'll have the python, ruby, rest, and maybe php extensions complete. Brady Johnson Lead Software Developer - HydraSCA Rogue Wave Software - [EMAIL PROTECTED] -Original Message- From: Pete Robbins [mailto:[EMAIL PROTECTED] Sent: Monday, July 23, 2007 2:41 PM To: tuscany-dev@ws.apache.org Subject: Re: [SCA Native] preliminary ant build I think there is a problem in the extension compilations. The SCA_EXPORTS directive should only be set when compiling the runtime/core. When compiling dlls on Windows which use the core dll, SCA_EXPORTS must not be set. I guess
RE: How does xsd:ID property type is distinguished from xsd:string
Hi Frank, Database IDs (e.g., primary and foreign keys) are more related to xsd:key/xsd:keyref than xsd:ID, but fortunately SDO 3 is planning to address all of them :-) Thanks for telling me this. Now, is ((property.getType().isDataType()) && ((EAttribute)property).isID()) == true for a property p that is declared as xsd:key or xsd:keyref? Or more broadly, what are the semantics of EAttribute.isID()? Pinaki Poddar 972.834.2865 -Original Message- From: Frank Budinsky [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 24, 2007 3:01 PM To: tuscany-dev@ws.apache.org Subject: RE: How does xsd:ID property type is distinguished from xsd:string Hi Pinaki, Identity support is also in the SDO 3 charter: Support for a concept of identity in SDO, and its relationship to other technologies. Database IDs (e.g., primary and foreign keys) are more related to xsd:key/xsd:keyref than xsd:ID, but fortunately SDO 3 is planning to address all of them :-) Frank. Pinaki Poddar [EMAIL PROTECTED] wrote on 07/24/2007 11:02:21 AM: Hi Frank, Thanks. SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata That is good news. However, identity mechanics should appear more distinctly on the API surface, e.g. boolean Property.isIdentifier(); List<Property> Type.getIdentifiers(); I would call the absence of any identity semantics in SDO a major drawback, especially when it comes to any persistence operation on an SDO DataObject/DataGraph. Hopefully some of the SDO spec writers will notice this omission and add it to a future spec version. After a quick peek at the current DAS implementation, it appears that 'primary key' identification is based on an existing database column named ID (yes, hardcoded) -- but I may be wrong and am ready to learn how DAS is handling the identity issue. SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata That is a good decision. Wrapping should always provide access to what is being wrapped. downcasting to the EMF implementation class Thanks for this info.
I will do this for now. But I heed your advice and already have a scheme in place that programs against *only* the commonj.sdo API but can access the underlying implementation, if available, without any compile-time binding. Slightly costly -- but works for, say, extracting the package name from Types. Pinaki Poddar 972.834.2865 -Original Message- From: Frank Budinsky [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 24, 2007 9:16 AM To: tuscany-dev@ws.apache.org Subject: Re: How does xsd:ID property type is distinguished from xsd:string Hi Pinaki, They can't be distinguished in the current version of SDO metadata, you need to look at the original XSD. The next version of SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata. In Tuscany, you can currently determine this by downcasting to the EMF implementation class, although we don't recommend people do that: System.out.println("Property isID: " + ((property.getType().isDataType()) && ((EAttribute)property).isID())); Frank. Pinaki Poddar [EMAIL PROTECTED] wrote on 07/24/2007 01:00:03 AM: Hi, A newbie question: how can two Property instances, one defined as xsd:string and the other as xsd:ID, be distinguished? Assume: 1. we have a simple XML schema defining a Person SDO Type with two properties as follows: <xsd:complexType name="Person"> <xsd:attribute name="firstName" type="xsd:string"/> <xsd:attribute name="id" type="xsd:ID"/> </xsd:complexType> 2. TypeHelper.INSTANCE.define() defines an SDO Type with two commonj.sdo.Property instances, p1 (for firstName) and p2 (for id) 3. both p1.getType().getInstanceClass() and p2.getType().getInstanceClass() return java.lang.String; both p1.getType().isDataType() and p2.getType().isDataType() return true The question is, what can be done to identify p2 as a property that was defined as xsd:ID?
Thanks for your help -- Pinaki Poddar 972.834.2865 Notice: This email message, together with any attachments, may contain information of BEA Systems, Inc., its subsidiaries and affiliated entities, that may be confidential, proprietary, copyrighted and/or legally privileged, and is intended solely for the use of the individual or entity named in this message. If you are not the intended recipient, and have received this message in error, please immediately return this by email and then delete it.
RE: How does xsd:ID property type is distinguished from xsd:string
EAttribute.isID() will only be true if the type is xsd:ID. Frank. Pinaki Poddar [EMAIL PROTECTED] wrote on 07/24/2007 05:31:09 PM: Hi Frank, Database IDs (e.g., primary and foreign keys) are more related to xsd:key/xsd:keyref than xsd:ID, but fortunately SDO 3 is planning to address all of them :-) Thanks for telling me this. Now, is ((property.getType().isDataType()) && ((EAttribute)property).isID()) == true for a property p that is declared as xsd:key or xsd:keyref? Or more broadly, what are the semantics of EAttribute.isID()? Pinaki Poddar 972.834.2865 -Original Message- From: Frank Budinsky [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 24, 2007 3:01 PM To: tuscany-dev@ws.apache.org Subject: RE: How does xsd:ID property type is distinguished from xsd:string Hi Pinaki, Identity support is also in the SDO 3 charter: Support for a concept of identity in SDO, and its relationship to other technologies. Database IDs (e.g., primary and foreign keys) are more related to xsd:key/xsd:keyref than xsd:ID, but fortunately SDO 3 is planning to address all of them :-) Frank. Pinaki Poddar [EMAIL PROTECTED] wrote on 07/24/2007 11:02:21 AM: Hi Frank, Thanks. SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata That is good news. However, identity mechanics should appear more distinctly on the API surface, e.g. boolean Property.isIdentifier(); List<Property> Type.getIdentifiers(); I would call the absence of any identity semantics in SDO a major drawback, especially when it comes to any persistence operation on an SDO DataObject/DataGraph. Hopefully some of the SDO spec writers will notice this omission and add it to a future spec version. After a quick peek at the current DAS implementation, it appears that 'primary key' identification is based on an existing database column named ID (yes, hardcoded) -- but I may be wrong and am ready to learn how DAS is handling the identity issue. SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata That is a good decision.
Wrapping should always provide access to what is being wrapped. downcasting to the EMF implementation class Thanks for this info. I will do this for now. But I heed your advice and already have a scheme in place that programs against *only* the commonj.sdo API but can access the underlying implementation, if available, without any compile-time binding. Slightly costly -- but works for, say, extracting the package name from Types. Pinaki Poddar 972.834.2865 -Original Message- From: Frank Budinsky [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 24, 2007 9:16 AM To: tuscany-dev@ws.apache.org Subject: Re: How does xsd:ID property type is distinguished from xsd:string Hi Pinaki, They can't be distinguished in the current version of SDO metadata, you need to look at the original XSD. The next version of SDO (SDO 3) is planning to provide an api for accessing extended XSD metadata. In Tuscany, you can currently determine this by downcasting to the EMF implementation class, although we don't recommend people do that: System.out.println("Property isID: " + ((property.getType().isDataType()) && ((EAttribute)property).isID())); Frank. Pinaki Poddar [EMAIL PROTECTED] wrote on 07/24/2007 01:00:03 AM: Hi, A newbie question: how can two Property instances, one defined as xsd:string and the other as xsd:ID, be distinguished? Assume: 1. we have a simple XML schema defining a Person SDO Type with two properties as follows: <xsd:complexType name="Person"> <xsd:attribute name="firstName" type="xsd:string"/> <xsd:attribute name="id" type="xsd:ID"/> </xsd:complexType> 2. TypeHelper.INSTANCE.define() defines an SDO Type with two commonj.sdo.Property instances, p1 (for firstName) and p2 (for id) 3. both p1.getType().getInstanceClass() and p2.getType().getInstanceClass() return java.lang.String; both p1.getType().isDataType() and p2.getType().isDataType() return true The question is, what can be done to identify p2 as a property that was defined as xsd:ID?
Thanks for your help -- Pinaki Poddar 972.834.2865
[XmlSchema] Pluggability for XSD import/include resolvers?
Hi, We currently use XmlSchema to load XSDs. To resolve the import/include directives using our own schemes, we provide an implementation of org.apache.ws.commons.schema.resolver.URIResolver and set it on the org.apache.ws.commons.schema.XmlSchemaCollection. It works well if the schemaLocation attribute for the xsd:import or xsd:include is set. Now we would like to handle the cases where the schemaLocation attribute is not present, for example, xsd:import namespace="http://ns1/". Without the schemaLocation, we resolve the import/include by namespace. In this case, we already have a map keyed by namespace for a list of XmlSchema objects loaded from a catalog or other files, and we want to reuse them. Would it be possible to open up the XmlSchemaCollection.getSchema(SchemaKey) method so that we can override/customize the behavior to associate existing XmlSchema instances with a SchemaKey? BTW, using a singleton XmlSchemaCollection to keep the schema map is not always feasible. Another observation is that an NPE will be thrown if URIResolver.resolveEntity() returns null. Is there any way to disable the aggressive resolving/loading of import/include? [EMAIL PROTECTED] Raymond Feng Apache Tuscany
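A namespace-based fallback of the kind described above can be sketched with JDK types only. The method below mirrors the shape of URIResolver.resolveEntity(namespace, schemaLocation, baseUri) without depending on the XmlSchema jar; the class name and the namespace catalog are made up, and the empty-schema fallback is one way to sidestep the NPE mentioned:

```java
import java.io.StringReader;
import java.util.Map;
import org.xml.sax.InputSource;

// Sketch of a resolver shaped like XmlSchema's
// URIResolver.resolveEntity(namespace, schemaLocation, baseUri),
// written against JDK types only. CatalogResolver and the
// namespace -> location catalog are hypothetical.
public class CatalogResolver {
    private final Map<String, String> catalog; // namespace -> resolvable location

    public CatalogResolver(Map<String, String> catalog) {
        this.catalog = catalog;
    }

    public InputSource resolveEntity(String namespace, String schemaLocation, String baseUri) {
        if (schemaLocation != null && !schemaLocation.isEmpty()) {
            // Location-based cases; a real resolver would also resolve against baseUri
            return new InputSource(schemaLocation);
        }
        String fromCatalog = catalog.get(namespace);
        if (fromCatalog != null) {
            return new InputSource(fromCatalog);     // namespace-based fallback
        }
        // Returning null triggers the NPE described in the mail above,
        // so hand back an empty schema for the namespace instead.
        return new InputSource(new StringReader(
            "<schema xmlns='http://www.w3.org/2001/XMLSchema' targetNamespace='"
            + namespace + "'/>"));
    }
}
```

The last branch trades the NPE for a silently empty schema, so a real implementation would probably also report the unresolved namespace.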
[LDAP DAS] Current API and EMF dependencies...
Hi Ole After you mentioned you are almost done with an initial version of the LDAP DAS that provides CRUD operations [1], I went to your sandbox [2] to take a quick look at the current implementation and have a couple of questions: - Where is the latest implementation of the LDAP DAS available? Is the code you have in your sandbox current and the most up-to-date? - Your sandbox looks like it still uses a lot of EMF dependencies. Do you have any plans to base the LDAP DAS implementation on current Tuscany SDO (the SDO 2.1 specification implementation)? - The beauty of a heterogeneous DAS is to have a consistent programming model and a single set of APIs to access data from heterogeneous data sources, such as RDB and LDAP. It looks like the current implementation of the LDAP DAS is using a different set of APIs for its implementation, thus introducing a new programming model and a new set of APIs. What are your plans here? [1] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg20553.html [2] https://svn.apache.org/repos/asf/directory/sandbox/oersoy/das.ldap.parent/ -- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/
[SCA Native] SDO Build error on Linux
Trying to build Native/C++ SDO on Linux RHEL5 gives me this error: if /bin/sh ../../../../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I. -I../../../../.. -I../../../../../runtime/core/src -I//home/delfinoj/include/libxml2 -g -O0 -MT HelperProvider.lo -MD -MP -MF .deps/HelperProvider.Tpo -c -o HelperProvider.lo HelperProvider.cpp; \ then mv -f .deps/HelperProvider.Tpo .deps/HelperProvider.Plo; else rm -f .deps/HelperProvider.Tpo; exit 1; fi g++ -DHAVE_CONFIG_H -I. -I. -I../../../../.. -I../../../../../runtime/core/src -I//home/delfinoj/include/libxml2 -g -O0 -MT HelperProvider.lo -MD -MP -MF .deps/HelperProvider.Tpo -c HelperProvider.cpp -fPIC -DPIC -o .libs/HelperProvider.o ../../../../../runtime/core/src/commonj/sdo/SDOSchemaSAX2Parser.h:88: error: extra qualification 'commonj::sdo::SDOSchemaSAX2Parser::' on member 'parseURI' make[6]: *** [HelperProvider.lo] Error 1 make[6]: Leaving directory `/home/delfinoj/Tuscany/apache-repos/cpp/sdo/runtime/core/src/commonj/sdo' make[5]: *** [all-recursive] Error 1 make[5]: Leaving directory `/home/delfinoj/Tuscany/apache-repos/cpp/sdo/runtime/core/src/commonj' make[4]: *** [all-recursive] Error 1 make[4]: Leaving directory `/home/delfinoj/Tuscany/apache-repos/cpp/sdo/runtime/core/src' make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/home/delfinoj/Tuscany/apache-repos/cpp/sdo/runtime/core' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/delfinoj/Tuscany/apache-repos/cpp/sdo/runtime' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/delfinoj/Tuscany/apache-repos/cpp/sdo' make: *** [all] Error 2 Any idea? -- Jean-Sebastien - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Resolving WSDL/XSD import/include for SCA contributions
Two questions inline. Raymond Feng wrote: Hi, I'm working on the artifact processing of WSDL/XSD from SCA contributions, especially for the import/include directives. I would like to share what I have so far to get your feedback. Let's assume we have the following artifacts ([1][2]). * helloworld-service.wsdl (definition) imports helloworld-interface.wsdl * helloworld-interface.wsdl (inline schema) imports greeting.xsd * greeting.xsd includes name.xsd For the import/include, we could have different ways to use the location attribute for a WSDL import. Please note the SCA spec says the explicit location attribute should be honored. If it's not present, then we use the namespace-based resolution defined by SCA. 1. location=helloworld-interface.wsdl (relative to the base document where the import is defined) 2. location=/wsdl/helloworld-interface.wsdl (relative to a SCA contribution) 3. location=http://example.com/helloworld-interface.wsdl; (absolute URL pointing to an external resource) 4. location= or location is not present: Use the namespace to resolve the imported definition Is location= even valid? I didn't think so. We have two options here: a) Plugin a tuscany-specific resolver for WSDL4J (javax.wsdl.xml.WSDLLocator) and XmlSchema (org.apache.ws.commons.schema.resolver.URIResolver). This option can handle location case 1, 2 and 3. For 2, we probably need some context from the contribution service. The difficulty is that both resolvers expect to take an InputSource. For location case 4 (empty or not present), we don't have a corresponding InputSource. I was going to respond with a long list of pros-cons for both options, then deleted all my comments to ask a simple question :). Why can't we return an InputSource for the contents of the imported document? 
To make WSDL4J happy, we might be able to provide a dummy InputSource pointing to a byte array which contains the empty definition (AFAIK, null InputSource won't work) and then resolve the imported definition by QName during the resolve() phase. b) Disable the import/include resolving feature and re-link the related artifacts by Tuscany There are two challenges: How to disable the aggressive resolving of import/include? How to re-link the artifacts after the fragments are loaded? WSDL4J: We can disable the import processing by WSDL4J and then resolve the imported artifacts in the different step. Some of the elements are undefined and we have to navigate the WSDL4J model and resolve them. During the procedure, we can use the location as a key to constrain the scope of resolution. XmlSchema doesn't seem to have a way to disable the aggressive resolving. What do you guys think? Any opinions are welcome. Thanks, Raymond [1] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/wsdl [2] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/xsd - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] -- Jean-Sebastien - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Resolving WSDL/XSD import/include for SCA contributions
Hi, Please see my comments inline. Thanks, Raymond - Original Message - From: Jean-Sebastien Delfino [EMAIL PROTECTED] To: tuscany-dev@ws.apache.org Sent: Tuesday, July 24, 2007 6:46 PM Subject: Re: Resolving WSDL/XSD import/include for SCA contributions Two questions inline. Raymond Feng wrote: Hi, I'm working on the artifact processing of WSDL/XSD from SCA contributions, especially for the import/include directives. I would like to share what I have so far to get your feedback. Let's assume we have the following artifacts ([1][2]). * helloworld-service.wsdl (definition) imports helloworld-interface.wsdl * helloworld-interface.wsdl (inline schema) imports greeting.xsd * greeting.xsd includes name.xsd For the import/include, we could have different ways to use the location attribute for a WSDL import. Please note the SCA spec says the explicit location attribute should be honored. If it's not present, then we use the namespace-based resolution defined by SCA. 1. location="helloworld-interface.wsdl" (relative to the base document where the import is defined) 2. location="/wsdl/helloworld-interface.wsdl" (relative to a SCA contribution) 3. location="http://example.com/helloworld-interface.wsdl" (absolute URL pointing to an external resource) 4. location="" or location is not present: use the namespace to resolve the imported definition Is location="" even valid? I didn't think so. I have clarified this in the follow-up e-mail. The location attribute is required for wsdl:import, and "" doesn't seem to be a valid URI. The absence of schemaLocation for xsd:import and xsd:include is valid though. We have two options here: a) Plug in a Tuscany-specific resolver for WSDL4J (javax.wsdl.xml.WSDLLocator) and XmlSchema (org.apache.ws.commons.schema.resolver.URIResolver). This option can handle location cases 1, 2 and 3. For 2, we probably need some context from the contribution service. The difficulty is that both resolvers are expected to return an InputSource.
For location case 4 (empty or not present), we don't have a corresponding InputSource. I was going to respond with a long list of pros-cons for both options, then deleted all my comments to ask a simple question :). Why can't we return an InputSource for the contents of the imported document? Well, for an import/include that can be resolved to a document, we do return the InputSource. I have said that it works for location cases 1, 2 and 3. But if the import/include doesn't have a schemaLocation attribute, what InputSource should we return? A related question: for an artifact processor that loads multiple artifacts following the import/include directives, how can we avoid duplicate loading? For example, we have a.wsdl importing b.wsdl; both a.wsdl and b.wsdl are in the same contribution and they are processed by the WSDL artifact processor. We probably don't want to load b.wsdl twice in this case. To make WSDL4J happy, we might be able to provide a dummy InputSource pointing to a byte array which contains an empty definition (AFAIK, a null InputSource won't work) and then resolve the imported definition by QName during the resolve() phase. b) Disable the import/include resolving feature and re-link the related artifacts in Tuscany There are two challenges: How to disable the aggressive resolving of import/include? How to re-link the artifacts after the fragments are loaded? WSDL4J: We can disable the import processing done by WSDL4J and then resolve the imported artifacts in a separate step. Some of the elements will be left undefined, so we have to navigate the WSDL4J model and resolve them ourselves. During this procedure, we can use the location as a key to constrain the scope of resolution. XmlSchema doesn't seem to have a way to disable the aggressive resolving. What do you guys think? Any opinions are welcome.
Thanks, Raymond

[1] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/wsdl
[2] http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/interface-wsdl-xml/src/test/resources/xsd

-- Jean-Sebastien

- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
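As a rough illustration of the dummy-InputSource idea for location case 4 discussed above: a WSDLLocator implementation could hand WSDL4J an InputSource wrapping a byte array that holds an empty definitions element, so parsing succeeds and the real definition can be resolved by QName in the resolve() phase. This is a sketch; the class and helper names are made up, and whether WSDL4J accepts such a stub is the assumption being tested in the thread.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.xml.sax.InputSource;

public class EmptyDefinitionSource {

    // Builds the XML for an empty <definitions> element in the given
    // target namespace (hypothetical helper, not part of Tuscany).
    public static String emptyDefinitionXml(String targetNamespace) {
        return "<?xml version=\"1.0\"?>"
            + "<definitions xmlns=\"http://schemas.xmlsoap.org/wsdl/\""
            + " targetNamespace=\"" + targetNamespace + "\"/>";
    }

    // Wraps the empty definition in an InputSource, which is what a
    // WSDLLocator/URIResolver implementation has to produce; the real
    // definition would be resolved later by QName.
    public static InputSource emptyDefinition(String targetNamespace) {
        return new InputSource(new ByteArrayInputStream(
            emptyDefinitionXml(targetNamespace).getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) {
        System.out.println(emptyDefinitionXml("http://example.com/helloworld"));
    }
}
```

A resolver would return emptyDefinition(...) only when the location/schemaLocation attribute is absent, and a real InputSource for cases 1-3.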
Re: [LDAP DAS] Current API and EMF dependencies...
Hey Luciano, Luciano Resende wrote: Hi Ole After you mentioned you are almost done with an initial version of the LDAP DAS that provides CRUD operations [1], I went to your sandbox [2] to take a quick look at the current implementation and have a couple of questions:

- Where is the latest implementation of the LDAP DAS available? Is the code you have in your sandbox the current and most up-to-date one?

As soon as I have finished up the DAS interface code, commented everything, and removed leftover scraps, I'll do another check-in of the working version. The sandbox is still the old stuff from when I first got reading and writing graphs working.

- Your sandbox looks like it still uses a lot of EMF dependencies. Do you have any plans to base the LDAP DAS implementation on the current Tuscany SDO (SDO 2.1 specification implementation)?

I think we might want to shoot for having it compliant with the SDO 3.0 spec. 2.1 seems to be missing some key API features, such as something equivalent to getEIDAttributes(), which returns the EAttribute where id is true. Another thing that both EMF and SDO 2.1 are missing is getEAllCrossReferences (the opposite of getEAllContainmentReferences(); EMF does provide an implementation-specific way of doing this, although it's not part of the API). Having these in the SDO spec would give us a head start on having a consistent programming model across DASs.

- The beauty of a heterogeneous DAS is to have a consistent programming model and a single set of APIs to access data from heterogeneous data sources, such as RDB and LDAP. It looks like the current implementation of the LDAP DAS is using a different set of APIs for its implementation, thus introducing a new programming model and a new set of APIs. What are your plans here?

I totally agree with you. The LDAP DAS should work with the standard SDO API asap. So we need to identify the gaps between the EMF API parts used currently and the SDO API and bridge them.
Meanwhile, those users needing a common interface for both the LDAP DAS and RDB DAS would have to use the EMF SDO implementation.
Re: [LDAP DAS] 1.0.0 Just about Done + Question
Great tip Adriano. I modeled the configuration file using Ecore, so I just made it a multiplicity-many EAttribute with type EString. I'll make a note in my todo list to make it a TreeSet instead. Thanks, - Ole

Adriano Crestani wrote: For this list of xsd namespaces, instead of using List<String>, wouldn't it be better to use TreeSet<String>? You need to look up namespaces in the list frequently, unless there is another reason to use List that I'm not aware of : ) Regards, Adriano Crestani

On 7/24/07, Ole Ersoy [EMAIL PROTECTED] wrote: Luciano Resende wrote: Are we trying to make it much more complex ? :-) I'm a big believer in as simple as possible, but no simpler. Are users going to get confused and think they have a local copy of the XML config file, when the LDAP DAS is really using the one stored on the server ?

You are right. Having the whole configuration file in the DIT does not make much sense. Initially I was thinking that the config file would contain a list of all the xsd namespaces representing the schemas that are written to the server. But now I'm thinking that this list should be stored on the server, but remain independent of the config file. The DAS then goes through the following sequence before writing a graph:

- Look up the supported schemas (list of xsd namespaces) in the DIT (just creates a List<String> of xsd namespaces)
- See whether the list contains the xsd namespace for the graph that is about to be written
- If it does, write the graph
- If it does not, write the schema
- Add the xsd namespace string to the supported schema list
- Update this list on the server

Sound OK? SNIP BTW, it would be great if you could add some overview design doc on the Wiki, also some sample code, or pointers to sample code, etc...

Sure - I'm just implementing the things we are going over right now, and then I'm going to write a users guide, followed by updates to the design guide.
The remaining task list (for a working LDAP DAS) looks approximately like this:

- Finish the DAS interface / object (main CRUD interface: LdapDAS.write(EDataGraph), )
- Test the DAS interface / object
- Finish and test JNDI connection pooling configuration (low priority)
- Update the EDataGraphCreator to ignore transient properties (right now it will write all the properties)
- Add support for multiplicity-many EAttributes (right now it just assumes that they are singular)
- Complete users guide
- Complete design guide
- Formal Apache review

Thoughts? SNIP

- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
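The pre-write sequence Ole outlines, together with Adriano's TreeSet suggestion, might be sketched like this. The class and method names are invented for illustration, and the DIT lookup and schema write are stubbed out with an in-memory set:

```java
import java.util.TreeSet;

public class SchemaRegistrySketch {

    // Stand-in for the list of supported schema namespaces stored in
    // the DIT; a TreeSet gives fast, ordered membership tests for the
    // frequent lookups Adriano mentions.
    private final TreeSet<String> supportedNamespaces = new TreeSet<>();

    // Runs the pre-write check: returns true if the schema had to be
    // written (and registered) first, false if it was already known.
    public boolean ensureSchema(String xsdNamespace) {
        if (supportedNamespaces.contains(xsdNamespace)) {
            return false; // schema already on the server; just write the graph
        }
        // ... write the schema to the server here (omitted) ...
        supportedNamespaces.add(xsdNamespace);
        return true;
    }

    public static void main(String[] args) {
        SchemaRegistrySketch registry = new SchemaRegistrySketch();
        System.out.println(registry.ensureSchema("http://example.com/greeting")); // true
        System.out.println(registry.ensureSchema("http://example.com/greeting")); // false
    }
}
```

A real implementation would also push the updated namespace list back to the server after each new schema, per the last step of the sequence.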
Re: An idea of WS Service address
Thank you for your attention, ant! You encouraged me to read OSOA's specification again, and then I found your point. OK, now let me take my question a bit further: I found in the WebService_Binding spec, section 2.3, that a WSDL should be generated for a binding.ws, and the spec gives a generation rule. So now we know Tuscany's binding.ws works only if we provide a WSDL (it does not generate one). I don't think it is a big problem, but I found that binding.ws uri=[url] just overrides the soap:address location=[url], and it seems that without the binding.ws uri, the URL of the web service would be somewhat random (not according to the soap:address).

ant elder [EMAIL PROTECTED] wrote: On 7/18/07, shaoguang geng wrote: When I worked on the svn code, I found that the service address of a binding.ws depends on its uri attribute, not the soap:address inside the WSDL file. If the soap:address is something different from the binding.ws uri, or it does not exist at all, the client will get confused by http://[host]:[port]/[servicename]?wsdl. If I don't give a soap:address, I will see a warning, but without the binding.ws uri, Tuscany runs without any message.

The WS service address is calculated based on section 2.1.1 of the WS binding spec and section 1.7.2.1 of the assembly spec (see [1]), and there's a bit about it in the Tuscany doc at [2]. From that, Tuscany should be using the soap:address from the WSDL if you reference the WSDL port from the binding.ws wsdlElement=...; the uri attribute is only used if you don't reference the wsdl port or if it is a relative url. Not sure if that answers your question though? ...ant

[1] http://osoa.org/display/Main/Service+Component+Architecture+Specifications
[2] http://incubator.apache.org/tuscany/sca-java-bindingws.html
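Ant's resolution rule can be pictured with a small sketch. The service name, port name, and endpoint URLs below are invented; the wsdlElement syntax follows the SCA WS binding spec's `<namespace>#wsdl.port(<service>/<port>)` form:

```xml
<!-- In the WSDL, the port carries the SOAP address: -->
<wsdl:port name="HelloWorldPort" binding="tns:HelloWorldBinding">
  <soap:address location="http://example.com:8080/HelloWorld"/>
</wsdl:port>

<!-- In the composite: because wsdlElement references the port, the
     soap:address above is used as the service address; uri would only
     apply if no port were referenced, or to resolve a relative URL. -->
<binding.ws wsdlElement="http://helloworld#wsdl.port(HelloWorldService/HelloWorldPort)"
            uri="http://localhost:8085/HelloWorld"/>
```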
Re: [LDAP DAS] Current API and EMF dependencies...
Ole On 7/24/07, Ole Ersoy [EMAIL PROTECTED] wrote: Hey Luciano, Luciano Resende wrote: Hi Ole After you mentioned you are almost done with an initial version of the LDAP DAS that provides CRUD operations [1], I went to your sandbox [2] to take a quick look at the current implementation and have a couple of questions:

- Where is the latest implementation of the LDAP DAS available? Is the code you have in your sandbox the current and most up-to-date one?

As soon as I have finished up the DAS interface code, commented everything, and removed leftover scraps, I'll do another check-in of the working version. The sandbox is still the old stuff from when I first got reading and writing graphs working.

Good, I'll wait for your updates to take a further look. Please let me know when you commit these updates.

- Your sandbox looks like it still uses a lot of EMF dependencies. Do you have any plans to base the LDAP DAS implementation on the current Tuscany SDO (SDO 2.1 specification implementation)?

I think we might want to shoot for having it compliant with the SDO 3.0 spec. 2.1 seems to be missing some key API features, such as something equivalent to getEIDAttributes(), which returns the EAttribute where id is true. Another thing that both EMF and SDO 2.1 are missing is getEAllCrossReferences (the opposite of getEAllContainmentReferences(); EMF does provide an implementation-specific way of doing this, although it's not part of the API). Having these in the SDO spec would give us a head start on having a consistent programming model across DASs.

I have no idea when SDO 3.0 will be available, and we are not sure that release will have all the APIs you think are missing, right? Also, I think Tuscany SDO is trying to move off EMF dependencies, but I guess the SDO team would be able to give a better explanation of the direction here.
One thing I always had in mind with multiple DAS implementations was the scenario where your data is stored in one type of data source (e.g. your person records come from an HR table), but later that info starts coming from LDAP, and you would only have to change your DAS config files to update connection information and command syntax. With the RDB DAS producing SDO 1.x (I guess this is what EMF supports) and the LDAP DAS producing SDO 2.1, can we still accomplish this? We should also involve the SDO folks here and get their input on this subject, but having the RDB DAS and LDAP DAS return incompatible SDOs would be an issue to really take into consideration when making the final decision here. BTW, the gaps between EMF and SDO 2.1, can they be worked around in the LDAP DAS implementation?

- The beauty of a heterogeneous DAS is to have a consistent programming model and a single set of APIs to access data from heterogeneous data sources, such as RDB and LDAP. It looks like the current implementation of the LDAP DAS is using a different set of APIs for its implementation, thus introducing a new programming model and a new set of APIs. What are your plans here?

I totally agree with you. The LDAP DAS should work with the standard SDO API asap. So we need to identify the gaps between the EMF API parts used currently and the SDO API and bridge them. Meanwhile, those users needing a common interface for both the LDAP DAS and RDB DAS would have to use the EMF SDO implementation.

This is not really about SDO, but about making use of DAS interfaces. I'd expect that the LDAP DASImpl [1] would implement a variation of the Tuscany DAS interface [2], and all other Tuscany DAS implementations would do the same (e.g. RDB, XQuery, or any others that come in the future). A similar interface is being used for the DAS C++. This allows for a common programming model and API at the DAS level. Any plans for this?
[1] https://svn.apache.org/repos/asf/directory/sandbox/oersoy/das.ldap.parent/das.ldap/src/main/java/org/apache/tuscany/das/ldap/impl/DASImpl.java
[2] https://svn.apache.org/repos/asf/incubator/tuscany/sandbox/lresende/das/api/src/main/java/org/apache/tuscany/das/DAS.java

-- Luciano Resende Apache Tuscany Committer http://people.apache.org/~lresende http://lresende.blogspot.com/
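The common-interface idea Luciano describes could be sketched roughly as below. All names here are illustrative, not the actual org.apache.tuscany.das.DAS API, and the LDAP implementation is stubbed with an in-memory map rather than real DIT operations:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical shared DAS interface; method names are made up for
// illustration and do not match the real Tuscany DAS API.
interface DataAccessService {
    Object read(String key);
    void write(String key, Object graph);
}

// Minimal in-memory stand-in for an LDAP-backed implementation; a real
// one would translate these calls into operations against the DIT.
class LdapDasSketch implements DataAccessService {
    private final Map<String, Object> store = new HashMap<>();
    public Object read(String key) { return store.get(key); }
    public void write(String key, Object graph) { store.put(key, graph); }
}

public class DasInterfaceDemo {
    public static void main(String[] args) {
        // Client code sees only the shared interface, so swapping an
        // RDB DAS for an LDAP DAS becomes a configuration change.
        DataAccessService das = new LdapDasSketch();
        das.write("uid=ole", "person-graph");
        System.out.println(das.read("uid=ole")); // prints "person-graph"
    }
}
```

This is the point of the common programming model: only the binding from interface to implementation changes between data sources, not the client code.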